I0415 23:36:25.051047 7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0415 23:36:25.051278 7 e2e.go:124] Starting e2e run "45e19f9f-407c-45a0-8758-e966c88e9b23" on Ginkgo node 1
{"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1586993783 - Will randomize all specs
Will run 275 of 4992 specs
Apr 15 23:36:25.102: INFO: >>> kubeConfig: /root/.kube/config
Apr 15 23:36:25.104: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 15 23:36:25.125: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 15 23:36:25.156: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 15 23:36:25.156: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 15 23:36:25.156: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 15 23:36:25.163: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 15 23:36:25.163: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 15 23:36:25.163: INFO: e2e test version: v1.19.0-alpha.0.779+84dc7046797aad
Apr 15 23:36:25.164: INFO: kube-apiserver version: v1.17.0
Apr 15 23:36:25.165: INFO: >>> kubeConfig: /root/.kube/config
Apr 15 23:36:25.170: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 15 23:36:25.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
Apr 15 23:36:25.241: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-4d6e189b-02ef-4358-953a-27ee01506779
STEP: Creating a pod to test consume configMaps
Apr 15 23:36:25.252: INFO: Waiting up to 5m0s for pod "pod-configmaps-40d77315-0e21-442a-84c3-5b4322d6fa09" in namespace "configmap-9886" to be "Succeeded or Failed"
Apr 15 23:36:25.256: INFO: Pod "pod-configmaps-40d77315-0e21-442a-84c3-5b4322d6fa09": Phase="Pending", Reason="", readiness=false. Elapsed: 4.365681ms
Apr 15 23:36:27.260: INFO: Pod "pod-configmaps-40d77315-0e21-442a-84c3-5b4322d6fa09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008134409s
Apr 15 23:36:29.264: INFO: Pod "pod-configmaps-40d77315-0e21-442a-84c3-5b4322d6fa09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012545153s
STEP: Saw pod success
Apr 15 23:36:29.264: INFO: Pod "pod-configmaps-40d77315-0e21-442a-84c3-5b4322d6fa09" satisfied condition "Succeeded or Failed"
Apr 15 23:36:29.268: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-40d77315-0e21-442a-84c3-5b4322d6fa09 container configmap-volume-test:
STEP: delete the pod
Apr 15 23:36:29.312: INFO: Waiting for pod pod-configmaps-40d77315-0e21-442a-84c3-5b4322d6fa09 to disappear
Apr 15 23:36:29.337: INFO: Pod pod-configmaps-40d77315-0e21-442a-84c3-5b4322d6fa09 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 15 23:36:29.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9886" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":1,"skipped":72,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 15 23:36:29.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 15 23:36:30.855: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 15 23:36:33.104: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722590590, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722590590, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722590590, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722590590, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 15 23:36:35.223: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722590590, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722590590, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722590590, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722590590, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 15 23:36:38.132: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 15 23:36:38.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4454" for this suite.
STEP: Destroying namespace "webhook-4454-markers" for this suite.
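For orientation, the validating webhooks this test lists and then deletes as a collection are registered through ValidatingWebhookConfiguration objects pointing at the deployed webhook service. A minimal sketch of such a registration follows; the configuration name, webhook name, rule, and path are illustrative assumptions, not values recovered from this run (only the `webhook-4454` namespace and `e2e-test-webhook` service name appear in the log):

```yaml
# Hypothetical sketch of a webhook registration like the ones this test creates.
# All field values except the namespace and service name are assumptions.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: e2e-test-webhook-config        # illustrative name
webhooks:
- name: deny-configmap.example.com     # illustrative name
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: webhook-4454
      name: e2e-test-webhook
      path: /configmaps                # illustrative path
    # caBundle: <base64-encoded CA certificate for the webhook server>
  sideEffects: None
  admissionReviewVersions: ["v1"]
  failurePolicy: Fail
```

With such a registration in place, a CREATE of a matching ConfigMap is sent to the webhook service for admission, and a denial makes the API server reject the request; that is why the test's non-compliant ConfigMap can only be created after the collection of validation webhooks is deleted.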
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:9.313 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
listing validating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":2,"skipped":89,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 15 23:36:38.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-2454
[It] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
Apr 15 23:36:38.743: INFO: Found 0 stateful pods, waiting for 3
Apr 15 23:36:48.787: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 15 23:36:48.787: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 15 23:36:48.787: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Apr 15 23:36:58.748: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 15 23:36:58.748: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 15 23:36:58.748: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Apr 15 23:36:58.758: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2454 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Apr 15 23:37:01.161: INFO: stderr: "I0415 23:37:01.034145 31 log.go:172] (0xc000876a50) (0xc0006357c0) Create stream\nI0415 23:37:01.034220 31 log.go:172] (0xc000876a50) (0xc0006357c0) Stream added, broadcasting: 1\nI0415 23:37:01.037618 31 log.go:172] (0xc000876a50) Reply frame received for 1\nI0415 23:37:01.037662 31 log.go:172] (0xc000876a50) (0xc000573720) Create stream\nI0415 23:37:01.037681 31 log.go:172] (0xc000876a50) (0xc000573720) Stream added, broadcasting: 3\nI0415 23:37:01.038797 31 log.go:172] (0xc000876a50) Reply frame received for 3\nI0415 23:37:01.038836 31 log.go:172] (0xc000876a50) (0xc00031ab40) Create stream\nI0415 23:37:01.038854 31 log.go:172] (0xc000876a50) (0xc00031ab40) Stream added, broadcasting: 5\nI0415 23:37:01.039937 31 log.go:172] (0xc000876a50) Reply frame received for 5\nI0415 23:37:01.129723 31 log.go:172] (0xc000876a50) Data frame received for 5\nI0415 23:37:01.129752 31 log.go:172] (0xc00031ab40) (5) Data frame handling\nI0415 23:37:01.129785 31 log.go:172] (0xc00031ab40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0415 23:37:01.152635 31 log.go:172] (0xc000876a50) Data frame received for 3\nI0415 23:37:01.152659 31 log.go:172] (0xc000573720) (3) Data frame handling\nI0415 23:37:01.152674 31 log.go:172] (0xc000573720) (3) Data frame sent\nI0415 23:37:01.152681 31 log.go:172] (0xc000876a50) Data frame received for 3\nI0415 23:37:01.152688 31 log.go:172] (0xc000573720) (3) Data frame handling\nI0415 23:37:01.153220 31 log.go:172] (0xc000876a50) Data frame received for 5\nI0415 23:37:01.153246 31 log.go:172] (0xc00031ab40) (5) Data frame handling\nI0415 23:37:01.156037 31 log.go:172] (0xc000876a50) Data frame received for 1\nI0415 23:37:01.156057 31 log.go:172] (0xc0006357c0) (1) Data frame handling\nI0415 23:37:01.156075 31 log.go:172] (0xc0006357c0) (1) Data frame sent\nI0415 23:37:01.156085 31 log.go:172] (0xc000876a50) (0xc0006357c0) Stream removed, broadcasting: 1\nI0415 23:37:01.156188 31 log.go:172] (0xc000876a50) Go away received\nI0415 23:37:01.156400 31 log.go:172] (0xc000876a50) (0xc0006357c0) Stream removed, broadcasting: 1\nI0415 23:37:01.156419 31 log.go:172] (0xc000876a50) (0xc000573720) Stream removed, broadcasting: 3\nI0415 23:37:01.156432 31 log.go:172] (0xc000876a50) (0xc00031ab40) Stream removed, broadcasting: 5\n"
Apr 15 23:37:01.161: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Apr 15 23:37:01.161: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Apr 15 23:37:11.193: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Apr 15 23:37:21.234: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2454 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Apr 15 23:37:21.457: INFO: stderr: "I0415 23:37:21.359623 65 log.go:172] (0xc0008960b0) (0xc0006cb360) Create stream\nI0415 23:37:21.359678 65 log.go:172] (0xc0008960b0) (0xc0006cb360) Stream added, broadcasting: 1\nI0415 23:37:21.362350 65 log.go:172] (0xc0008960b0) Reply frame received for 1\nI0415 23:37:21.362398 65 log.go:172] (0xc0008960b0) (0xc000a94000) Create stream\nI0415 23:37:21.362411 65 log.go:172] (0xc0008960b0) (0xc000a94000) Stream added, broadcasting: 3\nI0415 23:37:21.363533 65 log.go:172] (0xc0008960b0) Reply frame received for 3\nI0415 23:37:21.363594 65 log.go:172] (0xc0008960b0) (0xc0006cb540) Create stream\nI0415 23:37:21.363621 65 log.go:172] (0xc0008960b0) (0xc0006cb540) Stream added, broadcasting: 5\nI0415 23:37:21.364760 65 log.go:172] (0xc0008960b0) Reply frame received for 5\nI0415 23:37:21.450010 65 log.go:172] (0xc0008960b0) Data frame received for 5\nI0415 23:37:21.450061 65 log.go:172] (0xc0006cb540) (5) Data frame handling\nI0415 23:37:21.450077 65 log.go:172] (0xc0006cb540) (5) Data frame sent\nI0415 23:37:21.450088 65 log.go:172] (0xc0008960b0) Data frame received for 5\nI0415 23:37:21.450099 65 log.go:172] (0xc0006cb540) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0415 23:37:21.450127 65 log.go:172] (0xc0008960b0) Data frame received for 3\nI0415 23:37:21.450142 65 log.go:172] (0xc000a94000) (3) Data frame handling\nI0415 23:37:21.450172 65 log.go:172] (0xc000a94000) (3) Data frame sent\nI0415 23:37:21.450189 65 log.go:172] (0xc0008960b0) Data frame received for 3\nI0415 23:37:21.450200 65 log.go:172] (0xc000a94000) (3) Data frame handling\nI0415 23:37:21.451341 65 log.go:172] (0xc0008960b0) Data frame received for 1\nI0415 23:37:21.451359 65 log.go:172] (0xc0006cb360) (1) Data frame handling\nI0415 23:37:21.451366 65 log.go:172] (0xc0006cb360) (1) Data frame sent\nI0415 23:37:21.451529 65 log.go:172] (0xc0008960b0) (0xc0006cb360) Stream removed, broadcasting: 1\nI0415 23:37:21.451917 65 log.go:172] (0xc0008960b0) (0xc0006cb360) Stream removed, broadcasting: 1\nI0415 23:37:21.451943 65 log.go:172] (0xc0008960b0) (0xc000a94000) Stream removed, broadcasting: 3\nI0415 23:37:21.451956 65 log.go:172] (0xc0008960b0) (0xc0006cb540) Stream removed, broadcasting: 5\n"
Apr 15 23:37:21.457: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Apr 15 23:37:21.457: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Apr 15 23:37:31.485: INFO: Waiting for StatefulSet statefulset-2454/ss2 to complete update
Apr 15 23:37:31.485: INFO: Waiting for Pod statefulset-2454/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Apr 15 23:37:31.485: INFO: Waiting for Pod statefulset-2454/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Apr 15 23:37:31.485: INFO: Waiting for Pod statefulset-2454/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Apr 15 23:37:41.493: INFO: Waiting for StatefulSet statefulset-2454/ss2 to complete update
Apr 15 23:37:41.493: INFO: Waiting for Pod statefulset-2454/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Apr 15 23:37:41.493: INFO: Waiting for Pod statefulset-2454/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Apr 15 23:37:51.494: INFO: Waiting for StatefulSet statefulset-2454/ss2 to complete update
Apr 15 23:37:51.494: INFO: Waiting for Pod statefulset-2454/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Rolling back to a previous revision
Apr 15 23:38:01.493: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2454 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Apr 15 23:38:01.765: INFO: stderr: "I0415 23:38:01.631462 87 log.go:172] (0xc0009e86e0) (0xc00069f2c0) Create stream\nI0415 23:38:01.631523 87 log.go:172] (0xc0009e86e0) (0xc00069f2c0) Stream added, broadcasting: 1\nI0415 23:38:01.633812 87 log.go:172] (0xc0009e86e0) Reply frame received for 1\nI0415 23:38:01.633841 87 log.go:172] (0xc0009e86e0) (0xc00069f4a0) Create stream\nI0415 23:38:01.633851 87 log.go:172] (0xc0009e86e0) (0xc00069f4a0) Stream added, broadcasting: 3\nI0415 23:38:01.634848 87 log.go:172] (0xc0009e86e0) Reply frame received for 3\nI0415 23:38:01.634895 87 log.go:172] (0xc0009e86e0) (0xc0008e2000) Create stream\nI0415 23:38:01.634910 87 log.go:172] (0xc0009e86e0) (0xc0008e2000) Stream added, broadcasting: 5\nI0415 23:38:01.635924 87 log.go:172] (0xc0009e86e0) Reply frame received for 5\nI0415 23:38:01.728364 87 log.go:172] (0xc0009e86e0) Data frame received for 5\nI0415 23:38:01.728407 87 log.go:172] (0xc0008e2000) (5) Data frame handling\nI0415 23:38:01.728438 87 log.go:172] (0xc0008e2000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0415 23:38:01.756993 87 log.go:172] (0xc0009e86e0) Data frame received for 3\nI0415 23:38:01.757032 87 log.go:172] (0xc00069f4a0) (3) Data frame handling\nI0415 23:38:01.757088 87 log.go:172] (0xc00069f4a0) (3) Data frame sent\nI0415 23:38:01.757264 87 log.go:172] (0xc0009e86e0) Data frame received for 3\nI0415 23:38:01.757296 87 log.go:172] (0xc00069f4a0) (3) Data frame handling\nI0415 23:38:01.757551 87 log.go:172] (0xc0009e86e0) Data frame received for 5\nI0415 23:38:01.757574 87 log.go:172] (0xc0008e2000) (5) Data frame handling\nI0415 23:38:01.759804 87 log.go:172] (0xc0009e86e0) Data frame received for 1\nI0415 23:38:01.759836 87 log.go:172] (0xc00069f2c0) (1) Data frame handling\nI0415 23:38:01.759858 87 log.go:172] (0xc00069f2c0) (1) Data frame sent\nI0415 23:38:01.759881 87 log.go:172] (0xc0009e86e0) (0xc00069f2c0) Stream removed, broadcasting: 1\nI0415 23:38:01.759979 87 log.go:172] (0xc0009e86e0) Go away received\nI0415 23:38:01.760427 87 log.go:172] (0xc0009e86e0) (0xc00069f2c0) Stream removed, broadcasting: 1\nI0415 23:38:01.760450 87 log.go:172] (0xc0009e86e0) (0xc00069f4a0) Stream removed, broadcasting: 3\nI0415 23:38:01.760470 87 log.go:172] (0xc0009e86e0) (0xc0008e2000) Stream removed, broadcasting: 5\n"
Apr 15 23:38:01.766: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Apr 15 23:38:01.766: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Apr 15 23:38:11.864: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Apr 15 23:38:21.994: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2454 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Apr 15 23:38:22.229: INFO: stderr: "I0415 23:38:22.122907 107 log.go:172] (0xc000956420) (0xc00044e0a0) Create stream\nI0415 23:38:22.122961 107 log.go:172] (0xc000956420) (0xc00044e0a0) Stream added, broadcasting: 1\nI0415 23:38:22.125793 107 log.go:172] (0xc000956420) Reply frame received for 1\nI0415 23:38:22.125827 107 log.go:172] (0xc000956420) (0xc00044e140) Create stream\nI0415 23:38:22.125838 107 log.go:172] (0xc000956420) (0xc00044e140) Stream added, broadcasting: 3\nI0415 23:38:22.126877 107 log.go:172] (0xc000956420) Reply frame received for 3\nI0415 23:38:22.126931 107 log.go:172] (0xc000956420) (0xc0007ad360) Create stream\nI0415 23:38:22.126944 107 log.go:172] (0xc000956420) (0xc0007ad360) Stream added, broadcasting: 5\nI0415 23:38:22.127858 107 log.go:172] (0xc000956420) Reply frame received for 5\nI0415 23:38:22.222065 107 log.go:172] (0xc000956420) Data frame received for 5\nI0415 23:38:22.222113 107 log.go:172] (0xc0007ad360) (5) Data frame handling\nI0415 23:38:22.222137 107 log.go:172] (0xc0007ad360) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0415 23:38:22.222158 107 log.go:172] (0xc000956420) Data frame received for 5\nI0415 23:38:22.222208 107 log.go:172] (0xc0007ad360) (5) Data frame handling\nI0415 23:38:22.222249 107 log.go:172] (0xc000956420) Data frame received for 3\nI0415 23:38:22.222261 107 log.go:172] (0xc00044e140) (3) Data frame handling\nI0415 23:38:22.222273 107 log.go:172] (0xc00044e140) (3) Data frame sent\nI0415 23:38:22.222284 107 log.go:172] (0xc000956420) Data frame received for 3\nI0415 23:38:22.222299 107 log.go:172] (0xc00044e140) (3) Data frame handling\nI0415 23:38:22.223668 107 log.go:172] (0xc000956420) Data frame received for 1\nI0415 23:38:22.223752 107 log.go:172] (0xc00044e0a0) (1) Data frame handling\nI0415 23:38:22.223822 107 log.go:172] (0xc00044e0a0) (1) Data frame sent\nI0415 23:38:22.223866 107 log.go:172] (0xc000956420) (0xc00044e0a0) Stream removed, broadcasting: 1\nI0415 23:38:22.223891 107 log.go:172] (0xc000956420) Go away received\nI0415 23:38:22.224369 107 log.go:172] (0xc000956420) (0xc00044e0a0) Stream removed, broadcasting: 1\nI0415 23:38:22.224395 107 log.go:172] (0xc000956420) (0xc00044e140) Stream removed, broadcasting: 3\nI0415 23:38:22.224412 107 log.go:172] (0xc000956420) (0xc0007ad360) Stream removed, broadcasting: 5\n"
Apr 15 23:38:22.229: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Apr 15 23:38:22.229: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Apr 15 23:38:32.251: INFO: Waiting for StatefulSet statefulset-2454/ss2 to complete update
Apr 15 23:38:32.252: INFO: Waiting for Pod statefulset-2454/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Apr 15 23:38:32.252: INFO: Waiting for Pod statefulset-2454/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Apr 15 23:38:32.252: INFO: Waiting for Pod statefulset-2454/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Apr 15 23:38:42.259: INFO: Waiting for StatefulSet statefulset-2454/ss2 to complete update
Apr 15 23:38:42.259: INFO: Waiting for Pod statefulset-2454/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Apr 15 23:38:42.259: INFO: Waiting for Pod statefulset-2454/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Apr 15 23:38:52.260: INFO: Waiting for StatefulSet statefulset-2454/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Apr 15 23:39:02.260: INFO: Deleting all statefulset in ns statefulset-2454
Apr 15 23:39:02.263: INFO: Scaling statefulset ss2 to 0
Apr 15 23:39:22.284: INFO: Waiting for statefulset status.replicas updated to 0
Apr 15 23:39:22.286: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 15 23:39:22.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2454" for this suite.
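The "reverse ordinal order" behavior above is the StatefulSet RollingUpdate strategy: pods are replaced one at a time, highest ordinal first (ss2-2, then ss2-1, then ss2-0), and a rollback is simply another template change that produces the controller-revision flips (ss2-65c7964b94 and ss2-84f9d6bf57) visible in the log. A sketch of a StatefulSet shaped like the test's ss2 follows; the labels, container name, and spec details beyond what the log shows (namespace, service name, replica count, images) are illustrative assumptions:

```yaml
# Illustrative sketch only, not the test's exact manifest.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
  namespace: statefulset-2454
spec:
  replicas: 3
  serviceName: test            # the headless service the test creates first
  selector:
    matchLabels:
      app: ss2                 # assumed label
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: webserver        # assumed container name
        image: docker.io/library/httpd:2.4.38-alpine  # updated to 2.4.39-alpine mid-test
  updateStrategy:
    type: RollingUpdate        # replaces pods one at a time, highest ordinal first
```

Changing `spec.template` (here, the image tag) creates a new ControllerRevision; reverting the template rolls the pods back through the same ordered, one-at-a-time replacement.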
• [SLOW TEST:163.659 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":3,"skipped":129,"failed":0}
SSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 15 23:39:22.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 15 23:39:22.466: INFO: Waiting up to 5m0s for pod "downwardapi-volume-60a83614-b9d3-48e8-b8e7-2bf9a23bd4cc" in namespace "projected-5383" to be "Succeeded or Failed"
Apr 15 23:39:22.482: INFO: Pod "downwardapi-volume-60a83614-b9d3-48e8-b8e7-2bf9a23bd4cc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.011068ms
Apr 15 23:39:24.489: INFO: Pod "downwardapi-volume-60a83614-b9d3-48e8-b8e7-2bf9a23bd4cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023109986s
Apr 15 23:39:26.922: INFO: Pod "downwardapi-volume-60a83614-b9d3-48e8-b8e7-2bf9a23bd4cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.455729145s
STEP: Saw pod success
Apr 15 23:39:26.922: INFO: Pod "downwardapi-volume-60a83614-b9d3-48e8-b8e7-2bf9a23bd4cc" satisfied condition "Succeeded or Failed"
Apr 15 23:39:26.944: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-60a83614-b9d3-48e8-b8e7-2bf9a23bd4cc container client-container:
STEP: delete the pod
Apr 15 23:39:27.137: INFO: Waiting for pod downwardapi-volume-60a83614-b9d3-48e8-b8e7-2bf9a23bd4cc to disappear
Apr 15 23:39:27.158: INFO: Pod downwardapi-volume-60a83614-b9d3-48e8-b8e7-2bf9a23bd4cc no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 15 23:39:27.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5383" for this suite.
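The downward API volume this test consumes surfaces the container's own cpu limit as a file inside the pod. A minimal sketch of such a pod follows; the pod name, image, mount path, and the 500m limit are illustrative assumptions, while the projected downwardAPI/resourceFieldRef structure is the standard Kubernetes API shape:

```yaml
# Illustrative sketch: a pod that reads its own cpu limit from a
# projected downwardAPI volume. Names and the limit value are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
```

The container prints the limit and exits, which is why the framework above polls the pod until it reaches the "Succeeded or Failed" condition rather than waiting for readiness.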
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":4,"skipped":133,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 15 23:39:27.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating pod
Apr 15 23:39:31.249: INFO: Pod pod-hostip-12722e46-7781-4195-8404-381ad7828c64 has hostIP: 172.17.0.12
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 15 23:39:31.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-20" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":5,"skipped":140,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:39:31.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-afc4a311-2cda-44be-8170-41aa74c171ec STEP: Creating a pod to test consume configMaps Apr 15 23:39:31.371: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-17929312-6b66-42b8-a75d-2079e5e74c95" in namespace "projected-7366" to be "Succeeded or Failed" Apr 15 23:39:31.380: INFO: Pod "pod-projected-configmaps-17929312-6b66-42b8-a75d-2079e5e74c95": Phase="Pending", Reason="", readiness=false. Elapsed: 8.470492ms Apr 15 23:39:33.394: INFO: Pod "pod-projected-configmaps-17929312-6b66-42b8-a75d-2079e5e74c95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022411231s Apr 15 23:39:35.398: INFO: Pod "pod-projected-configmaps-17929312-6b66-42b8-a75d-2079e5e74c95": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026156448s STEP: Saw pod success Apr 15 23:39:35.398: INFO: Pod "pod-projected-configmaps-17929312-6b66-42b8-a75d-2079e5e74c95" satisfied condition "Succeeded or Failed" Apr 15 23:39:35.400: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-17929312-6b66-42b8-a75d-2079e5e74c95 container projected-configmap-volume-test: STEP: delete the pod Apr 15 23:39:35.440: INFO: Waiting for pod pod-projected-configmaps-17929312-6b66-42b8-a75d-2079e5e74c95 to disappear Apr 15 23:39:35.464: INFO: Pod pod-projected-configmaps-17929312-6b66-42b8-a75d-2079e5e74c95 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:39:35.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7366" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":6,"skipped":141,"failed":0} SSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:39:35.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support remote command execution over websockets [NodeConformance] [Conformance] 
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 15 23:39:35.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 15 23:39:39.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5968" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":7,"skipped":144,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 15 23:39:39.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 15 23:39:39.842: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"06f43e69-f746-4c47-a18f-7d988838cc5e", Controller:(*bool)(0xc0024aa48a), BlockOwnerDeletion:(*bool)(0xc0024aa48b)}}
Apr 15 23:39:39.866: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"abf1c6da-89be-476a-ad66-a35a905df161", Controller:(*bool)(0xc00270c59a), BlockOwnerDeletion:(*bool)(0xc00270c59b)}}
Apr 15 23:39:39.895: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"921d3660-6d94-4b5d-b939-9fc41c02b815", Controller:(*bool)(0xc0024aa662), BlockOwnerDeletion:(*bool)(0xc0024aa663)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 15 23:39:44.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1503" for this suite.
• [SLOW TEST:5.187 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":8,"skipped":157,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 15 23:39:44.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
Apr 15 23:39:44.994: INFO: Waiting up to 5m0s for pod "pod-512ffa8e-525f-4ce3-955c-0bd27b5268f2" in namespace
"emptydir-6656" to be "Succeeded or Failed" Apr 15 23:39:45.016: INFO: Pod "pod-512ffa8e-525f-4ce3-955c-0bd27b5268f2": Phase="Pending", Reason="", readiness=false. Elapsed: 22.045154ms Apr 15 23:39:47.021: INFO: Pod "pod-512ffa8e-525f-4ce3-955c-0bd27b5268f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027262791s Apr 15 23:39:49.025: INFO: Pod "pod-512ffa8e-525f-4ce3-955c-0bd27b5268f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031013887s STEP: Saw pod success Apr 15 23:39:49.025: INFO: Pod "pod-512ffa8e-525f-4ce3-955c-0bd27b5268f2" satisfied condition "Succeeded or Failed" Apr 15 23:39:49.028: INFO: Trying to get logs from node latest-worker pod pod-512ffa8e-525f-4ce3-955c-0bd27b5268f2 container test-container: STEP: delete the pod Apr 15 23:39:49.094: INFO: Waiting for pod pod-512ffa8e-525f-4ce3-955c-0bd27b5268f2 to disappear Apr 15 23:39:49.208: INFO: Pod pod-512ffa8e-525f-4ce3-955c-0bd27b5268f2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:39:49.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6656" for this suite. 
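The (root,0777,tmpfs) EmptyDir case logged above can be hard to picture from the test name alone. A pod spec along the following lines exercises the same behavior; this is a hedged sketch reconstructed from the test name, not the suite's actual manifest, and the pod name, image, and command are all illustrative:

```yaml
# Sketch only: a tmpfs-backed emptyDir written as root with 0777 file mode.
# Name, image, and command are illustrative, not taken from the e2e suite.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0777-tmpfs-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                 # the suite uses its own mounttest image
    command:
    - sh
    - -c
    - "touch /test-volume/f && chmod 0777 /test-volume/f && stat -c '%a' /test-volume/f"
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory               # tmpfs backing; Linux-only, hence [LinuxOnly]
```

Because the container runs to completion and `restartPolicy` is `Never`, the pod ends in `Succeeded`, which matches the `Phase="Succeeded"` polling seen in the log above.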
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":9,"skipped":177,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:39:49.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:39:54.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9317" for this suite. 
• [SLOW TEST:5.076 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":10,"skipped":186,"failed":0} S ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:39:54.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 15 23:39:54.342: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7878530a-bc8e-49fd-8946-58a71b1bf245" in namespace "downward-api-9129" to be "Succeeded or Failed" Apr 15 23:39:54.369: INFO: Pod "downwardapi-volume-7878530a-bc8e-49fd-8946-58a71b1bf245": Phase="Pending", Reason="", readiness=false. Elapsed: 26.744943ms Apr 15 23:39:56.373: INFO: Pod "downwardapi-volume-7878530a-bc8e-49fd-8946-58a71b1bf245": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.030579974s Apr 15 23:39:58.377: INFO: Pod "downwardapi-volume-7878530a-bc8e-49fd-8946-58a71b1bf245": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034530148s STEP: Saw pod success Apr 15 23:39:58.377: INFO: Pod "downwardapi-volume-7878530a-bc8e-49fd-8946-58a71b1bf245" satisfied condition "Succeeded or Failed" Apr 15 23:39:58.380: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-7878530a-bc8e-49fd-8946-58a71b1bf245 container client-container: STEP: delete the pod Apr 15 23:39:58.402: INFO: Waiting for pod downwardapi-volume-7878530a-bc8e-49fd-8946-58a71b1bf245 to disappear Apr 15 23:39:58.423: INFO: Pod downwardapi-volume-7878530a-bc8e-49fd-8946-58a71b1bf245 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:39:58.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9129" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":11,"skipped":187,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:39:58.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:40:09.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-295" for this suite. • [SLOW TEST:11.161 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":275,"completed":12,"skipped":205,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:40:09.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 15 23:40:13.709: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:40:13.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-33" for this suite. 
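The termination-message check above ("Expected: &{DONE} to match Container's Termination Message: DONE") hinges on the container spec's `terminationMessagePolicy` field. A minimal sketch, assuming an illustrative name and image:

```yaml
# Sketch only: the container exits non-zero without writing to
# /dev/termination-log, so with FallbackToLogsOnError the tail of its
# log ("DONE") becomes the termination message.
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: term
    image: busybox                 # illustrative image
    command: ["sh", "-c", "echo DONE; exit 1"]
    terminationMessagePolicy: FallbackToLogsOnError
```

With the default policy (`File`), the message would be empty here, since nothing is written to the termination-log path.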
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":13,"skipped":214,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:40:13.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 15 23:40:14.572: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 15 23:40:16.583: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722590814, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722590814, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722590814, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722590814, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 15 23:40:19.609: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Apr 15 23:40:19.630: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:40:19.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2794" for this suite. STEP: Destroying namespace "webhook-2794-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.978 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":14,"skipped":251,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:40:19.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-a9263e32-cb4b-4e22-bf32-ea7c5cb4a800 STEP: Creating a pod to test consume configMaps Apr 15 23:40:19.809: INFO: Waiting up to 5m0s for pod "pod-configmaps-a737b8d3-cb15-4790-905a-6c383d09cfb5" in namespace "configmap-2520" to be "Succeeded or Failed" Apr 15 23:40:19.827: INFO: Pod "pod-configmaps-a737b8d3-cb15-4790-905a-6c383d09cfb5": Phase="Pending", Reason="", 
readiness=false. Elapsed: 17.819982ms Apr 15 23:40:21.830: INFO: Pod "pod-configmaps-a737b8d3-cb15-4790-905a-6c383d09cfb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021385618s Apr 15 23:40:23.835: INFO: Pod "pod-configmaps-a737b8d3-cb15-4790-905a-6c383d09cfb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025578123s STEP: Saw pod success Apr 15 23:40:23.835: INFO: Pod "pod-configmaps-a737b8d3-cb15-4790-905a-6c383d09cfb5" satisfied condition "Succeeded or Failed" Apr 15 23:40:23.838: INFO: Trying to get logs from node latest-worker pod pod-configmaps-a737b8d3-cb15-4790-905a-6c383d09cfb5 container configmap-volume-test: STEP: delete the pod Apr 15 23:40:23.863: INFO: Waiting for pod pod-configmaps-a737b8d3-cb15-4790-905a-6c383d09cfb5 to disappear Apr 15 23:40:23.874: INFO: Pod pod-configmaps-a737b8d3-cb15-4790-905a-6c383d09cfb5 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:40:23.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2520" for this suite. 
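The defaultMode variant above maps to the `defaultMode` field of the configMap volume source. A hedged sketch under assumed names (the mode value, pod name, image, and ConfigMap name are illustrative; the suite's actual manifest may differ):

```yaml
# Sketch only: every key projected from the ConfigMap is created with
# file mode 0400 unless an items[].mode entry overrides it.
apiVersion: v1
kind: Pod
metadata:
  name: configmap-defaultmode-demo  # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                  # illustrative image
    command: ["sh", "-c", "stat -c '%a' /etc/configmap-volume/*"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: my-configmap            # hypothetical ConfigMap name
      defaultMode: 0400             # octal in YAML; JSON manifests use decimal (256)
```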
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":15,"skipped":287,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:40:23.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Apr 15 23:40:23.964: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 23:40:23.969: INFO: Number of nodes with available pods: 0
Apr 15 23:40:23.969: INFO: Node latest-worker is running more than one daemon pod
Apr 15 23:40:25.003: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 23:40:25.006: INFO: Number of nodes with available pods: 0
Apr 15 23:40:25.006: INFO: Node latest-worker is running more than one daemon pod
Apr 15 23:40:26.060: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 23:40:26.071: INFO: Number of nodes with available pods: 0
Apr 15 23:40:26.071: INFO: Node latest-worker is running more than one daemon pod
Apr 15 23:40:26.974: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 23:40:26.978: INFO: Number of nodes with available pods: 0
Apr 15 23:40:26.978: INFO: Node latest-worker is running more than one daemon pod
Apr 15 23:40:28.390: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 23:40:28.412: INFO: Number of nodes with available pods: 0
Apr 15 23:40:28.412: INFO: Node latest-worker is running more than one daemon pod
Apr 15 23:40:28.988: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 23:40:28.991: INFO: Number of nodes with available pods: 1
Apr 15 23:40:28.991: INFO: Node latest-worker is running more than one daemon pod
Apr 15 23:40:29.978: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 23:40:29.981: INFO: Number of nodes with available pods: 2
Apr 15 23:40:29.981: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Apr 15 23:40:29.993: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 23:40:29.999: INFO: Number of nodes with available pods: 1
Apr 15 23:40:29.999: INFO: Node latest-worker2 is running more than one daemon pod
Apr 15 23:40:31.004: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 23:40:31.007: INFO: Number of nodes with available pods: 1
Apr 15 23:40:31.007: INFO: Node latest-worker2 is running more than one daemon pod
Apr 15 23:40:32.005: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 23:40:32.009: INFO: Number of nodes with available pods: 1
Apr 15 23:40:32.009: INFO: Node latest-worker2 is running more than one daemon pod
Apr 15 23:40:33.004: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 15 23:40:33.007: INFO: Number of nodes with available pods: 2
Apr 15 23:40:33.007: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2331, will wait for the garbage collector to delete the pods Apr 15 23:40:33.071: INFO: Deleting DaemonSet.extensions daemon-set took: 6.255436ms Apr 15 23:40:33.372: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.231882ms Apr 15 23:40:43.075: INFO: Number of nodes with available pods: 0 Apr 15 23:40:43.075: INFO: Number of running nodes: 0, number of available pods: 0 Apr 15 23:40:43.082: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2331/daemonsets","resourceVersion":"8390790"},"items":null} Apr 15 23:40:43.086: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2331/pods","resourceVersion":"8390790"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:40:43.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2331" for this suite. 
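Throughout the DaemonSet test above, the controller skips `latest-control-plane` because of its `node-role.kubernetes.io/master:NoSchedule` taint. A toleration like the following would let DaemonSet pods land on that node as well; this is a sketch, and the DaemonSet name, labels, and image are illustrative rather than taken from the suite:

```yaml
# Sketch only: tolerating the master taint so the DaemonSet also covers
# control-plane nodes. Without the toleration, pods skip tainted nodes,
# as the "can't tolerate node" messages in the log show.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set-demo        # hypothetical name
spec:
  selector:
    matchLabels:
      app: daemon-set-demo
  template:
    metadata:
      labels:
        app: daemon-set-demo
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule     # no value given, matching the taint's empty value
      containers:
      - name: app
        image: busybox         # illustrative image
        command: ["sleep", "3600"]
```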
• [SLOW TEST:19.223 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":16,"skipped":303,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:40:43.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 15 23:40:43.174: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config version' Apr 15 23:40:43.323: INFO: stderr: "" Apr 15 23:40:43.323: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.0.779+84dc7046797aad\", GitCommit:\"84dc7046797aad80f258b6740a98e79199c8bb4d\", GitTreeState:\"clean\", BuildDate:\"2020-03-15T16:56:42Z\", GoVersion:\"go1.13.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", 
GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:09:19Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:40:43.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4801" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":275,"completed":17,"skipped":312,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:40:43.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-1965 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 15 23:40:43.391: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 15 23:40:43.431: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 15 23:40:45.434: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 15 23:40:47.435: INFO: The status of Pod netserver-0 
is Running (Ready = false) Apr 15 23:40:49.435: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 15 23:40:51.435: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 15 23:40:53.436: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 15 23:40:55.435: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 15 23:40:57.461: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 15 23:40:59.435: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 15 23:41:01.437: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 15 23:41:01.441: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 15 23:41:05.543: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.164 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1965 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 15 23:41:05.543: INFO: >>> kubeConfig: /root/.kube/config I0415 23:41:05.571734 7 log.go:172] (0xc002cd4790) (0xc001820dc0) Create stream I0415 23:41:05.571766 7 log.go:172] (0xc002cd4790) (0xc001820dc0) Stream added, broadcasting: 1 I0415 23:41:05.573663 7 log.go:172] (0xc002cd4790) Reply frame received for 1 I0415 23:41:05.573739 7 log.go:172] (0xc002cd4790) (0xc001706140) Create stream I0415 23:41:05.573763 7 log.go:172] (0xc002cd4790) (0xc001706140) Stream added, broadcasting: 3 I0415 23:41:05.574627 7 log.go:172] (0xc002cd4790) Reply frame received for 3 I0415 23:41:05.574668 7 log.go:172] (0xc002cd4790) (0xc001820e60) Create stream I0415 23:41:05.574686 7 log.go:172] (0xc002cd4790) (0xc001820e60) Stream added, broadcasting: 5 I0415 23:41:05.575545 7 log.go:172] (0xc002cd4790) Reply frame received for 5 I0415 23:41:06.643509 7 log.go:172] (0xc002cd4790) Data frame received for 3 I0415 23:41:06.643558 7 log.go:172] (0xc001706140) (3) Data frame handling I0415 
23:41:06.643588 7 log.go:172] (0xc001706140) (3) Data frame sent I0415 23:41:06.643622 7 log.go:172] (0xc002cd4790) Data frame received for 3 I0415 23:41:06.643647 7 log.go:172] (0xc001706140) (3) Data frame handling I0415 23:41:06.643717 7 log.go:172] (0xc002cd4790) Data frame received for 5 I0415 23:41:06.643752 7 log.go:172] (0xc001820e60) (5) Data frame handling I0415 23:41:06.646094 7 log.go:172] (0xc002cd4790) Data frame received for 1 I0415 23:41:06.646131 7 log.go:172] (0xc001820dc0) (1) Data frame handling I0415 23:41:06.646169 7 log.go:172] (0xc001820dc0) (1) Data frame sent I0415 23:41:06.646195 7 log.go:172] (0xc002cd4790) (0xc001820dc0) Stream removed, broadcasting: 1 I0415 23:41:06.646222 7 log.go:172] (0xc002cd4790) Go away received I0415 23:41:06.646774 7 log.go:172] (0xc002cd4790) (0xc001820dc0) Stream removed, broadcasting: 1 I0415 23:41:06.646806 7 log.go:172] (0xc002cd4790) (0xc001706140) Stream removed, broadcasting: 3 I0415 23:41:06.646822 7 log.go:172] (0xc002cd4790) (0xc001820e60) Stream removed, broadcasting: 5 Apr 15 23:41:06.646: INFO: Found all expected endpoints: [netserver-0] Apr 15 23:41:06.650: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.156 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1965 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 15 23:41:06.650: INFO: >>> kubeConfig: /root/.kube/config I0415 23:41:06.685906 7 log.go:172] (0xc002ce4370) (0xc001706820) Create stream I0415 23:41:06.685930 7 log.go:172] (0xc002ce4370) (0xc001706820) Stream added, broadcasting: 1 I0415 23:41:06.687619 7 log.go:172] (0xc002ce4370) Reply frame received for 1 I0415 23:41:06.687675 7 log.go:172] (0xc002ce4370) (0xc0017068c0) Create stream I0415 23:41:06.687701 7 log.go:172] (0xc002ce4370) (0xc0017068c0) Stream added, broadcasting: 3 I0415 23:41:06.688763 7 log.go:172] (0xc002ce4370) Reply frame received for 3 I0415 23:41:06.688819 
7 log.go:172] (0xc002ce4370) (0xc0019c20a0) Create stream I0415 23:41:06.688836 7 log.go:172] (0xc002ce4370) (0xc0019c20a0) Stream added, broadcasting: 5 I0415 23:41:06.689971 7 log.go:172] (0xc002ce4370) Reply frame received for 5 I0415 23:41:07.781270 7 log.go:172] (0xc002ce4370) Data frame received for 3 I0415 23:41:07.781326 7 log.go:172] (0xc0017068c0) (3) Data frame handling I0415 23:41:07.781380 7 log.go:172] (0xc0017068c0) (3) Data frame sent I0415 23:41:07.781409 7 log.go:172] (0xc002ce4370) Data frame received for 3 I0415 23:41:07.781421 7 log.go:172] (0xc0017068c0) (3) Data frame handling I0415 23:41:07.781463 7 log.go:172] (0xc002ce4370) Data frame received for 5 I0415 23:41:07.781487 7 log.go:172] (0xc0019c20a0) (5) Data frame handling I0415 23:41:07.783057 7 log.go:172] (0xc002ce4370) Data frame received for 1 I0415 23:41:07.783086 7 log.go:172] (0xc001706820) (1) Data frame handling I0415 23:41:07.783121 7 log.go:172] (0xc001706820) (1) Data frame sent I0415 23:41:07.783159 7 log.go:172] (0xc002ce4370) (0xc001706820) Stream removed, broadcasting: 1 I0415 23:41:07.783195 7 log.go:172] (0xc002ce4370) Go away received I0415 23:41:07.783280 7 log.go:172] (0xc002ce4370) (0xc001706820) Stream removed, broadcasting: 1 I0415 23:41:07.783298 7 log.go:172] (0xc002ce4370) (0xc0017068c0) Stream removed, broadcasting: 3 I0415 23:41:07.783310 7 log.go:172] (0xc002ce4370) (0xc0019c20a0) Stream removed, broadcasting: 5 Apr 15 23:41:07.783: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:41:07.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1965" for this suite. 
• [SLOW TEST:24.453 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":18,"skipped":323,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:41:07.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-63c324b6-e06e-46f0-8806-f2cda5b0066e STEP: Creating a pod to test consume secrets Apr 15 23:41:07.937: INFO: Waiting up to 5m0s for pod "pod-secrets-b4c36604-9862-4c70-be53-ee6b1759247b" in namespace "secrets-112" to be "Succeeded or Failed" Apr 15 23:41:07.953: INFO: Pod "pod-secrets-b4c36604-9862-4c70-be53-ee6b1759247b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.765034ms Apr 15 23:41:09.958: INFO: Pod "pod-secrets-b4c36604-9862-4c70-be53-ee6b1759247b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021086708s Apr 15 23:41:11.962: INFO: Pod "pod-secrets-b4c36604-9862-4c70-be53-ee6b1759247b": Phase="Running", Reason="", readiness=true. Elapsed: 4.025391043s Apr 15 23:41:13.987: INFO: Pod "pod-secrets-b4c36604-9862-4c70-be53-ee6b1759247b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.050789779s STEP: Saw pod success Apr 15 23:41:13.987: INFO: Pod "pod-secrets-b4c36604-9862-4c70-be53-ee6b1759247b" satisfied condition "Succeeded or Failed" Apr 15 23:41:13.990: INFO: Trying to get logs from node latest-worker pod pod-secrets-b4c36604-9862-4c70-be53-ee6b1759247b container secret-volume-test: STEP: delete the pod Apr 15 23:41:14.024: INFO: Waiting for pod pod-secrets-b4c36604-9862-4c70-be53-ee6b1759247b to disappear Apr 15 23:41:14.155: INFO: Pod pod-secrets-b4c36604-9862-4c70-be53-ee6b1759247b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:41:14.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-112" for this suite. 
• [SLOW TEST:6.371 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":19,"skipped":326,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:41:14.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 15 23:41:14.305: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:41:14.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2031" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":275,"completed":20,"skipped":327,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:41:14.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 15 23:41:15.306: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 15 23:41:17.323: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722590875, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722590875, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722590875, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722590875, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 15 23:41:20.390: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 15 23:41:20.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3365-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:41:21.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3009" for this suite. STEP: Destroying namespace "webhook-3009-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.701 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":21,"skipped":345,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:41:21.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 15 23:41:21.650: INFO: Waiting up to 5m0s for pod "pod-b0bc8759-ad40-4750-8e04-b48b3350930b" in namespace "emptydir-4014" to be "Succeeded or Failed" Apr 15 23:41:21.653: INFO: Pod "pod-b0bc8759-ad40-4750-8e04-b48b3350930b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.829234ms Apr 15 23:41:23.657: INFO: Pod "pod-b0bc8759-ad40-4750-8e04-b48b3350930b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.00725762s Apr 15 23:41:25.661: INFO: Pod "pod-b0bc8759-ad40-4750-8e04-b48b3350930b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011189815s STEP: Saw pod success Apr 15 23:41:25.661: INFO: Pod "pod-b0bc8759-ad40-4750-8e04-b48b3350930b" satisfied condition "Succeeded or Failed" Apr 15 23:41:25.664: INFO: Trying to get logs from node latest-worker2 pod pod-b0bc8759-ad40-4750-8e04-b48b3350930b container test-container: STEP: delete the pod Apr 15 23:41:25.679: INFO: Waiting for pod pod-b0bc8759-ad40-4750-8e04-b48b3350930b to disappear Apr 15 23:41:25.684: INFO: Pod pod-b0bc8759-ad40-4750-8e04-b48b3350930b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:41:25.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4014" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":22,"skipped":346,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:41:25.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create and stop a working application [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating all guestbook components Apr 15 23:41:25.739: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Apr 15 23:41:25.739: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8490' Apr 15 23:41:26.110: INFO: stderr: "" Apr 15 23:41:26.110: INFO: stdout: "service/agnhost-slave created\n" Apr 15 23:41:26.111: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Apr 15 23:41:26.111: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8490' Apr 15 23:41:26.415: INFO: stderr: "" Apr 15 23:41:26.415: INFO: stdout: "service/agnhost-master created\n" Apr 15 23:41:26.415: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Apr 15 23:41:26.415: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8490' Apr 15 23:41:26.695: INFO: stderr: "" Apr 15 23:41:26.695: INFO: stdout: "service/frontend created\n" Apr 15 23:41:26.695: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Apr 15 23:41:26.696: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8490' Apr 15 23:41:26.924: INFO: stderr: "" Apr 15 23:41:26.924: INFO: stdout: "deployment.apps/frontend created\n" Apr 15 23:41:26.925: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Apr 15 23:41:26.925: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8490' Apr 15 23:41:27.264: INFO: stderr: "" Apr 15 23:41:27.264: INFO: stdout: "deployment.apps/agnhost-master created\n" Apr 15 23:41:27.264: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: 
metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Apr 15 23:41:27.264: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8490' Apr 15 23:41:27.520: INFO: stderr: "" Apr 15 23:41:27.520: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Apr 15 23:41:27.521: INFO: Waiting for all frontend pods to be Running. Apr 15 23:41:37.571: INFO: Waiting for frontend to serve content. Apr 15 23:41:37.583: INFO: Trying to add a new entry to the guestbook. Apr 15 23:41:37.592: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Apr 15 23:41:37.599: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8490' Apr 15 23:41:37.788: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 15 23:41:37.788: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Apr 15 23:41:37.788: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8490' Apr 15 23:41:37.919: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 15 23:41:37.919: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 15 23:41:37.919: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8490' Apr 15 23:41:38.053: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 15 23:41:38.053: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 15 23:41:38.053: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8490' Apr 15 23:41:38.159: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 15 23:41:38.159: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 15 23:41:38.160: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8490' Apr 15 23:41:38.269: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 15 23:41:38.269: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 15 23:41:38.269: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8490' Apr 15 23:41:38.391: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 15 23:41:38.391: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:41:38.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8490" for this suite. • [SLOW TEST:12.708 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":275,"completed":23,"skipped":377,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes 
client Apr 15 23:41:38.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Apr 15 23:41:38.550: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8295 /api/v1/namespaces/watch-8295/configmaps/e2e-watch-test-resource-version f4f601b5-b51f-424f-8ccb-00cd71026f4e 8391327 0 2020-04-15 23:41:38 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 15 23:41:38.550: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8295 /api/v1/namespaces/watch-8295/configmaps/e2e-watch-test-resource-version f4f601b5-b51f-424f-8ccb-00cd71026f4e 8391328 0 2020-04-15 23:41:38 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:41:38.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8295" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":24,"skipped":385,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:41:38.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 15 23:41:39.410: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 15 23:41:41.701: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722590899, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722590899, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722590899, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722590899, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 15 23:41:43.707: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722590899, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722590899, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722590899, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722590899, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 15 23:41:46.715: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:41:47.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5728" for this suite. STEP: Destroying namespace "webhook-5728-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.739 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":25,"skipped":424,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:41:47.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-cjjk STEP: Creating a pod to test atomic-volume-subpath Apr 15 23:41:47.416: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-cjjk" in namespace "subpath-630" to be "Succeeded or Failed" Apr 15 23:41:47.421: INFO: Pod "pod-subpath-test-configmap-cjjk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.764627ms Apr 15 23:41:49.424: INFO: Pod "pod-subpath-test-configmap-cjjk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008648432s Apr 15 23:41:51.429: INFO: Pod "pod-subpath-test-configmap-cjjk": Phase="Running", Reason="", readiness=true. Elapsed: 4.013068047s Apr 15 23:41:53.433: INFO: Pod "pod-subpath-test-configmap-cjjk": Phase="Running", Reason="", readiness=true. Elapsed: 6.017282653s Apr 15 23:41:55.437: INFO: Pod "pod-subpath-test-configmap-cjjk": Phase="Running", Reason="", readiness=true. Elapsed: 8.021085006s Apr 15 23:41:57.441: INFO: Pod "pod-subpath-test-configmap-cjjk": Phase="Running", Reason="", readiness=true. Elapsed: 10.025188525s Apr 15 23:41:59.445: INFO: Pod "pod-subpath-test-configmap-cjjk": Phase="Running", Reason="", readiness=true. Elapsed: 12.029222279s Apr 15 23:42:01.450: INFO: Pod "pod-subpath-test-configmap-cjjk": Phase="Running", Reason="", readiness=true. Elapsed: 14.033922326s Apr 15 23:42:03.454: INFO: Pod "pod-subpath-test-configmap-cjjk": Phase="Running", Reason="", readiness=true. Elapsed: 16.03836404s Apr 15 23:42:05.458: INFO: Pod "pod-subpath-test-configmap-cjjk": Phase="Running", Reason="", readiness=true. Elapsed: 18.042009749s Apr 15 23:42:07.462: INFO: Pod "pod-subpath-test-configmap-cjjk": Phase="Running", Reason="", readiness=true. Elapsed: 20.045799007s Apr 15 23:42:09.466: INFO: Pod "pod-subpath-test-configmap-cjjk": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.050411607s Apr 15 23:42:11.471: INFO: Pod "pod-subpath-test-configmap-cjjk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.055151163s STEP: Saw pod success Apr 15 23:42:11.471: INFO: Pod "pod-subpath-test-configmap-cjjk" satisfied condition "Succeeded or Failed" Apr 15 23:42:11.474: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-cjjk container test-container-subpath-configmap-cjjk: STEP: delete the pod Apr 15 23:42:11.488: INFO: Waiting for pod pod-subpath-test-configmap-cjjk to disappear Apr 15 23:42:11.539: INFO: Pod pod-subpath-test-configmap-cjjk no longer exists STEP: Deleting pod pod-subpath-test-configmap-cjjk Apr 15 23:42:11.539: INFO: Deleting pod "pod-subpath-test-configmap-cjjk" in namespace "subpath-630" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:42:11.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-630" for this suite. 
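The subpath test above mounts a ConfigMap through a volumeMount `subPath` and waits for the pod to reach Succeeded. A rough sketch of the kind of manifest involved follows; the field names match the Pod API, but the concrete names, keys, and image are illustrative assumptions, not the exact manifest the e2e framework generates:

```python
# Sketch of the kind of pod the atomic-writer subpath test exercises: a
# ConfigMap volume mounted via subPath so the container sees a single key
# as one file. Names, keys, and image are illustrative assumptions.
def build_subpath_pod(configmap_name, key, mount_path):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-subpath-test-configmap"},
        "spec": {
            "restartPolicy": "Never",
            "volumes": [
                {"name": "config", "configMap": {"name": configmap_name}}
            ],
            "containers": [
                {
                    "name": "test-container-subpath-configmap",
                    "image": "busybox",
                    "command": ["cat", mount_path],
                    "volumeMounts": [
                        # subPath mounts only the named key, not the whole volume
                        {"name": "config", "mountPath": mount_path, "subPath": key}
                    ],
                }
            ],
        },
    }

pod = build_subpath_pod("my-config", "data-1", "/etc/config/data-1")
```

The ~24 seconds of Running polls in the log above are the container repeatedly reading the file through the subPath mount before it exits, at which point the phase flips to Succeeded and the "Succeeded or Failed" condition is satisfied.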
• [SLOW TEST:24.253 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":26,"skipped":471,"failed":0} SSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:42:11.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 15 23:42:11.588: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 15 23:42:11.609: INFO: Waiting for terminating namespaces to be deleted... 
Apr 15 23:42:11.612: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 15 23:42:11.617: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 15 23:42:11.617: INFO: Container kindnet-cni ready: true, restart count 0 Apr 15 23:42:11.617: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 15 23:42:11.617: INFO: Container kube-proxy ready: true, restart count 0 Apr 15 23:42:11.617: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 15 23:42:11.624: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 15 23:42:11.624: INFO: Container kindnet-cni ready: true, restart count 0 Apr 15 23:42:11.624: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 15 23:42:11.624: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
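The hostPort-conflict check this test performs (pod4 on 0.0.0.0:54322 versus pod5 on 127.0.0.1:54322) hinges on the scheduler treating the wildcard address 0.0.0.0 as overlapping every host IP. A simplified model of that rule, reduced to just the IP/port/protocol comparison (the real node-ports predicate lives in kube-scheduler and tracks per-node state):

```python
# Simplified model of the scheduler's host-port conflict rule: two hostPort
# requests on the same node clash when port and protocol match and either
# side binds the wildcard 0.0.0.0, or both bind the same concrete host IP.
def host_ports_conflict(a, b):
    ip_a, port_a, proto_a = a
    ip_b, port_b, proto_b = b
    if port_a != port_b or proto_a != proto_b:
        return False
    wildcard = "0.0.0.0"
    return ip_a == wildcard or ip_b == wildcard or ip_a == ip_b

# pod4 binds 0.0.0.0:54322/TCP; pod5 wants 127.0.0.1:54322/TCP on the same
# node. The wildcard bind makes them conflict, so pod5 stays unscheduled.
assert host_ports_conflict(("0.0.0.0", 54322, "TCP"), ("127.0.0.1", 54322, "TCP"))
```

This is why the test below expects pod5 not to be scheduled, and why it waits out the full scheduling timeout (the SLOW TEST clocking in at ~308 seconds) before tearing the namespace down.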
STEP: verifying the node has the label kubernetes.io/e2e-bf97a455-fa4c-45bf-b06c-5fc08a103d5a 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-bf97a455-fa4c-45bf-b06c-5fc08a103d5a off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-bf97a455-fa4c-45bf-b06c-5fc08a103d5a [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:47:19.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3431" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:308.283 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":27,"skipped":478,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container 
Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:47:19.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 15 23:47:27.941: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 15 23:47:27.987: INFO: Pod pod-with-prestop-http-hook still exists Apr 15 23:47:29.987: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 15 23:47:29.996: INFO: Pod pod-with-prestop-http-hook still exists Apr 15 23:47:31.987: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 15 23:47:31.992: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:47:32.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1289" for this suite. 
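The preStop test above deletes a pod carrying an HTTP preStop hook, then polls until the pod disappears and checks that the hook's request reached the handler pod created in BeforeEach. A sketch of the container stanza involved; the handler address, port, and path are illustrative assumptions, not the e2e test's actual values:

```python
# Sketch of a container spec with an HTTP preStop hook: when the pod is
# deleted, the kubelet performs this GET against the handler before sending
# SIGTERM to the container. Handler address/path are illustrative.
def container_with_prestop_http(handler_ip, handler_port):
    return {
        "name": "pod-with-prestop-http-hook",
        "image": "busybox",
        "lifecycle": {
            "preStop": {
                "httpGet": {
                    "host": handler_ip,
                    "port": handler_port,
                    "path": "/echo?msg=prestop",
                }
            }
        },
    }

spec = container_with_prestop_http("10.244.1.1", 8080)
```

The few seconds in the log between "delete the pod with lifecycle hook" and "no longer exists" fall inside the pod's termination grace period, during which the kubelet runs the hook; "check prestop hook" then verifies the handler actually received the request.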
• [SLOW TEST:12.190 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":28,"skipped":481,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:47:32.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 15 23:47:32.056: INFO: Creating deployment "webserver-deployment" Apr 15 23:47:32.074: INFO: Waiting for observed generation 1 Apr 15 23:47:34.110: INFO: Waiting for all required pods to come up Apr 15 23:47:34.115: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Apr 15 23:47:46.127: INFO: Waiting for deployment "webserver-deployment" to complete Apr 15 
23:47:46.134: INFO: Updating deployment "webserver-deployment" with a non-existent image Apr 15 23:47:46.139: INFO: Updating deployment webserver-deployment Apr 15 23:47:46.139: INFO: Waiting for observed generation 2 Apr 15 23:47:48.149: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Apr 15 23:47:48.151: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Apr 15 23:47:48.153: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 15 23:47:48.160: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Apr 15 23:47:48.160: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Apr 15 23:47:48.162: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 15 23:47:48.165: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Apr 15 23:47:48.165: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Apr 15 23:47:48.170: INFO: Updating deployment webserver-deployment Apr 15 23:47:48.170: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Apr 15 23:47:48.196: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Apr 15 23:47:48.213: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 15 23:47:48.481: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-1009 /apis/apps/v1/namespaces/deployment-1009/deployments/webserver-deployment 3df211d3-3346-4a11-af47-7847fc7f7168 8392842 3 2020-04-15 23:47:32 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] 
[]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0038708f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-04-15 23:47:46 +0000 UTC,LastTransitionTime:2020-04-15 23:47:32 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-15 23:47:48 +0000 UTC,LastTransitionTime:2020-04-15 23:47:48 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Apr 15 23:47:48.566: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-1009 
/apis/apps/v1/namespaces/deployment-1009/replicasets/webserver-deployment-c7997dcc8 6c25bda2-abdd-428f-ba87-ff32cbbb6dde 8392904 3 2020-04-15 23:47:46 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 3df211d3-3346-4a11-af47-7847fc7f7168 0xc002ea9c77 0xc002ea9c78}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002ea9ce8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 15 23:47:48.566: INFO: All old ReplicaSets of Deployment "webserver-deployment": Apr 15 23:47:48.566: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-1009 /apis/apps/v1/namespaces/deployment-1009/replicasets/webserver-deployment-595b5b9587 46690a07-72c7-4d57-9566-5063fa0c1a1e 8392886 3 2020-04-15 23:47:32 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 
deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 3df211d3-3346-4a11-af47-7847fc7f7168 0xc002ea9bb7 0xc002ea9bb8}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002ea9c18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Apr 15 23:47:48.672: INFO: Pod "webserver-deployment-595b5b9587-2574r" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2574r webserver-deployment-595b5b9587- deployment-1009 /api/v1/namespaces/deployment-1009/pods/webserver-deployment-595b5b9587-2574r 4e94b16d-ea24-487b-a142-72ab23f46acc 8392885 0 2020-04-15 23:47:48 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 46690a07-72c7-4d57-9566-5063fa0c1a1e 0xc0033c5460 0xc0033c5461}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r8x8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r8x8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r8x8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 15 23:47:48.673: INFO: Pod "webserver-deployment-595b5b9587-48lw5" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-48lw5 webserver-deployment-595b5b9587- deployment-1009 /api/v1/namespaces/deployment-1009/pods/webserver-deployment-595b5b9587-48lw5 bce3aac9-a6ff-4614-85ab-edeb8bad1941 8392757 0 2020-04-15 23:47:32 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 46690a07-72c7-4d57-9566-5063fa0c1a1e 0xc0033c5587 0xc0033c5588}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r8x8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r8x8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r8x8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.166,StartTime:2020-04-15 23:47:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-15 23:47:44 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7d00249c92c99e7f7e9158603b9371cf7a2e57ed98fb51eb91ee96060ac8eeb7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.166,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 15 23:47:48.673: INFO: Pod "webserver-deployment-595b5b9587-4jnkh" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4jnkh webserver-deployment-595b5b9587- deployment-1009 /api/v1/namespaces/deployment-1009/pods/webserver-deployment-595b5b9587-4jnkh cc5b0a2c-bb6c-44c8-b444-3ae29ea5ccbe 8392734 0 2020-04-15 23:47:32 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 46690a07-72c7-4d57-9566-5063fa0c1a1e 0xc0033c5727 0xc0033c5728}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r8x8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r8x8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r8x8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.164,StartTime:2020-04-15 23:47:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-15 23:47:43 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://61d781ed08e8a02f54402c032dff47c5fdc65d8844a4929271eeaa7f96add9fa,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.164,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 15 23:47:48.673: INFO: Pod "webserver-deployment-595b5b9587-d8kjb" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-d8kjb webserver-deployment-595b5b9587- deployment-1009 /api/v1/namespaces/deployment-1009/pods/webserver-deployment-595b5b9587-d8kjb 56f01373-74ca-4f9d-bfdc-08613b467e96 8392695 0 2020-04-15 23:47:32 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 46690a07-72c7-4d57-9566-5063fa0c1a1e 0xc0033c5aa7 0xc0033c5aa8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r8x8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r8x8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r8x8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.162,StartTime:2020-04-15 23:47:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-15 23:47:40 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://437ab6fbf95acea5c618b9cf7093e1ed04e34c278415fa2e33bdc0098a88b4c3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.162,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 15 23:47:48.674: INFO: Pod "webserver-deployment-595b5b9587-jpz4l" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jpz4l webserver-deployment-595b5b9587- deployment-1009 /api/v1/namespaces/deployment-1009/pods/webserver-deployment-595b5b9587-jpz4l e96b30e7-0c88-4fbf-995a-8f6d34f0b540 8392862 0 2020-04-15 23:47:48 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 46690a07-72c7-4d57-9566-5063fa0c1a1e 0xc0033c5d17 0xc0033c5d18}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r8x8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r8x8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r8x8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 15 23:47:48.674: INFO: Pod "webserver-deployment-595b5b9587-jwb9c" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jwb9c webserver-deployment-595b5b9587- deployment-1009 /api/v1/namespaces/deployment-1009/pods/webserver-deployment-595b5b9587-jwb9c 46e94ed7-6c22-4499-bc9c-4a65d1782f72 8392910 0 2020-04-15 23:47:48 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 46690a07-72c7-4d57-9566-5063fa0c1a1e 0xc002d7c0d7 0xc002d7c0d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r8x8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r8x8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r8x8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-15 23:47:48 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 15 23:47:48.674: INFO: Pod "webserver-deployment-595b5b9587-lnbzc" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-lnbzc webserver-deployment-595b5b9587- deployment-1009 /api/v1/namespaces/deployment-1009/pods/webserver-deployment-595b5b9587-lnbzc 0cdd7b84-cf53-4c68-93f3-baa0ea4bc139 8392746 0 2020-04-15 23:47:32 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 46690a07-72c7-4d57-9566-5063fa0c1a1e 0xc002d7c247 0xc002d7c248}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r8x8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r8x8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r8x8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.180,StartTime:2020-04-15 23:47:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-15 23:47:43 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b8fe6f10755c676cdf9a915237b4c23b2ec4f4067f6352ffb263765427f4d06b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.180,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 15 23:47:48.675: INFO: Pod "webserver-deployment-595b5b9587-n84qf" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-n84qf webserver-deployment-595b5b9587- deployment-1009 /api/v1/namespaces/deployment-1009/pods/webserver-deployment-595b5b9587-n84qf ba7fbdc8-ccc6-4d1f-b1ee-9935004965ea 8392871 0 2020-04-15 23:47:48 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 46690a07-72c7-4d57-9566-5063fa0c1a1e 0xc002d7c3c7 0xc002d7c3c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r8x8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r8x8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r8x8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 15 23:47:48.675: INFO: Pod "webserver-deployment-595b5b9587-ns5rs" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ns5rs webserver-deployment-595b5b9587- deployment-1009 /api/v1/namespaces/deployment-1009/pods/webserver-deployment-595b5b9587-ns5rs 38d046ab-b470-4d81-ab94-3c166b940b14 8392901 0 2020-04-15 23:47:48 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 46690a07-72c7-4d57-9566-5063fa0c1a1e 0xc002d7c4e7 0xc002d7c4e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r8x8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r8x8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r8x8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-15 23:47:48 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 15 23:47:48.675: INFO: Pod "webserver-deployment-595b5b9587-qfwdc" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qfwdc webserver-deployment-595b5b9587- deployment-1009 /api/v1/namespaces/deployment-1009/pods/webserver-deployment-595b5b9587-qfwdc 75b0193c-62e7-4da7-9184-cee2a23b537a 8392887 0 2020-04-15 23:47:48 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 46690a07-72c7-4d57-9566-5063fa0c1a1e 0xc002d7c657 0xc002d7c658}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r8x8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r8x8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r8x8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 15 23:47:48.675: INFO: Pod "webserver-deployment-595b5b9587-qxfnf" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qxfnf webserver-deployment-595b5b9587- deployment-1009 /api/v1/namespaces/deployment-1009/pods/webserver-deployment-595b5b9587-qxfnf 9d1d51c0-17bc-45f6-869c-3532671d7a82 8392883 0 2020-04-15 23:47:48 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 46690a07-72c7-4d57-9566-5063fa0c1a1e 0xc002d7c777 0xc002d7c778}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r8x8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r8x8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r8x8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 15 23:47:48.675: INFO: Pod "webserver-deployment-595b5b9587-s8kd4" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-s8kd4 webserver-deployment-595b5b9587- deployment-1009 /api/v1/namespaces/deployment-1009/pods/webserver-deployment-595b5b9587-s8kd4 b0a709ca-c0d4-44a4-ab82-31a69d765aa0 8392860 0 2020-04-15 23:47:48 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 46690a07-72c7-4d57-9566-5063fa0c1a1e 0xc002d7c897 0xc002d7c898}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r8x8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r8x8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r8x8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 15 23:47:48.675: INFO: Pod "webserver-deployment-595b5b9587-sgtrt" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-sgtrt webserver-deployment-595b5b9587- deployment-1009 /api/v1/namespaces/deployment-1009/pods/webserver-deployment-595b5b9587-sgtrt 0b24853a-812c-4c37-b0e0-ca819da8ace3 8392760 0 2020-04-15 23:47:32 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 46690a07-72c7-4d57-9566-5063fa0c1a1e 0xc002d7c9b7 0xc002d7c9b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r8x8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r8x8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r8x8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.165,StartTime:2020-04-15 23:47:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-15 23:47:43 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6fb9d471a7d0076d3acd615a16b0e9a8a35b08f8f7b91a787e2debfeae76ab1f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.165,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 15 23:47:48.675: INFO: Pod "webserver-deployment-595b5b9587-tgkl6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-tgkl6 webserver-deployment-595b5b9587- deployment-1009 /api/v1/namespaces/deployment-1009/pods/webserver-deployment-595b5b9587-tgkl6 6c7294fd-9613-43dd-897d-351db9dddd29 8392880 0 2020-04-15 23:47:48 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 46690a07-72c7-4d57-9566-5063fa0c1a1e 0xc002d7cb47 0xc002d7cb48}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r8x8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r8x8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r8x8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 15 23:47:48.675: INFO: Pod "webserver-deployment-595b5b9587-vb9ql" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vb9ql webserver-deployment-595b5b9587- deployment-1009 /api/v1/namespaces/deployment-1009/pods/webserver-deployment-595b5b9587-vb9ql 5932159b-8d88-41da-be22-8d0f06dd9bfe 8392884 0 2020-04-15 23:47:48 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 46690a07-72c7-4d57-9566-5063fa0c1a1e 0xc002d7cc67 0xc002d7cc68}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r8x8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r8x8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r8x8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 15 23:47:48.676: INFO: Pod "webserver-deployment-595b5b9587-vcpjl" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vcpjl webserver-deployment-595b5b9587- deployment-1009 /api/v1/namespaces/deployment-1009/pods/webserver-deployment-595b5b9587-vcpjl 9c3a57af-a5ee-481d-8221-7a4145cad4f1 8392859 0 2020-04-15 23:47:48 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 46690a07-72c7-4d57-9566-5063fa0c1a1e 0xc002d7cd87 0xc002d7cd88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r8x8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r8x8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r8x8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 15 23:47:48.676: INFO: Pod "webserver-deployment-595b5b9587-wk9qc" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wk9qc webserver-deployment-595b5b9587- deployment-1009 /api/v1/namespaces/deployment-1009/pods/webserver-deployment-595b5b9587-wk9qc 2b7bef2c-7f65-47a2-8afd-c806da0b2610 8392719 0 2020-04-15 23:47:32 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 46690a07-72c7-4d57-9566-5063fa0c1a1e 0xc002d7cea7 0xc002d7cea8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r8x8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r8x8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r8x8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.163,StartTime:2020-04-15 23:47:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-15 23:47:42 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://cb230e37cc409c39eecbd552f7c3acf843c07ce46ce1d58eeba3d45b640aaeaa,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.163,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 15 23:47:48.676: INFO: Pod "webserver-deployment-595b5b9587-wnlqh" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wnlqh webserver-deployment-595b5b9587- deployment-1009 /api/v1/namespaces/deployment-1009/pods/webserver-deployment-595b5b9587-wnlqh df61f164-4bf7-4c43-bc99-b51bade5a3fa 8392725 0 2020-04-15 23:47:32 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 46690a07-72c7-4d57-9566-5063fa0c1a1e 0xc002d7d027 0xc002d7d028}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r8x8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r8x8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r8x8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.177,StartTime:2020-04-15 23:47:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-15 23:47:41 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://9490cbac7c18437c5b87bd5c5cf05e7f059815aec4ac48fda5c99f1c6346b097,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.177,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 15 23:47:48.676: INFO: Pod "webserver-deployment-595b5b9587-wq25d" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wq25d webserver-deployment-595b5b9587- deployment-1009 /api/v1/namespaces/deployment-1009/pods/webserver-deployment-595b5b9587-wq25d 5740a0df-868b-4aa3-b957-3c5054820c7f 8392850 0 2020-04-15 23:47:48 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 46690a07-72c7-4d57-9566-5063fa0c1a1e 0xc002d7d1a7 0xc002d7d1a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r8x8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r8x8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r8x8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 15 23:47:48.677: INFO: Pod "webserver-deployment-595b5b9587-wt5d4" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wt5d4 webserver-deployment-595b5b9587- deployment-1009 /api/v1/namespaces/deployment-1009/pods/webserver-deployment-595b5b9587-wt5d4 44c5bfb9-1923-4cdb-a060-37f0b8a10311 8392680 0 2020-04-15 23:47:32 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 46690a07-72c7-4d57-9566-5063fa0c1a1e 0xc002d7d2c7 0xc002d7d2c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r8x8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r8x8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r8x8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.176,StartTime:2020-04-15 23:47:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-15 23:47:35 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f5e690260a937c5b0c5b2b5329e30506e420aa05fb4d0add8e188889289df4d7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.176,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 15 23:47:48.677: INFO: Pod "webserver-deployment-c7997dcc8-4wgxj" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4wgxj webserver-deployment-c7997dcc8- deployment-1009 /api/v1/namespaces/deployment-1009/pods/webserver-deployment-c7997dcc8-4wgxj 79591cd1-1caf-4ee2-929b-c9c7a55a8c1c 8392888 0 2020-04-15 23:47:48 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6c25bda2-abdd-428f-ba87-ff32cbbb6dde 0xc002d7d447 0xc002d7d448}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r8x8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r8x8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r8x8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 15 23:47:48.677: INFO: Pod "webserver-deployment-c7997dcc8-6gmhs" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6gmhs webserver-deployment-c7997dcc8- deployment-1009 /api/v1/namespaces/deployment-1009/pods/webserver-deployment-c7997dcc8-6gmhs 220a9510-90c2-4f52-a9fe-ce8d7a54a8ae 8392804 0 2020-04-15 23:47:46 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6c25bda2-abdd-428f-ba87-ff32cbbb6dde 0xc002d7d577 0xc002d7d578}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r8x8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r8x8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r8x8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-15 23:47:46 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 15 23:47:48.677: INFO: Pod "webserver-deployment-c7997dcc8-6mfn7" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6mfn7 webserver-deployment-c7997dcc8- deployment-1009 /api/v1/namespaces/deployment-1009/pods/webserver-deployment-c7997dcc8-6mfn7 d04fcd9e-6b17-4d93-a840-7b5daa6ad757 8392898 0 2020-04-15 23:47:48 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6c25bda2-abdd-428f-ba87-ff32cbbb6dde 0xc002d7d6f7 0xc002d7d6f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r8x8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r8x8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r8x8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 15 23:47:48.678: INFO: Pod "webserver-deployment-c7997dcc8-f92mk" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-f92mk webserver-deployment-c7997dcc8- deployment-1009 /api/v1/namespaces/deployment-1009/pods/webserver-deployment-c7997dcc8-f92mk fa92315c-5f85-4574-8bab-b1ae864a31cc 8392882 0 2020-04-15 23:47:48 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6c25bda2-abdd-428f-ba87-ff32cbbb6dde 0xc002d7d827 0xc002d7d828}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r8x8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r8x8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r8x8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 15 23:47:48.678: INFO: Pod "webserver-deployment-c7997dcc8-jh8rh" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jh8rh webserver-deployment-c7997dcc8- deployment-1009 /api/v1/namespaces/deployment-1009/pods/webserver-deployment-c7997dcc8-jh8rh 5152308f-ffd6-4d55-ac81-39ed4930a522 8392855 0 2020-04-15 23:47:48 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6c25bda2-abdd-428f-ba87-ff32cbbb6dde 0xc002d7d957 0xc002d7d958}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r8x8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r8x8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r8x8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 15 23:47:48.678: INFO: Pod "webserver-deployment-c7997dcc8-jzfj7" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jzfj7 webserver-deployment-c7997dcc8- deployment-1009 /api/v1/namespaces/deployment-1009/pods/webserver-deployment-c7997dcc8-jzfj7 98f9e263-f4a2-4fa4-a9e0-88f3ab4d789b 8392814 0 2020-04-15 23:47:46 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6c25bda2-abdd-428f-ba87-ff32cbbb6dde 0xc002d7daa7 0xc002d7daa8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r8x8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r8x8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r8x8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-15 23:47:46 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 15 23:47:48.678: INFO: Pod "webserver-deployment-c7997dcc8-mlvm9" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mlvm9 webserver-deployment-c7997dcc8- deployment-1009 /api/v1/namespaces/deployment-1009/pods/webserver-deployment-c7997dcc8-mlvm9 6db75e6f-de04-4d8c-8fc2-374c572d0ce5 8392797 0 2020-04-15 23:47:46 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6c25bda2-abdd-428f-ba87-ff32cbbb6dde 0xc002d7dc27 0xc002d7dc28}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r8x8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r8x8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r8x8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-15 23:47:46 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 15 23:47:48.678: INFO: Pod "webserver-deployment-c7997dcc8-mx6zn" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mx6zn webserver-deployment-c7997dcc8- deployment-1009 /api/v1/namespaces/deployment-1009/pods/webserver-deployment-c7997dcc8-mx6zn 3791aef5-25ca-4e8c-b002-4ae12b688f71 8392889 0 2020-04-15 23:47:48 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6c25bda2-abdd-428f-ba87-ff32cbbb6dde 0xc002d7dda7 0xc002d7dda8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r8x8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r8x8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r8x8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 15 23:47:48.679: INFO: Pod "webserver-deployment-c7997dcc8-ngl7b" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ngl7b webserver-deployment-c7997dcc8- deployment-1009 /api/v1/namespaces/deployment-1009/pods/webserver-deployment-c7997dcc8-ngl7b cb0b0b65-00c5-4a3e-8da5-b2ebc136f50f 8392820 0 2020-04-15 23:47:46 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6c25bda2-abdd-428f-ba87-ff32cbbb6dde 0xc002d7ded7 0xc002d7ded8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r8x8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r8x8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r8x8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-15 23:47:46 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 15 23:47:48.679: INFO: Pod "webserver-deployment-c7997dcc8-npc89" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-npc89 webserver-deployment-c7997dcc8- deployment-1009 /api/v1/namespaces/deployment-1009/pods/webserver-deployment-c7997dcc8-npc89 5ae3b87e-02bf-44a4-a7ea-4c9151a6329a 8392825 0 2020-04-15 23:47:46 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6c25bda2-abdd-428f-ba87-ff32cbbb6dde 0xc002f94067 0xc002f94068}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r8x8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r8x8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r8x8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-15 23:47:46 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 15 23:47:48.679: INFO: Pod "webserver-deployment-c7997dcc8-p7gsv" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-p7gsv webserver-deployment-c7997dcc8- deployment-1009 /api/v1/namespaces/deployment-1009/pods/webserver-deployment-c7997dcc8-p7gsv 822127e7-cb11-49e2-b5ce-dbb67915b44e 8392857 0 2020-04-15 23:47:48 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6c25bda2-abdd-428f-ba87-ff32cbbb6dde 0xc002f941e7 0xc002f941e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r8x8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r8x8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r8x8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 15 23:47:48.679: INFO: Pod "webserver-deployment-c7997dcc8-s9xdg" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-s9xdg webserver-deployment-c7997dcc8- deployment-1009 /api/v1/namespaces/deployment-1009/pods/webserver-deployment-c7997dcc8-s9xdg 749310c2-0b10-436f-9541-364b7d559bb6 8392909 0 2020-04-15 23:47:48 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6c25bda2-abdd-428f-ba87-ff32cbbb6dde 0xc002f94327 0xc002f94328}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r8x8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r8x8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r8x8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-15 23:47:48 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 15 23:47:48.680: INFO: Pod "webserver-deployment-c7997dcc8-x22w2" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-x22w2 webserver-deployment-c7997dcc8- deployment-1009 /api/v1/namespaces/deployment-1009/pods/webserver-deployment-c7997dcc8-x22w2 6f048664-c1f6-49bc-b919-6844002711e0 8392881 0 2020-04-15 23:47:48 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6c25bda2-abdd-428f-ba87-ff32cbbb6dde 0xc002f944b7 0xc002f944b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r8x8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r8x8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r8x8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-15 23:47:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:47:48.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1009" for this suite. 
• [SLOW TEST:16.773 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":29,"skipped":491,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:47:48.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Apr 15 23:47:48.980: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1329 /api/v1/namespaces/watch-1329/configmaps/e2e-watch-test-configmap-a 40fa2147-927c-46c8-8454-2dd0948316d7 8392934 0 2020-04-15 23:47:48 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 15 23:47:48.980: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1329 /api/v1/namespaces/watch-1329/configmaps/e2e-watch-test-configmap-a 40fa2147-927c-46c8-8454-2dd0948316d7 8392934 0 2020-04-15 23:47:48 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Apr 15 23:47:59.295: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1329 /api/v1/namespaces/watch-1329/configmaps/e2e-watch-test-configmap-a 40fa2147-927c-46c8-8454-2dd0948316d7 8393019 0 2020-04-15 23:47:48 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 15 23:47:59.295: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1329 /api/v1/namespaces/watch-1329/configmaps/e2e-watch-test-configmap-a 40fa2147-927c-46c8-8454-2dd0948316d7 8393019 0 2020-04-15 23:47:48 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Apr 15 23:48:09.358: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1329 /api/v1/namespaces/watch-1329/configmaps/e2e-watch-test-configmap-a 40fa2147-927c-46c8-8454-2dd0948316d7 8393177 0 2020-04-15 23:47:48 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 15 23:48:09.359: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1329 /api/v1/namespaces/watch-1329/configmaps/e2e-watch-test-configmap-a 40fa2147-927c-46c8-8454-2dd0948316d7 8393177 0 2020-04-15 23:47:48 +0000 UTC map[watch-this-configmap:multiple-watchers-A] 
map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Apr 15 23:48:19.366: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1329 /api/v1/namespaces/watch-1329/configmaps/e2e-watch-test-configmap-a 40fa2147-927c-46c8-8454-2dd0948316d7 8393313 0 2020-04-15 23:47:48 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 15 23:48:19.366: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1329 /api/v1/namespaces/watch-1329/configmaps/e2e-watch-test-configmap-a 40fa2147-927c-46c8-8454-2dd0948316d7 8393313 0 2020-04-15 23:47:48 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Apr 15 23:48:29.372: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1329 /api/v1/namespaces/watch-1329/configmaps/e2e-watch-test-configmap-b 25feca4e-ae05-4ba9-95b5-3cab0f257536 8393344 0 2020-04-15 23:48:29 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 15 23:48:29.372: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1329 /api/v1/namespaces/watch-1329/configmaps/e2e-watch-test-configmap-b 25feca4e-ae05-4ba9-95b5-3cab0f257536 8393344 0 2020-04-15 23:48:29 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Apr 15 23:48:39.380: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1329 /api/v1/namespaces/watch-1329/configmaps/e2e-watch-test-configmap-b 25feca4e-ae05-4ba9-95b5-3cab0f257536 8393374 0 2020-04-15 23:48:29 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 15 23:48:39.380: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1329 /api/v1/namespaces/watch-1329/configmaps/e2e-watch-test-configmap-b 25feca4e-ae05-4ba9-95b5-3cab0f257536 8393374 0 2020-04-15 23:48:29 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:48:49.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1329" for this suite. • [SLOW TEST:60.594 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":30,"skipped":496,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:48:49.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 15 23:48:50.041: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 15 23:48:52.052: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722591330, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722591330, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722591330, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722591330, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 15 23:48:55.179: INFO: Waiting 
for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 15 23:48:55.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:48:56.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-1337" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.199 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":31,"skipped":504,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 
23:48:56.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 15 23:48:56.701: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 15 23:48:59.622: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2783 create -f -' Apr 15 23:49:02.314: INFO: stderr: "" Apr 15 23:49:02.314: INFO: stdout: "e2e-test-crd-publish-openapi-9357-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 15 23:49:02.314: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2783 delete e2e-test-crd-publish-openapi-9357-crds test-cr' Apr 15 23:49:02.433: INFO: stderr: "" Apr 15 23:49:02.433: INFO: stdout: "e2e-test-crd-publish-openapi-9357-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Apr 15 23:49:02.434: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2783 apply -f -' Apr 15 23:49:02.707: INFO: stderr: "" Apr 15 23:49:02.707: INFO: stdout: "e2e-test-crd-publish-openapi-9357-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 15 23:49:02.707: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2783 delete e2e-test-crd-publish-openapi-9357-crds test-cr' Apr 15 23:49:02.799: INFO: stderr: "" Apr 15 23:49:02.799: INFO: stdout: 
"e2e-test-crd-publish-openapi-9357-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 15 23:49:02.799: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9357-crds' Apr 15 23:49:03.017: INFO: stderr: "" Apr 15 23:49:03.017: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9357-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:49:05.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2783" for this suite. 
• [SLOW TEST:9.360 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":32,"skipped":504,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:49:05.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 15 23:49:07.161: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 15 23:49:09.305: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722591347, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722591347, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722591347, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722591347, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 15 23:49:12.390: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:49:12.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-505" for this suite. STEP: Destroying namespace "webhook-505-markers" for this suite. 
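Editor's note: the "fail closed" behavior tested above comes from registering a webhook the API server cannot reach with `failurePolicy: Fail`. A hedged sketch of such a registration (names and the unreachable path are illustrative; the actual test builds its configuration via the AdmissionRegistration API):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: fail-closed-webhook          # illustrative name
webhooks:
- name: fail-closed.example.com
  failurePolicy: Fail                # reject the request when the webhook cannot be called
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: webhook-505         # namespace from the run above
      name: e2e-test-webhook
      path: /this-path-serves-nothing   # illustrative: no handler exists here
  sideEffects: None
  admissionReviewVersions: ["v1"]
```

Because the webhook endpoint never answers, the `failurePolicy: Fail` setting makes the ConfigMap create in the log unconditionally rejected.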
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.573 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":33,"skipped":509,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:49:12.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 15 23:49:12.619: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eb462b2b-65e1-41c4-a0ae-67515f64447c" in namespace "downward-api-4065" to be "Succeeded or Failed" Apr 15 23:49:12.688: INFO: Pod 
"downwardapi-volume-eb462b2b-65e1-41c4-a0ae-67515f64447c": Phase="Pending", Reason="", readiness=false. Elapsed: 68.792514ms Apr 15 23:49:14.706: INFO: Pod "downwardapi-volume-eb462b2b-65e1-41c4-a0ae-67515f64447c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087076283s Apr 15 23:49:16.711: INFO: Pod "downwardapi-volume-eb462b2b-65e1-41c4-a0ae-67515f64447c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.091539341s STEP: Saw pod success Apr 15 23:49:16.711: INFO: Pod "downwardapi-volume-eb462b2b-65e1-41c4-a0ae-67515f64447c" satisfied condition "Succeeded or Failed" Apr 15 23:49:16.714: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-eb462b2b-65e1-41c4-a0ae-67515f64447c container client-container: STEP: delete the pod Apr 15 23:49:16.749: INFO: Waiting for pod downwardapi-volume-eb462b2b-65e1-41c4-a0ae-67515f64447c to disappear Apr 15 23:49:16.753: INFO: Pod downwardapi-volume-eb462b2b-65e1-41c4-a0ae-67515f64447c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:49:16.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4065" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":34,"skipped":512,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:49:16.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-d707e1dd-5e12-4772-8169-da0878052c3f STEP: Creating a pod to test consume configMaps Apr 15 23:49:16.851: INFO: Waiting up to 5m0s for pod "pod-configmaps-646f4d03-ef8c-493b-b3d6-8485fb8bfdde" in namespace "configmap-2777" to be "Succeeded or Failed" Apr 15 23:49:16.874: INFO: Pod "pod-configmaps-646f4d03-ef8c-493b-b3d6-8485fb8bfdde": Phase="Pending", Reason="", readiness=false. Elapsed: 22.694351ms Apr 15 23:49:18.879: INFO: Pod "pod-configmaps-646f4d03-ef8c-493b-b3d6-8485fb8bfdde": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027364472s Apr 15 23:49:20.882: INFO: Pod "pod-configmaps-646f4d03-ef8c-493b-b3d6-8485fb8bfdde": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.030907129s STEP: Saw pod success Apr 15 23:49:20.882: INFO: Pod "pod-configmaps-646f4d03-ef8c-493b-b3d6-8485fb8bfdde" satisfied condition "Succeeded or Failed" Apr 15 23:49:20.885: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-646f4d03-ef8c-493b-b3d6-8485fb8bfdde container configmap-volume-test: STEP: delete the pod Apr 15 23:49:20.966: INFO: Waiting for pod pod-configmaps-646f4d03-ef8c-493b-b3d6-8485fb8bfdde to disappear Apr 15 23:49:20.969: INFO: Pod pod-configmaps-646f4d03-ef8c-493b-b3d6-8485fb8bfdde no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:49:20.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2777" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":35,"skipped":522,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:49:20.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 15 23:49:21.016: INFO: >>> kubeConfig: 
/root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:49:22.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-938" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":275,"completed":36,"skipped":552,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:49:22.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0415 23:49:23.411084 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 15 23:49:23.411: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 15 23:49:23.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4497" for this suite.
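Editor's note: the garbage collector test above deletes a Deployment while orphaning its ReplicaSet. The delete call carries a propagation policy like the following fragment (the kubectl equivalent is `kubectl delete deployment <name> --cascade=orphan`):

```yaml
# Body of the DELETE request against the Deployment
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan   # dependents (the ReplicaSet) are orphaned, not deleted:
                            # their ownerReferences are cleared and they survive
```

The "wait ... to see if the garbage collector mistakenly deletes the rs" step then confirms the ReplicaSet was left in place.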
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":37,"skipped":559,"failed":0} ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:49:23.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 15 23:49:23.543: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 15 23:49:26.497: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7849 create -f -' Apr 15 23:49:30.330: INFO: stderr: "" Apr 15 23:49:30.330: INFO: stdout: "e2e-test-crd-publish-openapi-408-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 15 23:49:30.330: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7849 delete e2e-test-crd-publish-openapi-408-crds test-cr' Apr 15 23:49:30.448: INFO: stderr: "" Apr 15 23:49:30.448: INFO: stdout: "e2e-test-crd-publish-openapi-408-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Apr 15 23:49:30.448: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7849 apply -f -' Apr 15 23:49:30.726: INFO: stderr: "" Apr 15 23:49:30.726: INFO: stdout: "e2e-test-crd-publish-openapi-408-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 15 23:49:30.726: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7849 delete e2e-test-crd-publish-openapi-408-crds test-cr' Apr 15 23:49:30.825: INFO: stderr: "" Apr 15 23:49:30.825: INFO: stdout: "e2e-test-crd-publish-openapi-408-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Apr 15 23:49:30.825: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-408-crds' Apr 15 23:49:31.067: INFO: stderr: "" Apr 15 23:49:31.067: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-408-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:49:32.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7849" for this suite. 
• [SLOW TEST:9.567 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":38,"skipped":559,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:49:32.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 15 23:49:33.425: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 15 23:49:35.435: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722591373, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722591373, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722591373, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722591373, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 15 23:49:38.478: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 15 23:49:38.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4325-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:49:39.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6180" for this suite. STEP: Destroying namespace "webhook-6180-markers" for this suite. 
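Editor's note: the mutating-webhook test above registers a webhook against both served versions of a generated CRD, then flips the storage version from v1 to v2. A hedged sketch of such a registration (service name, path, and resource plural are taken or inferred from the log; the path is illustrative):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutate-custom-resource           # illustrative name
webhooks:
- name: mutate-crs.webhook.example.com
  failurePolicy: Fail
  rules:
  - apiGroups: ["webhook.example.com"]   # group of the test's generated CRD
    apiVersions: ["v1", "v2"]            # mutate requests for both served versions
    operations: ["CREATE", "UPDATE"]
    resources: ["e2e-test-webhook-4325-crds"]
  clientConfig:
    service:
      namespace: webhook-6180            # namespace from the run above
      name: e2e-test-webhook
      path: /mutating-custom-resource    # illustrative path
  sideEffects: None
  admissionReviewVersions: ["v1"]
```

Patching the CRD so v2 becomes the storage version, as the log does, checks that mutation still applies regardless of which version the object is persisted in.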
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.808 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":39,"skipped":565,"failed":0} SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:49:39.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-f6ba144a-90d7-4cbb-b32a-72f66f2a1c3e STEP: Creating a pod to test consume secrets Apr 15 23:49:39.873: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6771c9b6-d4a2-4696-a66b-c307802366e2" in namespace "projected-7564" to be "Succeeded or Failed" Apr 15 
23:49:39.894: INFO: Pod "pod-projected-secrets-6771c9b6-d4a2-4696-a66b-c307802366e2": Phase="Pending", Reason="", readiness=false. Elapsed: 20.835014ms Apr 15 23:49:41.898: INFO: Pod "pod-projected-secrets-6771c9b6-d4a2-4696-a66b-c307802366e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024794884s Apr 15 23:49:43.902: INFO: Pod "pod-projected-secrets-6771c9b6-d4a2-4696-a66b-c307802366e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028931067s STEP: Saw pod success Apr 15 23:49:43.902: INFO: Pod "pod-projected-secrets-6771c9b6-d4a2-4696-a66b-c307802366e2" satisfied condition "Succeeded or Failed" Apr 15 23:49:43.905: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-6771c9b6-d4a2-4696-a66b-c307802366e2 container projected-secret-volume-test: STEP: delete the pod Apr 15 23:49:43.939: INFO: Waiting for pod pod-projected-secrets-6771c9b6-d4a2-4696-a66b-c307802366e2 to disappear Apr 15 23:49:43.946: INFO: Pod pod-projected-secrets-6771c9b6-d4a2-4696-a66b-c307802366e2 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:49:43.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7564" for this suite. 
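Editor's note: "with mappings and Item Mode set" refers to a projected secret volume where individual keys are remapped to new paths with explicit file permissions. A minimal sketch (image, key names, and paths are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox                       # assumption; the e2e suite uses its own test image
    command: ["sh", "-c", "cat /etc/projected-secret/new-path-data-1"]
    volumeMounts:
    - name: projected-secret
      mountPath: /etc/projected-secret
      readOnly: true
  volumes:
  - name: projected-secret
    projected:
      sources:
      - secret:
          name: projected-secret-test-map   # the Secret created in the STEP above
          items:
          - key: data-1                  # illustrative key
            path: new-path-data-1        # "mappings": key projected under a new path
            mode: 0400                   # "Item Mode set": per-item file permissions
```

The `[LinuxOnly]` tag on the test reflects that POSIX file modes on the projected files are only meaningful on Linux nodes.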
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":40,"skipped":568,"failed":0} ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:49:43.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 15 23:49:44.026: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f50d9c8f-e7ba-4bf7-a7f7-b0d7ecc4814f" in namespace "projected-4606" to be "Succeeded or Failed" Apr 15 23:49:44.030: INFO: Pod "downwardapi-volume-f50d9c8f-e7ba-4bf7-a7f7-b0d7ecc4814f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.425484ms Apr 15 23:49:46.108: INFO: Pod "downwardapi-volume-f50d9c8f-e7ba-4bf7-a7f7-b0d7ecc4814f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082269287s Apr 15 23:49:48.115: INFO: Pod "downwardapi-volume-f50d9c8f-e7ba-4bf7-a7f7-b0d7ecc4814f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.088699066s STEP: Saw pod success Apr 15 23:49:48.115: INFO: Pod "downwardapi-volume-f50d9c8f-e7ba-4bf7-a7f7-b0d7ecc4814f" satisfied condition "Succeeded or Failed" Apr 15 23:49:48.117: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-f50d9c8f-e7ba-4bf7-a7f7-b0d7ecc4814f container client-container: STEP: delete the pod Apr 15 23:49:48.135: INFO: Waiting for pod downwardapi-volume-f50d9c8f-e7ba-4bf7-a7f7-b0d7ecc4814f to disappear Apr 15 23:49:48.294: INFO: Pod downwardapi-volume-f50d9c8f-e7ba-4bf7-a7f7-b0d7ecc4814f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:49:48.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4606" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":41,"skipped":568,"failed":0} SSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:49:48.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:50:02.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3731" for this suite. • [SLOW TEST:14.064 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":42,"skipped":572,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:50:02.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 15 23:50:02.569: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e55409b5-6b59-4fd6-b36a-01d0e9f69ae7" in namespace "projected-673" to be "Succeeded or Failed" Apr 15 23:50:02.582: INFO: Pod 
"downwardapi-volume-e55409b5-6b59-4fd6-b36a-01d0e9f69ae7": Phase="Pending", Reason="", readiness=false. Elapsed: 13.114201ms Apr 15 23:50:04.585: INFO: Pod "downwardapi-volume-e55409b5-6b59-4fd6-b36a-01d0e9f69ae7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016742521s Apr 15 23:50:06.590: INFO: Pod "downwardapi-volume-e55409b5-6b59-4fd6-b36a-01d0e9f69ae7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021090823s STEP: Saw pod success Apr 15 23:50:06.590: INFO: Pod "downwardapi-volume-e55409b5-6b59-4fd6-b36a-01d0e9f69ae7" satisfied condition "Succeeded or Failed" Apr 15 23:50:06.594: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-e55409b5-6b59-4fd6-b36a-01d0e9f69ae7 container client-container: STEP: delete the pod Apr 15 23:50:06.612: INFO: Waiting for pod downwardapi-volume-e55409b5-6b59-4fd6-b36a-01d0e9f69ae7 to disappear Apr 15 23:50:06.617: INFO: Pod downwardapi-volume-e55409b5-6b59-4fd6-b36a-01d0e9f69ae7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:50:06.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-673" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":43,"skipped":585,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:50:06.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 15 23:50:06.702: INFO: (0) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 5.797934ms) Apr 15 23:50:06.706: INFO: (1) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 4.010803ms) Apr 15 23:50:06.709: INFO: (2) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.161908ms) Apr 15 23:50:06.713: INFO: (3) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.618871ms) Apr 15 23:50:06.717: INFO: (4) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 4.294259ms) Apr 15 23:50:06.720: INFO: (5) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.149617ms) Apr 15 23:50:06.724: INFO: (6) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.480183ms) Apr 15 23:50:06.728: INFO: (7) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.634825ms) Apr 15 23:50:06.731: INFO: (8) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.62222ms) Apr 15 23:50:06.734: INFO: (9) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.134729ms) Apr 15 23:50:06.737: INFO: (10) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.660229ms) Apr 15 23:50:06.740: INFO: (11) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.728143ms) Apr 15 23:50:06.743: INFO: (12) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.779455ms) Apr 15 23:50:06.746: INFO: (13) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.817219ms) Apr 15 23:50:06.749: INFO: (14) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.419373ms) Apr 15 23:50:06.752: INFO: (15) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.744761ms) Apr 15 23:50:06.755: INFO: (16) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.834543ms) Apr 15 23:50:06.757: INFO: (17) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.727163ms) Apr 15 23:50:06.760: INFO: (18) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.67242ms) Apr 15 23:50:06.763: INFO: (19) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.471218ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:50:06.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-9108" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":275,"completed":44,"skipped":618,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:50:06.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 15 23:50:06.840: INFO: Waiting up to 5m0s for pod "downward-api-4ec720a0-8bca-47ac-9c82-4c905440fa7a" in namespace "downward-api-1997" to be "Succeeded or Failed" Apr 15 23:50:06.860: INFO: Pod "downward-api-4ec720a0-8bca-47ac-9c82-4c905440fa7a": Phase="Pending", Reason="", readiness=false. Elapsed: 19.215149ms Apr 15 23:50:08.928: INFO: Pod "downward-api-4ec720a0-8bca-47ac-9c82-4c905440fa7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088046176s Apr 15 23:50:10.932: INFO: Pod "downward-api-4ec720a0-8bca-47ac-9c82-4c905440fa7a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.092028763s STEP: Saw pod success Apr 15 23:50:10.932: INFO: Pod "downward-api-4ec720a0-8bca-47ac-9c82-4c905440fa7a" satisfied condition "Succeeded or Failed" Apr 15 23:50:10.936: INFO: Trying to get logs from node latest-worker pod downward-api-4ec720a0-8bca-47ac-9c82-4c905440fa7a container dapi-container: STEP: delete the pod Apr 15 23:50:10.956: INFO: Waiting for pod downward-api-4ec720a0-8bca-47ac-9c82-4c905440fa7a to disappear Apr 15 23:50:10.960: INFO: Pod downward-api-4ec720a0-8bca-47ac-9c82-4c905440fa7a no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:50:10.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1997" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":45,"skipped":640,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:50:10.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0415 23:50:51.474861 7 
metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 15 23:50:51.474: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:50:51.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9874" for this suite. 
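Each completed spec in this log emits an inline JSON progress record of the form `{"msg":"PASSED ...","total":275,"completed":46,"skipped":641,"failed":0}`. A sketch for recovering the latest suite progress from the raw text (regex and function name are assumptions of mine, not framework APIs):

```python
import json
import re

# Progress records are embedded inline in the log, e.g.:
# {"msg":"PASSED [sig-apps] Job ...","total":275,"completed":42,"skipped":572,"failed":0}
RECORD_RE = re.compile(r'\{"msg":"(?:PASSED|FAILED|Test Suite)[^}]*\}')

def latest_progress(log_text):
    """Return the last progress record as a dict, or None if none is found."""
    records = [json.loads(m.group(0)) for m in RECORD_RE.finditer(log_text)]
    return records[-1] if records else None
```

This relies on the records never containing a literal `}` inside `msg`, which holds for the spec names in this run.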
• [SLOW TEST:40.516 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":46,"skipped":641,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:50:51.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-c1133a65-9ad3-4b43-a755-38580a70633d STEP: Creating a pod to test consume secrets Apr 15 23:50:51.536: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dc976f88-bdbe-4610-933c-c50acceae875" in namespace "projected-271" to be "Succeeded or Failed" Apr 15 23:50:51.575: INFO: Pod "pod-projected-secrets-dc976f88-bdbe-4610-933c-c50acceae875": Phase="Pending", Reason="", readiness=false. Elapsed: 39.468035ms Apr 15 23:50:53.580: INFO: Pod "pod-projected-secrets-dc976f88-bdbe-4610-933c-c50acceae875": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.043515262s Apr 15 23:50:55.584: INFO: Pod "pod-projected-secrets-dc976f88-bdbe-4610-933c-c50acceae875": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047532884s STEP: Saw pod success Apr 15 23:50:55.584: INFO: Pod "pod-projected-secrets-dc976f88-bdbe-4610-933c-c50acceae875" satisfied condition "Succeeded or Failed" Apr 15 23:50:55.586: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-dc976f88-bdbe-4610-933c-c50acceae875 container projected-secret-volume-test: STEP: delete the pod Apr 15 23:50:55.602: INFO: Waiting for pod pod-projected-secrets-dc976f88-bdbe-4610-933c-c50acceae875 to disappear Apr 15 23:50:55.606: INFO: Pod pod-projected-secrets-dc976f88-bdbe-4610-933c-c50acceae875 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:50:55.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-271" for this suite. 
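The node-proxy spec earlier in this run logs twenty probe iterations, each ending in a status/latency pair such as `(200; 5.797934ms)`. A sketch for averaging those latencies from the log text (the regex and helper are mine, added for convenience):

```python
import re
from statistics import mean

# Each proxy iteration logs "(STATUS; LATENCY)", e.g. "(200; 5.797934ms)"
LAT_RE = re.compile(r'\((?P<code>\d{3}); (?P<ms>[\d.]+)ms\)')

def proxy_latency_ms(log_text):
    """Mean latency in milliseconds over all 2xx proxy probes in the text."""
    vals = [float(m.group("ms")) for m in LAT_RE.finditer(log_text)
            if m.group("code").startswith("2")]
    return mean(vals) if vals else None
```

For the twenty iterations above this would land around 3ms, matching the per-probe values logged.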
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":47,"skipped":672,"failed":0} SSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:50:55.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 15 23:50:55.662: INFO: Waiting up to 5m0s for pod "downward-api-878cbc6c-a6c1-4a09-8f65-134a439773d5" in namespace "downward-api-4786" to be "Succeeded or Failed" Apr 15 23:50:55.677: INFO: Pod "downward-api-878cbc6c-a6c1-4a09-8f65-134a439773d5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.956924ms Apr 15 23:50:57.713: INFO: Pod "downward-api-878cbc6c-a6c1-4a09-8f65-134a439773d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051115684s Apr 15 23:50:59.717: INFO: Pod "downward-api-878cbc6c-a6c1-4a09-8f65-134a439773d5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.054537731s STEP: Saw pod success Apr 15 23:50:59.717: INFO: Pod "downward-api-878cbc6c-a6c1-4a09-8f65-134a439773d5" satisfied condition "Succeeded or Failed" Apr 15 23:50:59.720: INFO: Trying to get logs from node latest-worker pod downward-api-878cbc6c-a6c1-4a09-8f65-134a439773d5 container dapi-container: STEP: delete the pod Apr 15 23:50:59.875: INFO: Waiting for pod downward-api-878cbc6c-a6c1-4a09-8f65-134a439773d5 to disappear Apr 15 23:50:59.902: INFO: Pod downward-api-878cbc6c-a6c1-4a09-8f65-134a439773d5 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:50:59.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4786" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":48,"skipped":683,"failed":0} SSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:50:59.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override arguments Apr 15 23:51:00.327: INFO: Waiting up to 5m0s for pod "client-containers-beaf5abf-efb6-46cc-bc42-0dff3fb5eac5" in namespace 
"containers-246" to be "Succeeded or Failed" Apr 15 23:51:00.387: INFO: Pod "client-containers-beaf5abf-efb6-46cc-bc42-0dff3fb5eac5": Phase="Pending", Reason="", readiness=false. Elapsed: 59.642262ms Apr 15 23:51:02.391: INFO: Pod "client-containers-beaf5abf-efb6-46cc-bc42-0dff3fb5eac5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063739007s Apr 15 23:51:04.396: INFO: Pod "client-containers-beaf5abf-efb6-46cc-bc42-0dff3fb5eac5": Phase="Running", Reason="", readiness=true. Elapsed: 4.068518298s Apr 15 23:51:06.400: INFO: Pod "client-containers-beaf5abf-efb6-46cc-bc42-0dff3fb5eac5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.072411923s STEP: Saw pod success Apr 15 23:51:06.400: INFO: Pod "client-containers-beaf5abf-efb6-46cc-bc42-0dff3fb5eac5" satisfied condition "Succeeded or Failed" Apr 15 23:51:06.402: INFO: Trying to get logs from node latest-worker pod client-containers-beaf5abf-efb6-46cc-bc42-0dff3fb5eac5 container test-container: STEP: delete the pod Apr 15 23:51:06.430: INFO: Waiting for pod client-containers-beaf5abf-efb6-46cc-bc42-0dff3fb5eac5 to disappear Apr 15 23:51:06.456: INFO: Pod client-containers-beaf5abf-efb6-46cc-bc42-0dff3fb5eac5 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:51:06.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-246" for this suite. 
• [SLOW TEST:6.553 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":49,"skipped":688,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:51:06.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-8ab810fa-062d-4477-a888-3300311c25c8 STEP: Creating a pod to test consume configMaps Apr 15 23:51:06.531: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b52e78aa-4188-4175-abb7-248b05c32969" in namespace "projected-8493" to be "Succeeded or Failed" Apr 15 23:51:06.534: INFO: Pod "pod-projected-configmaps-b52e78aa-4188-4175-abb7-248b05c32969": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.159858ms Apr 15 23:51:08.881: INFO: Pod "pod-projected-configmaps-b52e78aa-4188-4175-abb7-248b05c32969": Phase="Pending", Reason="", readiness=false. Elapsed: 2.349213382s Apr 15 23:51:10.885: INFO: Pod "pod-projected-configmaps-b52e78aa-4188-4175-abb7-248b05c32969": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.353943894s STEP: Saw pod success Apr 15 23:51:10.885: INFO: Pod "pod-projected-configmaps-b52e78aa-4188-4175-abb7-248b05c32969" satisfied condition "Succeeded or Failed" Apr 15 23:51:10.888: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-b52e78aa-4188-4175-abb7-248b05c32969 container projected-configmap-volume-test: STEP: delete the pod Apr 15 23:51:10.984: INFO: Waiting for pod pod-projected-configmaps-b52e78aa-4188-4175-abb7-248b05c32969 to disappear Apr 15 23:51:10.990: INFO: Pod pod-projected-configmaps-b52e78aa-4188-4175-abb7-248b05c32969 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:51:10.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8493" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":50,"skipped":690,"failed":0} ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:51:10.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 15 23:51:15.318: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:51:15.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8705" for this suite. 
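Ginkgo flags specs that exceed its slow-spec threshold (5 seconds by default) with banners like `• [SLOW TEST:14.064 seconds]`, several of which appear in this run. A sketch for ranking them from the log text (names are my own, not Ginkgo's):

```python
import re

# Ginkgo marks slow specs with, e.g.:
#   • [SLOW TEST:14.064 seconds] [sig-apps] Job
SLOW_RE = re.compile(r'\[SLOW TEST:(?P<secs>[\d.]+) seconds\]\s*(?P<suite>\[[^\]]+\])')

def slow_tests(log_text):
    """Return (seconds, suite-tag) pairs, slowest first."""
    found = [(float(m.group("secs")), m.group("suite"))
             for m in SLOW_RE.finditer(log_text)]
    return sorted(found, reverse=True)
```

Applied to this section it would surface the garbage-collector orphan spec (40.5s) as the slowest, ahead of the Job (14.1s), webhook (6.8s), and Docker Containers (6.6s) specs.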
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":51,"skipped":690,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:51:15.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Apr 15 23:51:15.443: INFO: Created pod &Pod{ObjectMeta:{dns-245 dns-245 /api/v1/namespaces/dns-245/pods/dns-245 1bf4639d-e56b-4063-8d6a-676a9711c861 8394702 0 2020-04-15 23:51:15 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrqwc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrqwc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrqwc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]
LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 15 23:51:15.465: INFO: The status of Pod dns-245 is Pending, waiting for it to be Running (with Ready = true) Apr 15 23:51:17.468: INFO: The status of Pod dns-245 is Pending, waiting for it to be Running (with Ready = true) Apr 15 23:51:19.469: INFO: The status of Pod dns-245 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... 
Apr 15 23:51:19.469: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-245 PodName:dns-245 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 15 23:51:19.469: INFO: >>> kubeConfig: /root/.kube/config I0415 23:51:19.501745 7 log.go:172] (0xc002ce4e70) (0xc001ae8320) Create stream I0415 23:51:19.501776 7 log.go:172] (0xc002ce4e70) (0xc001ae8320) Stream added, broadcasting: 1 I0415 23:51:19.503954 7 log.go:172] (0xc002ce4e70) Reply frame received for 1 I0415 23:51:19.504018 7 log.go:172] (0xc002ce4e70) (0xc0011463c0) Create stream I0415 23:51:19.504061 7 log.go:172] (0xc002ce4e70) (0xc0011463c0) Stream added, broadcasting: 3 I0415 23:51:19.505431 7 log.go:172] (0xc002ce4e70) Reply frame received for 3 I0415 23:51:19.505461 7 log.go:172] (0xc002ce4e70) (0xc001ae83c0) Create stream I0415 23:51:19.505485 7 log.go:172] (0xc002ce4e70) (0xc001ae83c0) Stream added, broadcasting: 5 I0415 23:51:19.506508 7 log.go:172] (0xc002ce4e70) Reply frame received for 5 I0415 23:51:19.584094 7 log.go:172] (0xc002ce4e70) Data frame received for 3 I0415 23:51:19.584124 7 log.go:172] (0xc0011463c0) (3) Data frame handling I0415 23:51:19.584143 7 log.go:172] (0xc0011463c0) (3) Data frame sent I0415 23:51:19.585412 7 log.go:172] (0xc002ce4e70) Data frame received for 3 I0415 23:51:19.585443 7 log.go:172] (0xc0011463c0) (3) Data frame handling I0415 23:51:19.585825 7 log.go:172] (0xc002ce4e70) Data frame received for 5 I0415 23:51:19.585936 7 log.go:172] (0xc001ae83c0) (5) Data frame handling I0415 23:51:19.587532 7 log.go:172] (0xc002ce4e70) Data frame received for 1 I0415 23:51:19.587547 7 log.go:172] (0xc001ae8320) (1) Data frame handling I0415 23:51:19.587558 7 log.go:172] (0xc001ae8320) (1) Data frame sent I0415 23:51:19.587569 7 log.go:172] (0xc002ce4e70) (0xc001ae8320) Stream removed, broadcasting: 1 I0415 23:51:19.587740 7 log.go:172] (0xc002ce4e70) Go away received I0415 23:51:19.587827 7 log.go:172] (0xc002ce4e70) 
(0xc001ae8320) Stream removed, broadcasting: 1 I0415 23:51:19.587864 7 log.go:172] (0xc002ce4e70) (0xc0011463c0) Stream removed, broadcasting: 3 I0415 23:51:19.587889 7 log.go:172] (0xc002ce4e70) (0xc001ae83c0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Apr 15 23:51:19.587: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-245 PodName:dns-245 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 15 23:51:19.587: INFO: >>> kubeConfig: /root/.kube/config I0415 23:51:19.620527 7 log.go:172] (0xc004146a50) (0xc00229b680) Create stream I0415 23:51:19.620550 7 log.go:172] (0xc004146a50) (0xc00229b680) Stream added, broadcasting: 1 I0415 23:51:19.623013 7 log.go:172] (0xc004146a50) Reply frame received for 1 I0415 23:51:19.623067 7 log.go:172] (0xc004146a50) (0xc001ae8640) Create stream I0415 23:51:19.623083 7 log.go:172] (0xc004146a50) (0xc001ae8640) Stream added, broadcasting: 3 I0415 23:51:19.624081 7 log.go:172] (0xc004146a50) Reply frame received for 3 I0415 23:51:19.624120 7 log.go:172] (0xc004146a50) (0xc00229b720) Create stream I0415 23:51:19.624135 7 log.go:172] (0xc004146a50) (0xc00229b720) Stream added, broadcasting: 5 I0415 23:51:19.625370 7 log.go:172] (0xc004146a50) Reply frame received for 5 I0415 23:51:19.684642 7 log.go:172] (0xc004146a50) Data frame received for 3 I0415 23:51:19.684672 7 log.go:172] (0xc001ae8640) (3) Data frame handling I0415 23:51:19.684695 7 log.go:172] (0xc001ae8640) (3) Data frame sent I0415 23:51:19.685804 7 log.go:172] (0xc004146a50) Data frame received for 3 I0415 23:51:19.685853 7 log.go:172] (0xc001ae8640) (3) Data frame handling I0415 23:51:19.685878 7 log.go:172] (0xc004146a50) Data frame received for 5 I0415 23:51:19.685914 7 log.go:172] (0xc00229b720) (5) Data frame handling I0415 23:51:19.687695 7 log.go:172] (0xc004146a50) Data frame received for 1 I0415 23:51:19.687717 7 log.go:172] (0xc00229b680) (1) Data 
frame handling I0415 23:51:19.687752 7 log.go:172] (0xc00229b680) (1) Data frame sent I0415 23:51:19.687941 7 log.go:172] (0xc004146a50) (0xc00229b680) Stream removed, broadcasting: 1 I0415 23:51:19.688049 7 log.go:172] (0xc004146a50) Go away received I0415 23:51:19.688189 7 log.go:172] (0xc004146a50) (0xc00229b680) Stream removed, broadcasting: 1 I0415 23:51:19.688225 7 log.go:172] (0xc004146a50) (0xc001ae8640) Stream removed, broadcasting: 3 I0415 23:51:19.688252 7 log.go:172] (0xc004146a50) (0xc00229b720) Stream removed, broadcasting: 5 Apr 15 23:51:19.688: INFO: Deleting pod dns-245... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:51:19.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-245" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":52,"skipped":703,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:51:19.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 15 23:51:19.806: 
INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:51:26.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2700" for this suite. • [SLOW TEST:6.374 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":275,"completed":53,"skipped":763,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:51:26.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Apr 15 23:51:26.165: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Apr 15 23:51:26.185: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Apr 15 23:51:26.185: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Apr 15 23:51:26.216: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Apr 15 23:51:26.216: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Apr 15 23:51:26.224: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} 
ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Apr 15 23:51:26.224: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Apr 15 23:51:33.482: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:51:33.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-6020" for this suite. • [SLOW TEST:7.427 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance]","total":275,"completed":54,"skipped":779,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:51:33.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 15 23:51:33.623: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-73b87fb7-9bca-4fc9-a1bf-4c9c16c36d65" in namespace "security-context-test-4228" to be "Succeeded or Failed" Apr 15 23:51:33.666: INFO: Pod "busybox-privileged-false-73b87fb7-9bca-4fc9-a1bf-4c9c16c36d65": Phase="Pending", Reason="", readiness=false. Elapsed: 42.650606ms Apr 15 23:51:35.900: INFO: Pod "busybox-privileged-false-73b87fb7-9bca-4fc9-a1bf-4c9c16c36d65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.276720182s Apr 15 23:51:39.068: INFO: Pod "busybox-privileged-false-73b87fb7-9bca-4fc9-a1bf-4c9c16c36d65": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 5.444916705s Apr 15 23:51:39.068: INFO: Pod "busybox-privileged-false-73b87fb7-9bca-4fc9-a1bf-4c9c16c36d65" satisfied condition "Succeeded or Failed" Apr 15 23:51:39.113: INFO: Got logs for pod "busybox-privileged-false-73b87fb7-9bca-4fc9-a1bf-4c9c16c36d65": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:51:39.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4228" for this suite. • [SLOW TEST:5.683 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 When creating a pod with privileged /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":55,"skipped":787,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:51:39.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 15 23:51:43.932: INFO: Successfully updated pod "pod-update-777114c6-1788-4817-9ce2-9bb49903b3d4" STEP: verifying the updated pod is in kubernetes Apr 15 23:51:43.940: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:51:43.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4114" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":56,"skipped":828,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:51:43.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Apr 15 23:51:48.540: INFO: Successfully updated 
pod "annotationupdate0342fa7c-6243-4512-a694-dba758c6053c" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:51:50.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-644" for this suite. • [SLOW TEST:6.626 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":57,"skipped":854,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:51:50.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
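The service created inside the throwaway namespace above would be a minimal manifest along these lines (a sketch with hypothetical names and ports; the log does not show the actual object). Because a Service is a namespaced resource, deleting its namespace garbage-collects it together with everything else in that namespace, which is the behavior this test asserts:

```yaml
# Hypothetical minimal Service in the test namespace. Deleting the
# namespace cascades to this object; recreating the namespace does not
# bring it back.
apiVersion: v1
kind: Service
metadata:
  name: test-service         # hypothetical name
  namespace: nsdeletetest    # hypothetical test namespace
spec:
  selector:
    app: test                # hypothetical selector
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```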
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:51:56.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3511" for this suite. STEP: Destroying namespace "nsdeletetest-5904" for this suite. Apr 15 23:51:56.832: INFO: Namespace nsdeletetest-5904 was already deleted STEP: Destroying namespace "nsdeletetest-1753" for this suite. • [SLOW TEST:6.261 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":58,"skipped":874,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:51:56.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-540 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 15 23:51:56.895: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 15 23:51:56.927: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 15 23:51:58.931: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 15 23:52:00.990: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 15 23:52:02.931: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 15 23:52:04.932: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 15 23:52:06.931: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 15 23:52:08.931: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 15 23:52:10.931: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 15 23:52:12.931: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 15 23:52:14.931: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 15 23:52:16.932: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 15 23:52:18.932: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 15 23:52:18.937: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 15 23:52:23.006: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.218:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-540 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 15 23:52:23.006: INFO: >>> kubeConfig: /root/.kube/config I0415 23:52:23.036298 7 log.go:172] (0xc0027f76b0) (0xc001ae86e0) 
Create stream I0415 23:52:23.036326 7 log.go:172] (0xc0027f76b0) (0xc001ae86e0) Stream added, broadcasting: 1 I0415 23:52:23.039658 7 log.go:172] (0xc0027f76b0) Reply frame received for 1 I0415 23:52:23.039711 7 log.go:172] (0xc0027f76b0) (0xc001ae88c0) Create stream I0415 23:52:23.039731 7 log.go:172] (0xc0027f76b0) (0xc001ae88c0) Stream added, broadcasting: 3 I0415 23:52:23.040645 7 log.go:172] (0xc0027f76b0) Reply frame received for 3 I0415 23:52:23.040711 7 log.go:172] (0xc0027f76b0) (0xc001ee61e0) Create stream I0415 23:52:23.040739 7 log.go:172] (0xc0027f76b0) (0xc001ee61e0) Stream added, broadcasting: 5 I0415 23:52:23.041783 7 log.go:172] (0xc0027f76b0) Reply frame received for 5 I0415 23:52:23.124518 7 log.go:172] (0xc0027f76b0) Data frame received for 5 I0415 23:52:23.124576 7 log.go:172] (0xc001ee61e0) (5) Data frame handling I0415 23:52:23.124618 7 log.go:172] (0xc0027f76b0) Data frame received for 3 I0415 23:52:23.124643 7 log.go:172] (0xc001ae88c0) (3) Data frame handling I0415 23:52:23.124669 7 log.go:172] (0xc001ae88c0) (3) Data frame sent I0415 23:52:23.124685 7 log.go:172] (0xc0027f76b0) Data frame received for 3 I0415 23:52:23.124706 7 log.go:172] (0xc001ae88c0) (3) Data frame handling I0415 23:52:23.126174 7 log.go:172] (0xc0027f76b0) Data frame received for 1 I0415 23:52:23.126207 7 log.go:172] (0xc001ae86e0) (1) Data frame handling I0415 23:52:23.126227 7 log.go:172] (0xc001ae86e0) (1) Data frame sent I0415 23:52:23.126246 7 log.go:172] (0xc0027f76b0) (0xc001ae86e0) Stream removed, broadcasting: 1 I0415 23:52:23.126313 7 log.go:172] (0xc0027f76b0) Go away received I0415 23:52:23.126386 7 log.go:172] (0xc0027f76b0) (0xc001ae86e0) Stream removed, broadcasting: 1 I0415 23:52:23.126438 7 log.go:172] (0xc0027f76b0) (0xc001ae88c0) Stream removed, broadcasting: 3 I0415 23:52:23.126467 7 log.go:172] (0xc0027f76b0) (0xc001ee61e0) Stream removed, broadcasting: 5 Apr 15 23:52:23.126: INFO: Found all expected endpoints: [netserver-0] Apr 15 23:52:23.130: 
INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.197:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-540 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 15 23:52:23.130: INFO: >>> kubeConfig: /root/.kube/config I0415 23:52:23.159540 7 log.go:172] (0xc002b360b0) (0xc001ee6320) Create stream I0415 23:52:23.159560 7 log.go:172] (0xc002b360b0) (0xc001ee6320) Stream added, broadcasting: 1 I0415 23:52:23.161741 7 log.go:172] (0xc002b360b0) Reply frame received for 1 I0415 23:52:23.161770 7 log.go:172] (0xc002b360b0) (0xc001dce000) Create stream I0415 23:52:23.161786 7 log.go:172] (0xc002b360b0) (0xc001dce000) Stream added, broadcasting: 3 I0415 23:52:23.162994 7 log.go:172] (0xc002b360b0) Reply frame received for 3 I0415 23:52:23.163054 7 log.go:172] (0xc002b360b0) (0xc001ee6460) Create stream I0415 23:52:23.163071 7 log.go:172] (0xc002b360b0) (0xc001ee6460) Stream added, broadcasting: 5 I0415 23:52:23.163852 7 log.go:172] (0xc002b360b0) Reply frame received for 5 I0415 23:52:23.238737 7 log.go:172] (0xc002b360b0) Data frame received for 3 I0415 23:52:23.238757 7 log.go:172] (0xc001dce000) (3) Data frame handling I0415 23:52:23.238769 7 log.go:172] (0xc001dce000) (3) Data frame sent I0415 23:52:23.238777 7 log.go:172] (0xc002b360b0) Data frame received for 3 I0415 23:52:23.238782 7 log.go:172] (0xc001dce000) (3) Data frame handling I0415 23:52:23.238974 7 log.go:172] (0xc002b360b0) Data frame received for 5 I0415 23:52:23.239003 7 log.go:172] (0xc001ee6460) (5) Data frame handling I0415 23:52:23.240513 7 log.go:172] (0xc002b360b0) Data frame received for 1 I0415 23:52:23.240538 7 log.go:172] (0xc001ee6320) (1) Data frame handling I0415 23:52:23.240558 7 log.go:172] (0xc001ee6320) (1) Data frame sent I0415 23:52:23.240574 7 log.go:172] (0xc002b360b0) (0xc001ee6320) Stream removed, broadcasting: 1 I0415 23:52:23.240673 7 
log.go:172] (0xc002b360b0) (0xc001ee6320) Stream removed, broadcasting: 1 I0415 23:52:23.240688 7 log.go:172] (0xc002b360b0) (0xc001dce000) Stream removed, broadcasting: 3 I0415 23:52:23.240738 7 log.go:172] (0xc002b360b0) Go away received I0415 23:52:23.240918 7 log.go:172] (0xc002b360b0) (0xc001ee6460) Stream removed, broadcasting: 5 Apr 15 23:52:23.240: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:52:23.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-540" for this suite. • [SLOW TEST:26.414 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":59,"skipped":902,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:52:23.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service nodeport-test with type=NodePort in namespace services-6706 STEP: creating replication controller nodeport-test in namespace services-6706 I0415 23:52:23.348365 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-6706, replica count: 2 I0415 23:52:26.398788 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0415 23:52:29.399001 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 15 23:52:29.399: INFO: Creating new exec pod Apr 15 23:52:34.545: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-6706 execpodcqw6r -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Apr 15 23:52:34.790: INFO: stderr: "I0415 23:52:34.688902 628 log.go:172] (0xc00043ea50) (0xc0009521e0) Create stream\nI0415 23:52:34.688965 628 log.go:172] (0xc00043ea50) (0xc0009521e0) Stream added, broadcasting: 1\nI0415 23:52:34.691831 628 log.go:172] (0xc00043ea50) Reply frame received for 1\nI0415 23:52:34.691881 628 log.go:172] (0xc00043ea50) (0xc0009bc000) Create stream\nI0415 23:52:34.691898 628 log.go:172] (0xc00043ea50) (0xc0009bc000) Stream added, broadcasting: 3\nI0415 23:52:34.692960 628 log.go:172] (0xc00043ea50) Reply frame received for 3\nI0415 23:52:34.693007 628 log.go:172] (0xc00043ea50) (0xc000952280) Create stream\nI0415 23:52:34.693021 628 log.go:172] (0xc00043ea50) (0xc000952280) Stream added, broadcasting: 5\nI0415 23:52:34.694291 628 log.go:172] 
(0xc00043ea50) Reply frame received for 5\nI0415 23:52:34.782021 628 log.go:172] (0xc00043ea50) Data frame received for 5\nI0415 23:52:34.782040 628 log.go:172] (0xc000952280) (5) Data frame handling\nI0415 23:52:34.782057 628 log.go:172] (0xc000952280) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0415 23:52:34.782455 628 log.go:172] (0xc00043ea50) Data frame received for 5\nI0415 23:52:34.782487 628 log.go:172] (0xc000952280) (5) Data frame handling\nI0415 23:52:34.782513 628 log.go:172] (0xc000952280) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0415 23:52:34.782918 628 log.go:172] (0xc00043ea50) Data frame received for 5\nI0415 23:52:34.782949 628 log.go:172] (0xc000952280) (5) Data frame handling\nI0415 23:52:34.783056 628 log.go:172] (0xc00043ea50) Data frame received for 3\nI0415 23:52:34.783077 628 log.go:172] (0xc0009bc000) (3) Data frame handling\nI0415 23:52:34.785250 628 log.go:172] (0xc00043ea50) Data frame received for 1\nI0415 23:52:34.785276 628 log.go:172] (0xc0009521e0) (1) Data frame handling\nI0415 23:52:34.785286 628 log.go:172] (0xc0009521e0) (1) Data frame sent\nI0415 23:52:34.785299 628 log.go:172] (0xc00043ea50) (0xc0009521e0) Stream removed, broadcasting: 1\nI0415 23:52:34.785405 628 log.go:172] (0xc00043ea50) Go away received\nI0415 23:52:34.785539 628 log.go:172] (0xc00043ea50) (0xc0009521e0) Stream removed, broadcasting: 1\nI0415 23:52:34.785551 628 log.go:172] (0xc00043ea50) (0xc0009bc000) Stream removed, broadcasting: 3\nI0415 23:52:34.785558 628 log.go:172] (0xc00043ea50) (0xc000952280) Stream removed, broadcasting: 5\n" Apr 15 23:52:34.790: INFO: stdout: "" Apr 15 23:52:34.791: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-6706 execpodcqw6r -- /bin/sh -x -c nc -zv -t -w 2 10.96.46.133 80' Apr 15 23:52:34.989: INFO: stderr: "I0415 23:52:34.909456 650 log.go:172] (0xc000a266e0) (0xc0009b01e0) Create 
stream\nI0415 23:52:34.909599 650 log.go:172] (0xc000a266e0) (0xc0009b01e0) Stream added, broadcasting: 1\nI0415 23:52:34.922172 650 log.go:172] (0xc000a266e0) Reply frame received for 1\nI0415 23:52:34.922228 650 log.go:172] (0xc000a266e0) (0xc0009b0320) Create stream\nI0415 23:52:34.922242 650 log.go:172] (0xc000a266e0) (0xc0009b0320) Stream added, broadcasting: 3\nI0415 23:52:34.923181 650 log.go:172] (0xc000a266e0) Reply frame received for 3\nI0415 23:52:34.923220 650 log.go:172] (0xc000a266e0) (0xc0003d4960) Create stream\nI0415 23:52:34.923228 650 log.go:172] (0xc000a266e0) (0xc0003d4960) Stream added, broadcasting: 5\nI0415 23:52:34.924149 650 log.go:172] (0xc000a266e0) Reply frame received for 5\nI0415 23:52:34.983537 650 log.go:172] (0xc000a266e0) Data frame received for 3\nI0415 23:52:34.983583 650 log.go:172] (0xc0009b0320) (3) Data frame handling\nI0415 23:52:34.983619 650 log.go:172] (0xc000a266e0) Data frame received for 5\nI0415 23:52:34.983636 650 log.go:172] (0xc0003d4960) (5) Data frame handling\nI0415 23:52:34.983657 650 log.go:172] (0xc0003d4960) (5) Data frame sent\nI0415 23:52:34.983672 650 log.go:172] (0xc000a266e0) Data frame received for 5\nI0415 23:52:34.983681 650 log.go:172] (0xc0003d4960) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.46.133 80\nConnection to 10.96.46.133 80 port [tcp/http] succeeded!\nI0415 23:52:34.984914 650 log.go:172] (0xc000a266e0) Data frame received for 1\nI0415 23:52:34.984939 650 log.go:172] (0xc0009b01e0) (1) Data frame handling\nI0415 23:52:34.984963 650 log.go:172] (0xc0009b01e0) (1) Data frame sent\nI0415 23:52:34.984979 650 log.go:172] (0xc000a266e0) (0xc0009b01e0) Stream removed, broadcasting: 1\nI0415 23:52:34.985073 650 log.go:172] (0xc000a266e0) Go away received\nI0415 23:52:34.985550 650 log.go:172] (0xc000a266e0) (0xc0009b01e0) Stream removed, broadcasting: 1\nI0415 23:52:34.985572 650 log.go:172] (0xc000a266e0) (0xc0009b0320) Stream removed, broadcasting: 3\nI0415 23:52:34.985583 650 log.go:172] 
(0xc000a266e0) (0xc0003d4960) Stream removed, broadcasting: 5\n" Apr 15 23:52:34.990: INFO: stdout: "" Apr 15 23:52:34.990: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-6706 execpodcqw6r -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 32200' Apr 15 23:52:35.213: INFO: stderr: "I0415 23:52:35.132441 672 log.go:172] (0xc00003bd90) (0xc000671360) Create stream\nI0415 23:52:35.132501 672 log.go:172] (0xc00003bd90) (0xc000671360) Stream added, broadcasting: 1\nI0415 23:52:35.135040 672 log.go:172] (0xc00003bd90) Reply frame received for 1\nI0415 23:52:35.135088 672 log.go:172] (0xc00003bd90) (0xc000671540) Create stream\nI0415 23:52:35.135101 672 log.go:172] (0xc00003bd90) (0xc000671540) Stream added, broadcasting: 3\nI0415 23:52:35.136130 672 log.go:172] (0xc00003bd90) Reply frame received for 3\nI0415 23:52:35.136164 672 log.go:172] (0xc00003bd90) (0xc0006715e0) Create stream\nI0415 23:52:35.136174 672 log.go:172] (0xc00003bd90) (0xc0006715e0) Stream added, broadcasting: 5\nI0415 23:52:35.137091 672 log.go:172] (0xc00003bd90) Reply frame received for 5\nI0415 23:52:35.207188 672 log.go:172] (0xc00003bd90) Data frame received for 3\nI0415 23:52:35.207271 672 log.go:172] (0xc00003bd90) Data frame received for 5\nI0415 23:52:35.207329 672 log.go:172] (0xc0006715e0) (5) Data frame handling\nI0415 23:52:35.207352 672 log.go:172] (0xc0006715e0) (5) Data frame sent\nI0415 23:52:35.207366 672 log.go:172] (0xc00003bd90) Data frame received for 5\nI0415 23:52:35.207375 672 log.go:172] (0xc0006715e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 32200\nConnection to 172.17.0.13 32200 port [tcp/32200] succeeded!\nI0415 23:52:35.207414 672 log.go:172] (0xc000671540) (3) Data frame handling\nI0415 23:52:35.208995 672 log.go:172] (0xc00003bd90) Data frame received for 1\nI0415 23:52:35.209023 672 log.go:172] (0xc000671360) (1) Data frame handling\nI0415 23:52:35.209038 672 log.go:172] 
(0xc000671360) (1) Data frame sent\nI0415 23:52:35.209060 672 log.go:172] (0xc00003bd90) (0xc000671360) Stream removed, broadcasting: 1\nI0415 23:52:35.209071 672 log.go:172] (0xc00003bd90) Go away received\nI0415 23:52:35.209572 672 log.go:172] (0xc00003bd90) (0xc000671360) Stream removed, broadcasting: 1\nI0415 23:52:35.209590 672 log.go:172] (0xc00003bd90) (0xc000671540) Stream removed, broadcasting: 3\nI0415 23:52:35.209599 672 log.go:172] (0xc00003bd90) (0xc0006715e0) Stream removed, broadcasting: 5\n" Apr 15 23:52:35.213: INFO: stdout: "" Apr 15 23:52:35.213: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-6706 execpodcqw6r -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 32200' Apr 15 23:52:35.400: INFO: stderr: "I0415 23:52:35.335285 695 log.go:172] (0xc000a340b0) (0xc000ab8000) Create stream\nI0415 23:52:35.335339 695 log.go:172] (0xc000a340b0) (0xc000ab8000) Stream added, broadcasting: 1\nI0415 23:52:35.336912 695 log.go:172] (0xc000a340b0) Reply frame received for 1\nI0415 23:52:35.336944 695 log.go:172] (0xc000a340b0) (0xc000ab80a0) Create stream\nI0415 23:52:35.336953 695 log.go:172] (0xc000a340b0) (0xc000ab80a0) Stream added, broadcasting: 3\nI0415 23:52:35.338013 695 log.go:172] (0xc000a340b0) Reply frame received for 3\nI0415 23:52:35.338040 695 log.go:172] (0xc000a340b0) (0xc000ab8140) Create stream\nI0415 23:52:35.338058 695 log.go:172] (0xc000a340b0) (0xc000ab8140) Stream added, broadcasting: 5\nI0415 23:52:35.338944 695 log.go:172] (0xc000a340b0) Reply frame received for 5\nI0415 23:52:35.394684 695 log.go:172] (0xc000a340b0) Data frame received for 3\nI0415 23:52:35.394736 695 log.go:172] (0xc000ab80a0) (3) Data frame handling\nI0415 23:52:35.394761 695 log.go:172] (0xc000a340b0) Data frame received for 5\nI0415 23:52:35.394773 695 log.go:172] (0xc000ab8140) (5) Data frame handling\nI0415 23:52:35.394786 695 log.go:172] (0xc000ab8140) (5) Data frame sent\nI0415 
23:52:35.394803 695 log.go:172] (0xc000a340b0) Data frame received for 5\nI0415 23:52:35.394825 695 log.go:172] (0xc000ab8140) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 32200\nConnection to 172.17.0.12 32200 port [tcp/32200] succeeded!\nI0415 23:52:35.396367 695 log.go:172] (0xc000a340b0) Data frame received for 1\nI0415 23:52:35.396387 695 log.go:172] (0xc000ab8000) (1) Data frame handling\nI0415 23:52:35.396406 695 log.go:172] (0xc000ab8000) (1) Data frame sent\nI0415 23:52:35.396423 695 log.go:172] (0xc000a340b0) (0xc000ab8000) Stream removed, broadcasting: 1\nI0415 23:52:35.396465 695 log.go:172] (0xc000a340b0) Go away received\nI0415 23:52:35.396769 695 log.go:172] (0xc000a340b0) (0xc000ab8000) Stream removed, broadcasting: 1\nI0415 23:52:35.396788 695 log.go:172] (0xc000a340b0) (0xc000ab80a0) Stream removed, broadcasting: 3\nI0415 23:52:35.396799 695 log.go:172] (0xc000a340b0) (0xc000ab8140) Stream removed, broadcasting: 5\n" Apr 15 23:52:35.400: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:52:35.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6706" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:12.159 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":60,"skipped":915,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:52:35.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 15 23:52:35.498: INFO: Waiting up to 5m0s for pod "downwardapi-volume-439ddec4-7d4e-4cd5-9e9d-5b4bad2a6972" in namespace "projected-5112" to be "Succeeded or Failed" Apr 15 23:52:35.514: INFO: Pod "downwardapi-volume-439ddec4-7d4e-4cd5-9e9d-5b4bad2a6972": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.206376ms Apr 15 23:52:37.520: INFO: Pod "downwardapi-volume-439ddec4-7d4e-4cd5-9e9d-5b4bad2a6972": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021719634s Apr 15 23:52:39.524: INFO: Pod "downwardapi-volume-439ddec4-7d4e-4cd5-9e9d-5b4bad2a6972": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026045911s STEP: Saw pod success Apr 15 23:52:39.524: INFO: Pod "downwardapi-volume-439ddec4-7d4e-4cd5-9e9d-5b4bad2a6972" satisfied condition "Succeeded or Failed" Apr 15 23:52:39.527: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-439ddec4-7d4e-4cd5-9e9d-5b4bad2a6972 container client-container: STEP: delete the pod Apr 15 23:52:39.586: INFO: Waiting for pod downwardapi-volume-439ddec4-7d4e-4cd5-9e9d-5b4bad2a6972 to disappear Apr 15 23:52:39.597: INFO: Pod downwardapi-volume-439ddec4-7d4e-4cd5-9e9d-5b4bad2a6972 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:52:39.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5112" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":61,"skipped":957,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:52:39.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service nodeport-service with the type=NodePort in namespace services-8854 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-8854 STEP: creating replication controller externalsvc in namespace services-8854 I0415 23:52:39.771127 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-8854, replica count: 2 I0415 23:52:42.821607 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0415 23:52:45.821878 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Apr 15 
23:52:45.859: INFO: Creating new exec pod Apr 15 23:52:49.905: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-8854 execpodtv9hr -- /bin/sh -x -c nslookup nodeport-service' Apr 15 23:52:50.153: INFO: stderr: "I0415 23:52:50.037653 716 log.go:172] (0xc0009ab970) (0xc0009ee8c0) Create stream\nI0415 23:52:50.037738 716 log.go:172] (0xc0009ab970) (0xc0009ee8c0) Stream added, broadcasting: 1\nI0415 23:52:50.042756 716 log.go:172] (0xc0009ab970) Reply frame received for 1\nI0415 23:52:50.042835 716 log.go:172] (0xc0009ab970) (0xc00067b5e0) Create stream\nI0415 23:52:50.042863 716 log.go:172] (0xc0009ab970) (0xc00067b5e0) Stream added, broadcasting: 3\nI0415 23:52:50.044789 716 log.go:172] (0xc0009ab970) Reply frame received for 3\nI0415 23:52:50.044842 716 log.go:172] (0xc0009ab970) (0xc00056ea00) Create stream\nI0415 23:52:50.044856 716 log.go:172] (0xc0009ab970) (0xc00056ea00) Stream added, broadcasting: 5\nI0415 23:52:50.046173 716 log.go:172] (0xc0009ab970) Reply frame received for 5\nI0415 23:52:50.143961 716 log.go:172] (0xc0009ab970) Data frame received for 5\nI0415 23:52:50.143986 716 log.go:172] (0xc00056ea00) (5) Data frame handling\nI0415 23:52:50.144003 716 log.go:172] (0xc00056ea00) (5) Data frame sent\n+ nslookup nodeport-service\nI0415 23:52:50.148021 716 log.go:172] (0xc0009ab970) Data frame received for 3\nI0415 23:52:50.148043 716 log.go:172] (0xc00067b5e0) (3) Data frame handling\nI0415 23:52:50.148064 716 log.go:172] (0xc00067b5e0) (3) Data frame sent\nI0415 23:52:50.148574 716 log.go:172] (0xc0009ab970) Data frame received for 3\nI0415 23:52:50.148588 716 log.go:172] (0xc00067b5e0) (3) Data frame handling\nI0415 23:52:50.148603 716 log.go:172] (0xc00067b5e0) (3) Data frame sent\nI0415 23:52:50.148830 716 log.go:172] (0xc0009ab970) Data frame received for 5\nI0415 23:52:50.148851 716 log.go:172] (0xc00056ea00) (5) Data frame handling\nI0415 23:52:50.148866 716 
log.go:172] (0xc0009ab970) Data frame received for 3\nI0415 23:52:50.148873 716 log.go:172] (0xc00067b5e0) (3) Data frame handling\nI0415 23:52:50.150172 716 log.go:172] (0xc0009ab970) Data frame received for 1\nI0415 23:52:50.150184 716 log.go:172] (0xc0009ee8c0) (1) Data frame handling\nI0415 23:52:50.150195 716 log.go:172] (0xc0009ee8c0) (1) Data frame sent\nI0415 23:52:50.150203 716 log.go:172] (0xc0009ab970) (0xc0009ee8c0) Stream removed, broadcasting: 1\nI0415 23:52:50.150210 716 log.go:172] (0xc0009ab970) Go away received\nI0415 23:52:50.150439 716 log.go:172] (0xc0009ab970) (0xc0009ee8c0) Stream removed, broadcasting: 1\nI0415 23:52:50.150450 716 log.go:172] (0xc0009ab970) (0xc00067b5e0) Stream removed, broadcasting: 3\nI0415 23:52:50.150456 716 log.go:172] (0xc0009ab970) (0xc00056ea00) Stream removed, broadcasting: 5\n" Apr 15 23:52:50.153: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-8854.svc.cluster.local\tcanonical name = externalsvc.services-8854.svc.cluster.local.\nName:\texternalsvc.services-8854.svc.cluster.local\nAddress: 10.96.14.156\n\n" STEP: deleting ReplicationController externalsvc in namespace services-8854, will wait for the garbage collector to delete the pods Apr 15 23:52:50.211: INFO: Deleting ReplicationController externalsvc took: 5.800235ms Apr 15 23:52:50.512: INFO: Terminating ReplicationController externalsvc pods took: 300.252796ms Apr 15 23:53:03.035: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:53:03.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8854" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:23.476 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":62,"skipped":981,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:53:03.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test env composition Apr 15 23:53:03.157: INFO: Waiting up to 5m0s for pod "var-expansion-22f0d3e5-5811-454b-bc5f-c96bedd8a6ad" in namespace "var-expansion-5286" to be "Succeeded or Failed" Apr 15 23:53:03.160: INFO: Pod "var-expansion-22f0d3e5-5811-454b-bc5f-c96bedd8a6ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.508566ms Apr 15 23:53:05.206: INFO: Pod "var-expansion-22f0d3e5-5811-454b-bc5f-c96bedd8a6ad": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.048870308s Apr 15 23:53:07.211: INFO: Pod "var-expansion-22f0d3e5-5811-454b-bc5f-c96bedd8a6ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053210984s STEP: Saw pod success Apr 15 23:53:07.211: INFO: Pod "var-expansion-22f0d3e5-5811-454b-bc5f-c96bedd8a6ad" satisfied condition "Succeeded or Failed" Apr 15 23:53:07.214: INFO: Trying to get logs from node latest-worker pod var-expansion-22f0d3e5-5811-454b-bc5f-c96bedd8a6ad container dapi-container: STEP: delete the pod Apr 15 23:53:07.338: INFO: Waiting for pod var-expansion-22f0d3e5-5811-454b-bc5f-c96bedd8a6ad to disappear Apr 15 23:53:07.348: INFO: Pod var-expansion-22f0d3e5-5811-454b-bc5f-c96bedd8a6ad no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:53:07.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5286" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":63,"skipped":982,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:53:07.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Apr 15 23:53:07.464: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Apr 15 23:53:18.002: INFO: >>> kubeConfig: /root/.kube/config Apr 15 23:53:20.928: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:53:31.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-307" for this suite. • [SLOW TEST:24.080 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":64,"skipped":985,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:53:31.436: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-3240 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Apr 15 23:53:31.508: INFO: Found 0 stateful pods, waiting for 3 Apr 15 23:53:41.512: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 15 23:53:41.512: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 15 23:53:41.512: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 15 23:53:41.537: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Apr 15 23:53:51.587: INFO: Updating stateful set ss2 Apr 15 23:53:51.609: INFO: Waiting for Pod statefulset-3240/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 15 23:54:01.617: INFO: Waiting for Pod statefulset-3240/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Apr 15 23:54:11.940: INFO: Found 2 stateful pods, waiting for 3 Apr 15 23:54:21.945: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - 
Ready=true Apr 15 23:54:21.945: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 15 23:54:21.945: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Apr 15 23:54:21.968: INFO: Updating stateful set ss2 Apr 15 23:54:21.979: INFO: Waiting for Pod statefulset-3240/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 15 23:54:31.987: INFO: Waiting for Pod statefulset-3240/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 15 23:54:42.005: INFO: Updating stateful set ss2 Apr 15 23:54:42.020: INFO: Waiting for StatefulSet statefulset-3240/ss2 to complete update Apr 15 23:54:42.020: INFO: Waiting for Pod statefulset-3240/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 15 23:54:52.027: INFO: Waiting for StatefulSet statefulset-3240/ss2 to complete update Apr 15 23:54:52.027: INFO: Waiting for Pod statefulset-3240/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 15 23:55:02.050: INFO: Deleting all statefulset in ns statefulset-3240 Apr 15 23:55:02.053: INFO: Scaling statefulset ss2 to 0 Apr 15 23:55:32.076: INFO: Waiting for statefulset status.replicas updated to 0 Apr 15 23:55:32.082: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:55:32.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3240" for this suite. 
• [SLOW TEST:120.659 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":65,"skipped":996,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:55:32.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 15 23:55:32.192: INFO: Waiting up to 5m0s for pod "downwardapi-volume-71d47d10-cfad-4296-944c-1b969da9b752" in namespace "downward-api-8869" to be "Succeeded or Failed" Apr 15 23:55:32.210: INFO: Pod 
"downwardapi-volume-71d47d10-cfad-4296-944c-1b969da9b752": Phase="Pending", Reason="", readiness=false. Elapsed: 18.17292ms Apr 15 23:55:34.214: INFO: Pod "downwardapi-volume-71d47d10-cfad-4296-944c-1b969da9b752": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02212399s Apr 15 23:55:36.218: INFO: Pod "downwardapi-volume-71d47d10-cfad-4296-944c-1b969da9b752": Phase="Running", Reason="", readiness=true. Elapsed: 4.026420119s Apr 15 23:55:38.244: INFO: Pod "downwardapi-volume-71d47d10-cfad-4296-944c-1b969da9b752": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.052272699s STEP: Saw pod success Apr 15 23:55:38.244: INFO: Pod "downwardapi-volume-71d47d10-cfad-4296-944c-1b969da9b752" satisfied condition "Succeeded or Failed" Apr 15 23:55:38.247: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-71d47d10-cfad-4296-944c-1b969da9b752 container client-container: STEP: delete the pod Apr 15 23:55:38.285: INFO: Waiting for pod downwardapi-volume-71d47d10-cfad-4296-944c-1b969da9b752 to disappear Apr 15 23:55:38.288: INFO: Pod downwardapi-volume-71d47d10-cfad-4296-944c-1b969da9b752 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:55:38.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8869" for this suite. 
• [SLOW TEST:6.198 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":66,"skipped":998,"failed":0} SSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:55:38.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-4981/configmap-test-5537cb01-7847-49c8-bab3-9300fab38780 STEP: Creating a pod to test consume configMaps Apr 15 23:55:38.459: INFO: Waiting up to 5m0s for pod "pod-configmaps-d397d8b0-038a-4dc5-851f-e3a71b2de7c1" in namespace "configmap-4981" to be "Succeeded or Failed" Apr 15 23:55:38.469: INFO: Pod "pod-configmaps-d397d8b0-038a-4dc5-851f-e3a71b2de7c1": Phase="Pending", Reason="", readiness=false. Elapsed: 9.407626ms Apr 15 23:55:40.473: INFO: Pod "pod-configmaps-d397d8b0-038a-4dc5-851f-e3a71b2de7c1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.013522976s Apr 15 23:55:42.477: INFO: Pod "pod-configmaps-d397d8b0-038a-4dc5-851f-e3a71b2de7c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017291788s STEP: Saw pod success Apr 15 23:55:42.477: INFO: Pod "pod-configmaps-d397d8b0-038a-4dc5-851f-e3a71b2de7c1" satisfied condition "Succeeded or Failed" Apr 15 23:55:42.480: INFO: Trying to get logs from node latest-worker pod pod-configmaps-d397d8b0-038a-4dc5-851f-e3a71b2de7c1 container env-test: STEP: delete the pod Apr 15 23:55:42.511: INFO: Waiting for pod pod-configmaps-d397d8b0-038a-4dc5-851f-e3a71b2de7c1 to disappear Apr 15 23:55:42.523: INFO: Pod pod-configmaps-d397d8b0-038a-4dc5-851f-e3a71b2de7c1 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:55:42.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4981" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":67,"skipped":1003,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:55:42.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 15 23:55:42.571: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 15 23:55:43.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7061" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":275,"completed":68,"skipped":1005,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease lease API should be available [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Lease
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 15 23:55:43.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Lease
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 15 23:55:43.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-6661" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":69,"skipped":1034,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 15 23:55:43.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-e29f3b13-7ddb-4a4b-a204-8cb807b88dee
STEP: Creating secret with name s-test-opt-upd-57d4b84d-b4f7-490c-b7e6-d4d4b8d9e808
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-e29f3b13-7ddb-4a4b-a204-8cb807b88dee
STEP: Updating secret s-test-opt-upd-57d4b84d-b4f7-490c-b7e6-d4d4b8d9e808
STEP: Creating secret with name s-test-opt-create-f9d3ee58-87f8-44b5-99ff-80c6836f0e04
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 15 23:55:54.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6061" for this suite.
• [SLOW TEST:10.250 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":70,"skipped":1061,"failed":0}
SS
------------------------------
[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 15 23:55:54.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6692 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6692;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6692 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6692;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6692.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6692.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6692.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6692.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6692.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6692.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6692.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6692.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6692.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6692.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6692.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6692.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6692.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 30.62.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.62.30_udp@PTR;check="$$(dig +tcp +noall +answer +search 30.62.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.62.30_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6692 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6692;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6692 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6692;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6692.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6692.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6692.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6692.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6692.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6692.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6692.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6692.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6692.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6692.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6692.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6692.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6692.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 30.62.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.62.30_udp@PTR;check="$$(dig +tcp +noall +answer +search 30.62.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.62.30_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 15 23:56:00.219: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:00.222: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:00.224: INFO: Unable to read wheezy_udp@dns-test-service.dns-6692 from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:00.227: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6692 from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:00.229: INFO: Unable to read wheezy_udp@dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
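The probe commands above derive two DNS names from IP addresses: the pod A record (the `awk` fragment replaces the dots in the pod IP with dashes and appends the namespace suffix), and the reverse-lookup name for the service IP (the octets of 10.96.62.30 reversed into `30.62.96.10.in-addr.arpa.`). As an illustration only (this is not part of the e2e framework, and the example IP 10.244.1.3 is hypothetical), the same construction in Python:

```python
# Illustrative re-implementation of the name construction used by the probe
# commands above. "dns-6692" is the test namespace from this log.

def pod_a_record(pod_ip: str, namespace: str = "dns-6692") -> str:
    """Dashed pod A record: dots in the pod IP become dashes."""
    return pod_ip.replace(".", "-") + f".{namespace}.pod.cluster.local"

def ptr_name(ip: str) -> str:
    """Reverse-lookup (PTR) name: octets reversed, in-addr.arpa. appended."""
    return ".".join(reversed(ip.split("."))) + ".in-addr.arpa."

print(pod_a_record("10.244.1.3"))  # 10-244-1-3.dns-6692.pod.cluster.local
print(ptr_name("10.96.62.30"))     # 30.62.96.10.in-addr.arpa.
```

This is why the PTR probe in the commands above queries `30.62.96.10.in-addr.arpa.` but writes its result under `/results/10.96.62.30_udp@PTR`: the file is named after the original service IP, the query after its reversed form.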
Apr 15 23:56:00.232: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:00.234: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:00.237: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:00.284: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:00.287: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:00.291: INFO: Unable to read jessie_udp@dns-test-service.dns-6692 from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:00.294: INFO: Unable to read jessie_tcp@dns-test-service.dns-6692 from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:00.297: INFO: Unable to read jessie_udp@dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:00.299: INFO: Unable to read jessie_tcp@dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:00.302: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:00.304: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:00.319: INFO: Lookups using dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6692 wheezy_tcp@dns-test-service.dns-6692 wheezy_udp@dns-test-service.dns-6692.svc wheezy_tcp@dns-test-service.dns-6692.svc wheezy_udp@_http._tcp.dns-test-service.dns-6692.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6692.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6692 jessie_tcp@dns-test-service.dns-6692 jessie_udp@dns-test-service.dns-6692.svc jessie_tcp@dns-test-service.dns-6692.svc jessie_udp@_http._tcp.dns-test-service.dns-6692.svc jessie_tcp@_http._tcp.dns-test-service.dns-6692.svc]
Apr 15 23:56:05.323: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:05.330: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:05.341: INFO: Unable to read wheezy_udp@dns-test-service.dns-6692 from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:05.371: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6692 from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:05.374: INFO: Unable to read wheezy_udp@dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:05.377: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:05.379: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:05.381: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:05.400: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:05.401: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:05.403: INFO: Unable to read jessie_udp@dns-test-service.dns-6692 from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:05.405: INFO: Unable to read jessie_tcp@dns-test-service.dns-6692 from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:05.407: INFO: Unable to read jessie_udp@dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:05.408: INFO: Unable to read jessie_tcp@dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:05.410: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:05.412: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:05.425: INFO: Lookups using dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6692 wheezy_tcp@dns-test-service.dns-6692 wheezy_udp@dns-test-service.dns-6692.svc wheezy_tcp@dns-test-service.dns-6692.svc wheezy_udp@_http._tcp.dns-test-service.dns-6692.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6692.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6692 jessie_tcp@dns-test-service.dns-6692 jessie_udp@dns-test-service.dns-6692.svc jessie_tcp@dns-test-service.dns-6692.svc jessie_udp@_http._tcp.dns-test-service.dns-6692.svc jessie_tcp@_http._tcp.dns-test-service.dns-6692.svc]
Apr 15 23:56:10.324: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:10.327: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:10.330: INFO: Unable to read wheezy_udp@dns-test-service.dns-6692 from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:10.333: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6692 from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:10.336: INFO: Unable to read wheezy_udp@dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:10.339: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:10.342: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:10.345: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:10.364: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:10.367: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:10.369: INFO: Unable to read jessie_udp@dns-test-service.dns-6692 from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:10.371: INFO: Unable to read jessie_tcp@dns-test-service.dns-6692 from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:10.373: INFO: Unable to read jessie_udp@dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:10.375: INFO: Unable to read jessie_tcp@dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:10.378: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:10.380: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:10.397: INFO: Lookups using dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6692 wheezy_tcp@dns-test-service.dns-6692 wheezy_udp@dns-test-service.dns-6692.svc wheezy_tcp@dns-test-service.dns-6692.svc wheezy_udp@_http._tcp.dns-test-service.dns-6692.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6692.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6692 jessie_tcp@dns-test-service.dns-6692 jessie_udp@dns-test-service.dns-6692.svc jessie_tcp@dns-test-service.dns-6692.svc jessie_udp@_http._tcp.dns-test-service.dns-6692.svc jessie_tcp@_http._tcp.dns-test-service.dns-6692.svc]
Apr 15 23:56:15.324: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:15.328: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:15.332: INFO: Unable to read wheezy_udp@dns-test-service.dns-6692 from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:15.336: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6692 from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:15.339: INFO: Unable to read wheezy_udp@dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:15.343: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:15.346: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:15.350: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:15.368: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:15.370: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:15.373: INFO: Unable to read jessie_udp@dns-test-service.dns-6692 from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:15.376: INFO: Unable to read jessie_tcp@dns-test-service.dns-6692 from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:15.379: INFO: Unable to read jessie_udp@dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:15.382: INFO: Unable to read jessie_tcp@dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:15.385: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:15.388: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:15.406: INFO: Lookups using dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6692 wheezy_tcp@dns-test-service.dns-6692 wheezy_udp@dns-test-service.dns-6692.svc wheezy_tcp@dns-test-service.dns-6692.svc wheezy_udp@_http._tcp.dns-test-service.dns-6692.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6692.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6692 jessie_tcp@dns-test-service.dns-6692 jessie_udp@dns-test-service.dns-6692.svc jessie_tcp@dns-test-service.dns-6692.svc jessie_udp@_http._tcp.dns-test-service.dns-6692.svc jessie_tcp@_http._tcp.dns-test-service.dns-6692.svc]
Apr 15 23:56:20.324: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:20.327: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:20.332: INFO: Unable to read wheezy_udp@dns-test-service.dns-6692 from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:20.335: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6692 from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:20.339: INFO: Unable to read wheezy_udp@dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:20.341: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:20.343: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:20.346: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:20.367: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:20.370: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:20.372: INFO: Unable to read jessie_udp@dns-test-service.dns-6692 from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:20.375: INFO: Unable to read jessie_tcp@dns-test-service.dns-6692 from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:20.378: INFO: Unable to read jessie_udp@dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:20.381: INFO: Unable to read jessie_tcp@dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:20.385: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:20.388: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:20.407: INFO: Lookups using dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6692 wheezy_tcp@dns-test-service.dns-6692 wheezy_udp@dns-test-service.dns-6692.svc wheezy_tcp@dns-test-service.dns-6692.svc wheezy_udp@_http._tcp.dns-test-service.dns-6692.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6692.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6692 jessie_tcp@dns-test-service.dns-6692 jessie_udp@dns-test-service.dns-6692.svc jessie_tcp@dns-test-service.dns-6692.svc jessie_udp@_http._tcp.dns-test-service.dns-6692.svc jessie_tcp@_http._tcp.dns-test-service.dns-6692.svc]
Apr 15 23:56:25.323: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:25.327: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:25.331: INFO: Unable to read wheezy_udp@dns-test-service.dns-6692 from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:25.334: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6692 from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d)
Apr 15 23:56:25.337: INFO: Unable to read wheezy_udp@dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find
the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d) Apr 15 23:56:25.340: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d) Apr 15 23:56:25.343: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d) Apr 15 23:56:25.346: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d) Apr 15 23:56:25.365: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d) Apr 15 23:56:25.368: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d) Apr 15 23:56:25.371: INFO: Unable to read jessie_udp@dns-test-service.dns-6692 from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d) Apr 15 23:56:25.373: INFO: Unable to read jessie_tcp@dns-test-service.dns-6692 from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d) Apr 15 23:56:25.376: INFO: Unable to read jessie_udp@dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the 
server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d) Apr 15 23:56:25.379: INFO: Unable to read jessie_tcp@dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d) Apr 15 23:56:25.383: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d) Apr 15 23:56:25.386: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6692.svc from pod dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d: the server could not find the requested resource (get pods dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d) Apr 15 23:56:25.403: INFO: Lookups using dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6692 wheezy_tcp@dns-test-service.dns-6692 wheezy_udp@dns-test-service.dns-6692.svc wheezy_tcp@dns-test-service.dns-6692.svc wheezy_udp@_http._tcp.dns-test-service.dns-6692.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6692.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6692 jessie_tcp@dns-test-service.dns-6692 jessie_udp@dns-test-service.dns-6692.svc jessie_tcp@dns-test-service.dns-6692.svc jessie_udp@_http._tcp.dns-test-service.dns-6692.svc jessie_tcp@_http._tcp.dns-test-service.dns-6692.svc] Apr 15 23:56:30.405: INFO: DNS probes using dns-6692/dns-test-b3108443-d680-49bf-b4ed-0bda7a01a30d succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:56:30.662: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6692" for this suite. • [SLOW TEST:36.658 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":71,"skipped":1063,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:56:30.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 15 23:56:31.429: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 15 23:56:33.440: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, 
Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722591791, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722591791, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722591791, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722591791, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 15 23:56:36.470: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 15 23:56:36.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 15 23:56:37.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-2009" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.179 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":72,"skipped":1101,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 15 23:56:37.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-7275 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating stateful set ss in namespace statefulset-7275 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7275 Apr 15 23:56:37.954: INFO: Found 0 stateful pods, waiting for 1 Apr 15 23:56:47.958: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Apr 15 23:56:47.962: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 15 23:56:48.208: INFO: stderr: "I0415 23:56:48.091570 736 log.go:172] (0xc000028a50) (0xc00068f360) Create stream\nI0415 23:56:48.091627 736 log.go:172] (0xc000028a50) (0xc00068f360) Stream added, broadcasting: 1\nI0415 23:56:48.094535 736 log.go:172] (0xc000028a50) Reply frame received for 1\nI0415 23:56:48.094573 736 log.go:172] (0xc000028a50) (0xc0005c94a0) Create stream\nI0415 23:56:48.094585 736 log.go:172] (0xc000028a50) (0xc0005c94a0) Stream added, broadcasting: 3\nI0415 23:56:48.095697 736 log.go:172] (0xc000028a50) Reply frame received for 3\nI0415 23:56:48.095737 736 log.go:172] (0xc000028a50) (0xc00068f400) Create stream\nI0415 23:56:48.095750 736 log.go:172] (0xc000028a50) (0xc00068f400) Stream added, broadcasting: 5\nI0415 23:56:48.097074 736 log.go:172] (0xc000028a50) Reply frame received for 5\nI0415 23:56:48.168876 736 log.go:172] (0xc000028a50) Data frame received for 5\nI0415 23:56:48.168914 736 log.go:172] (0xc00068f400) (5) Data frame handling\nI0415 23:56:48.168938 736 log.go:172] (0xc00068f400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0415 23:56:48.200413 736 log.go:172] (0xc000028a50) Data frame received for 3\nI0415 23:56:48.200441 736 log.go:172] (0xc0005c94a0) 
(3) Data frame handling\nI0415 23:56:48.200454 736 log.go:172] (0xc0005c94a0) (3) Data frame sent\nI0415 23:56:48.200463 736 log.go:172] (0xc000028a50) Data frame received for 3\nI0415 23:56:48.200475 736 log.go:172] (0xc0005c94a0) (3) Data frame handling\nI0415 23:56:48.200803 736 log.go:172] (0xc000028a50) Data frame received for 5\nI0415 23:56:48.200820 736 log.go:172] (0xc00068f400) (5) Data frame handling\nI0415 23:56:48.202552 736 log.go:172] (0xc000028a50) Data frame received for 1\nI0415 23:56:48.202581 736 log.go:172] (0xc00068f360) (1) Data frame handling\nI0415 23:56:48.202601 736 log.go:172] (0xc00068f360) (1) Data frame sent\nI0415 23:56:48.202614 736 log.go:172] (0xc000028a50) (0xc00068f360) Stream removed, broadcasting: 1\nI0415 23:56:48.202718 736 log.go:172] (0xc000028a50) Go away received\nI0415 23:56:48.203033 736 log.go:172] (0xc000028a50) (0xc00068f360) Stream removed, broadcasting: 1\nI0415 23:56:48.203055 736 log.go:172] (0xc000028a50) (0xc0005c94a0) Stream removed, broadcasting: 3\nI0415 23:56:48.203066 736 log.go:172] (0xc000028a50) (0xc00068f400) Stream removed, broadcasting: 5\n" Apr 15 23:56:48.208: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 15 23:56:48.208: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 15 23:56:48.211: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 15 23:56:58.216: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 15 23:56:58.216: INFO: Waiting for statefulset status.replicas updated to 0 Apr 15 23:56:58.252: INFO: POD NODE PHASE GRACE CONDITIONS Apr 15 23:56:58.252: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:48 +0000 UTC ContainersNotReady containers with unready 
status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:37 +0000 UTC }] Apr 15 23:56:58.252: INFO: Apr 15 23:56:58.252: INFO: StatefulSet ss has not reached scale 3, at 1 Apr 15 23:56:59.258: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.973973356s Apr 15 23:57:00.270: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.968358576s Apr 15 23:57:01.759: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.955783746s Apr 15 23:57:02.763: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.466750776s Apr 15 23:57:03.768: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.463157614s Apr 15 23:57:04.773: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.457543287s Apr 15 23:57:05.778: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.452578065s Apr 15 23:57:06.783: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.447848852s Apr 15 23:57:07.788: INFO: Verifying statefulset ss doesn't scale past 3 for another 442.533565ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7275 Apr 15 23:57:08.793: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 15 23:57:09.033: INFO: stderr: "I0415 23:57:08.920831 757 log.go:172] (0xc00003a790) (0xc0006d97c0) Create stream\nI0415 23:57:08.920911 757 log.go:172] (0xc00003a790) (0xc0006d97c0) Stream added, broadcasting: 1\nI0415 23:57:08.924279 757 log.go:172] (0xc00003a790) Reply frame received for 1\nI0415 23:57:08.924322 757 log.go:172] (0xc00003a790) (0xc00058d7c0) Create stream\nI0415 23:57:08.924337 
757 log.go:172] (0xc00003a790) (0xc00058d7c0) Stream added, broadcasting: 3\nI0415 23:57:08.925563 757 log.go:172] (0xc00003a790) Reply frame received for 3\nI0415 23:57:08.925600 757 log.go:172] (0xc00003a790) (0xc0006d9860) Create stream\nI0415 23:57:08.925615 757 log.go:172] (0xc00003a790) (0xc0006d9860) Stream added, broadcasting: 5\nI0415 23:57:08.926501 757 log.go:172] (0xc00003a790) Reply frame received for 5\nI0415 23:57:09.026121 757 log.go:172] (0xc00003a790) Data frame received for 5\nI0415 23:57:09.026166 757 log.go:172] (0xc0006d9860) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0415 23:57:09.026201 757 log.go:172] (0xc00003a790) Data frame received for 3\nI0415 23:57:09.026245 757 log.go:172] (0xc00058d7c0) (3) Data frame handling\nI0415 23:57:09.026258 757 log.go:172] (0xc00058d7c0) (3) Data frame sent\nI0415 23:57:09.026276 757 log.go:172] (0xc00003a790) Data frame received for 3\nI0415 23:57:09.026286 757 log.go:172] (0xc00058d7c0) (3) Data frame handling\nI0415 23:57:09.026300 757 log.go:172] (0xc0006d9860) (5) Data frame sent\nI0415 23:57:09.026316 757 log.go:172] (0xc00003a790) Data frame received for 5\nI0415 23:57:09.026328 757 log.go:172] (0xc0006d9860) (5) Data frame handling\nI0415 23:57:09.027990 757 log.go:172] (0xc00003a790) Data frame received for 1\nI0415 23:57:09.028018 757 log.go:172] (0xc0006d97c0) (1) Data frame handling\nI0415 23:57:09.028039 757 log.go:172] (0xc0006d97c0) (1) Data frame sent\nI0415 23:57:09.028068 757 log.go:172] (0xc00003a790) (0xc0006d97c0) Stream removed, broadcasting: 1\nI0415 23:57:09.028095 757 log.go:172] (0xc00003a790) Go away received\nI0415 23:57:09.028579 757 log.go:172] (0xc00003a790) (0xc0006d97c0) Stream removed, broadcasting: 1\nI0415 23:57:09.028608 757 log.go:172] (0xc00003a790) (0xc00058d7c0) Stream removed, broadcasting: 3\nI0415 23:57:09.028622 757 log.go:172] (0xc00003a790) (0xc0006d9860) Stream removed, broadcasting: 5\n" Apr 15 23:57:09.033: INFO: stdout: 
"'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 15 23:57:09.033: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 15 23:57:09.033: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 15 23:57:09.222: INFO: stderr: "I0415 23:57:09.158370 778 log.go:172] (0xc0009d88f0) (0xc0007ca320) Create stream\nI0415 23:57:09.158423 778 log.go:172] (0xc0009d88f0) (0xc0007ca320) Stream added, broadcasting: 1\nI0415 23:57:09.160562 778 log.go:172] (0xc0009d88f0) Reply frame received for 1\nI0415 23:57:09.160617 778 log.go:172] (0xc0009d88f0) (0xc0003b3220) Create stream\nI0415 23:57:09.160638 778 log.go:172] (0xc0009d88f0) (0xc0003b3220) Stream added, broadcasting: 3\nI0415 23:57:09.161584 778 log.go:172] (0xc0009d88f0) Reply frame received for 3\nI0415 23:57:09.161616 778 log.go:172] (0xc0009d88f0) (0xc000412000) Create stream\nI0415 23:57:09.161624 778 log.go:172] (0xc0009d88f0) (0xc000412000) Stream added, broadcasting: 5\nI0415 23:57:09.162522 778 log.go:172] (0xc0009d88f0) Reply frame received for 5\nI0415 23:57:09.214164 778 log.go:172] (0xc0009d88f0) Data frame received for 5\nI0415 23:57:09.214212 778 log.go:172] (0xc000412000) (5) Data frame handling\nI0415 23:57:09.214227 778 log.go:172] (0xc000412000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0415 23:57:09.214238 778 log.go:172] (0xc0009d88f0) Data frame received for 5\nI0415 23:57:09.214254 778 log.go:172] (0xc000412000) (5) Data frame handling\nI0415 23:57:09.214294 778 log.go:172] (0xc0009d88f0) Data frame received for 3\nI0415 23:57:09.214330 778 log.go:172] (0xc0003b3220) (3) Data frame handling\nI0415 23:57:09.214347 778 
log.go:172] (0xc0003b3220) (3) Data frame sent\nI0415 23:57:09.214361 778 log.go:172] (0xc0009d88f0) Data frame received for 3\nI0415 23:57:09.214368 778 log.go:172] (0xc0003b3220) (3) Data frame handling\nI0415 23:57:09.218601 778 log.go:172] (0xc0009d88f0) Data frame received for 1\nI0415 23:57:09.218620 778 log.go:172] (0xc0007ca320) (1) Data frame handling\nI0415 23:57:09.218633 778 log.go:172] (0xc0007ca320) (1) Data frame sent\nI0415 23:57:09.218645 778 log.go:172] (0xc0009d88f0) (0xc0007ca320) Stream removed, broadcasting: 1\nI0415 23:57:09.218711 778 log.go:172] (0xc0009d88f0) Go away received\nI0415 23:57:09.218925 778 log.go:172] (0xc0009d88f0) (0xc0007ca320) Stream removed, broadcasting: 1\nI0415 23:57:09.218950 778 log.go:172] (0xc0009d88f0) (0xc0003b3220) Stream removed, broadcasting: 3\nI0415 23:57:09.218970 778 log.go:172] (0xc0009d88f0) (0xc000412000) Stream removed, broadcasting: 5\n" Apr 15 23:57:09.222: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 15 23:57:09.222: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 15 23:57:09.222: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 15 23:57:09.425: INFO: stderr: "I0415 23:57:09.346455 801 log.go:172] (0xc00003adc0) (0xc00065c280) Create stream\nI0415 23:57:09.346527 801 log.go:172] (0xc00003adc0) (0xc00065c280) Stream added, broadcasting: 1\nI0415 23:57:09.349582 801 log.go:172] (0xc00003adc0) Reply frame received for 1\nI0415 23:57:09.349641 801 log.go:172] (0xc00003adc0) (0xc00065c320) Create stream\nI0415 23:57:09.349659 801 log.go:172] (0xc00003adc0) (0xc00065c320) Stream added, broadcasting: 3\nI0415 23:57:09.350817 801 log.go:172] (0xc00003adc0) Reply frame received for 3\nI0415 
23:57:09.350857 801 log.go:172] (0xc00003adc0) (0xc000706000) Create stream\nI0415 23:57:09.350868 801 log.go:172] (0xc00003adc0) (0xc000706000) Stream added, broadcasting: 5\nI0415 23:57:09.351776 801 log.go:172] (0xc00003adc0) Reply frame received for 5\nI0415 23:57:09.417912 801 log.go:172] (0xc00003adc0) Data frame received for 5\nI0415 23:57:09.417955 801 log.go:172] (0xc000706000) (5) Data frame handling\nI0415 23:57:09.417975 801 log.go:172] (0xc000706000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0415 23:57:09.418001 801 log.go:172] (0xc00003adc0) Data frame received for 3\nI0415 23:57:09.418037 801 log.go:172] (0xc00065c320) (3) Data frame handling\nI0415 23:57:09.418056 801 log.go:172] (0xc00065c320) (3) Data frame sent\nI0415 23:57:09.418070 801 log.go:172] (0xc00003adc0) Data frame received for 3\nI0415 23:57:09.418085 801 log.go:172] (0xc00065c320) (3) Data frame handling\nI0415 23:57:09.418112 801 log.go:172] (0xc00003adc0) Data frame received for 5\nI0415 23:57:09.418137 801 log.go:172] (0xc000706000) (5) Data frame handling\nI0415 23:57:09.419849 801 log.go:172] (0xc00003adc0) Data frame received for 1\nI0415 23:57:09.419876 801 log.go:172] (0xc00065c280) (1) Data frame handling\nI0415 23:57:09.419894 801 log.go:172] (0xc00065c280) (1) Data frame sent\nI0415 23:57:09.419915 801 log.go:172] (0xc00003adc0) (0xc00065c280) Stream removed, broadcasting: 1\nI0415 23:57:09.419945 801 log.go:172] (0xc00003adc0) Go away received\nI0415 23:57:09.420342 801 log.go:172] (0xc00003adc0) (0xc00065c280) Stream removed, broadcasting: 1\nI0415 23:57:09.420367 801 log.go:172] (0xc00003adc0) (0xc00065c320) Stream removed, broadcasting: 3\nI0415 23:57:09.420379 801 log.go:172] (0xc00003adc0) (0xc000706000) Stream removed, broadcasting: 5\n" Apr 15 23:57:09.425: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 15 23:57:09.425: INFO: 
stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 15 23:57:09.429: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Apr 15 23:57:19.434: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 15 23:57:19.434: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 15 23:57:19.434: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 15 23:57:19.437: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 15 23:57:19.649: INFO: stderr: "I0415 23:57:19.559220 823 log.go:172] (0xc0009f8420) (0xc000aa2280) Create stream\nI0415 23:57:19.559279 823 log.go:172] (0xc0009f8420) (0xc000aa2280) Stream added, broadcasting: 1\nI0415 23:57:19.564393 823 log.go:172] (0xc0009f8420) Reply frame received for 1\nI0415 23:57:19.564448 823 log.go:172] (0xc0009f8420) (0xc00058f5e0) Create stream\nI0415 23:57:19.564462 823 log.go:172] (0xc0009f8420) (0xc00058f5e0) Stream added, broadcasting: 3\nI0415 23:57:19.565624 823 log.go:172] (0xc0009f8420) Reply frame received for 3\nI0415 23:57:19.565659 823 log.go:172] (0xc0009f8420) (0xc000698a00) Create stream\nI0415 23:57:19.565671 823 log.go:172] (0xc0009f8420) (0xc000698a00) Stream added, broadcasting: 5\nI0415 23:57:19.566555 823 log.go:172] (0xc0009f8420) Reply frame received for 5\nI0415 23:57:19.641370 823 log.go:172] (0xc0009f8420) Data frame received for 3\nI0415 23:57:19.641398 823 log.go:172] (0xc00058f5e0) (3) Data frame handling\nI0415 23:57:19.641409 823 log.go:172] (0xc00058f5e0) (3) Data frame sent\nI0415 23:57:19.641421 823 log.go:172] (0xc0009f8420) Data frame 
received for 3\nI0415 23:57:19.641433 823 log.go:172] (0xc00058f5e0) (3) Data frame handling\nI0415 23:57:19.641689 823 log.go:172] (0xc0009f8420) Data frame received for 5\nI0415 23:57:19.641701 823 log.go:172] (0xc000698a00) (5) Data frame handling\nI0415 23:57:19.641709 823 log.go:172] (0xc000698a00) (5) Data frame sent\nI0415 23:57:19.641716 823 log.go:172] (0xc0009f8420) Data frame received for 5\nI0415 23:57:19.641723 823 log.go:172] (0xc000698a00) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0415 23:57:19.643630 823 log.go:172] (0xc0009f8420) Data frame received for 1\nI0415 23:57:19.643677 823 log.go:172] (0xc000aa2280) (1) Data frame handling\nI0415 23:57:19.643716 823 log.go:172] (0xc000aa2280) (1) Data frame sent\nI0415 23:57:19.643766 823 log.go:172] (0xc0009f8420) (0xc000aa2280) Stream removed, broadcasting: 1\nI0415 23:57:19.643813 823 log.go:172] (0xc0009f8420) Go away received\nI0415 23:57:19.644401 823 log.go:172] (0xc0009f8420) (0xc000aa2280) Stream removed, broadcasting: 1\nI0415 23:57:19.644427 823 log.go:172] (0xc0009f8420) (0xc00058f5e0) Stream removed, broadcasting: 3\nI0415 23:57:19.644448 823 log.go:172] (0xc0009f8420) (0xc000698a00) Stream removed, broadcasting: 5\n" Apr 15 23:57:19.649: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 15 23:57:19.649: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 15 23:57:19.649: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 15 23:57:19.891: INFO: stderr: "I0415 23:57:19.776036 846 log.go:172] (0xc00094c000) (0xc0004ecbe0) Create stream\nI0415 23:57:19.776105 846 log.go:172] (0xc00094c000) (0xc0004ecbe0) Stream added, broadcasting: 1\nI0415 23:57:19.786613 846 
log.go:172] (0xc00094c000) Reply frame received for 1\nI0415 23:57:19.786664 846 log.go:172] (0xc00094c000) (0xc000520000) Create stream\nI0415 23:57:19.786679 846 log.go:172] (0xc00094c000) (0xc000520000) Stream added, broadcasting: 3\nI0415 23:57:19.787578 846 log.go:172] (0xc00094c000) Reply frame received for 3\nI0415 23:57:19.787632 846 log.go:172] (0xc00094c000) (0xc000520140) Create stream\nI0415 23:57:19.787645 846 log.go:172] (0xc00094c000) (0xc000520140) Stream added, broadcasting: 5\nI0415 23:57:19.788611 846 log.go:172] (0xc00094c000) Reply frame received for 5\nI0415 23:57:19.844882 846 log.go:172] (0xc00094c000) Data frame received for 5\nI0415 23:57:19.844915 846 log.go:172] (0xc000520140) (5) Data frame handling\nI0415 23:57:19.844939 846 log.go:172] (0xc000520140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0415 23:57:19.883649 846 log.go:172] (0xc00094c000) Data frame received for 5\nI0415 23:57:19.883693 846 log.go:172] (0xc000520140) (5) Data frame handling\nI0415 23:57:19.883713 846 log.go:172] (0xc00094c000) Data frame received for 3\nI0415 23:57:19.883721 846 log.go:172] (0xc000520000) (3) Data frame handling\nI0415 23:57:19.883729 846 log.go:172] (0xc000520000) (3) Data frame sent\nI0415 23:57:19.883740 846 log.go:172] (0xc00094c000) Data frame received for 3\nI0415 23:57:19.883755 846 log.go:172] (0xc000520000) (3) Data frame handling\nI0415 23:57:19.885769 846 log.go:172] (0xc00094c000) Data frame received for 1\nI0415 23:57:19.885802 846 log.go:172] (0xc0004ecbe0) (1) Data frame handling\nI0415 23:57:19.885818 846 log.go:172] (0xc0004ecbe0) (1) Data frame sent\nI0415 23:57:19.885839 846 log.go:172] (0xc00094c000) (0xc0004ecbe0) Stream removed, broadcasting: 1\nI0415 23:57:19.885859 846 log.go:172] (0xc00094c000) Go away received\nI0415 23:57:19.886416 846 log.go:172] (0xc00094c000) (0xc0004ecbe0) Stream removed, broadcasting: 1\nI0415 23:57:19.886438 846 log.go:172] (0xc00094c000) (0xc000520000) Stream 
removed, broadcasting: 3\nI0415 23:57:19.886450 846 log.go:172] (0xc00094c000) (0xc000520140) Stream removed, broadcasting: 5\n" Apr 15 23:57:19.891: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 15 23:57:19.891: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 15 23:57:19.891: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 15 23:57:20.155: INFO: stderr: "I0415 23:57:20.022932 867 log.go:172] (0xc000b24fd0) (0xc000a3c460) Create stream\nI0415 23:57:20.022989 867 log.go:172] (0xc000b24fd0) (0xc000a3c460) Stream added, broadcasting: 1\nI0415 23:57:20.027137 867 log.go:172] (0xc000b24fd0) Reply frame received for 1\nI0415 23:57:20.027168 867 log.go:172] (0xc000b24fd0) (0xc0006b77c0) Create stream\nI0415 23:57:20.027176 867 log.go:172] (0xc000b24fd0) (0xc0006b77c0) Stream added, broadcasting: 3\nI0415 23:57:20.028161 867 log.go:172] (0xc000b24fd0) Reply frame received for 3\nI0415 23:57:20.028194 867 log.go:172] (0xc000b24fd0) (0xc000526be0) Create stream\nI0415 23:57:20.028201 867 log.go:172] (0xc000b24fd0) (0xc000526be0) Stream added, broadcasting: 5\nI0415 23:57:20.029040 867 log.go:172] (0xc000b24fd0) Reply frame received for 5\nI0415 23:57:20.089783 867 log.go:172] (0xc000b24fd0) Data frame received for 5\nI0415 23:57:20.089819 867 log.go:172] (0xc000526be0) (5) Data frame handling\nI0415 23:57:20.089843 867 log.go:172] (0xc000526be0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0415 23:57:20.149811 867 log.go:172] (0xc000b24fd0) Data frame received for 3\nI0415 23:57:20.149838 867 log.go:172] (0xc0006b77c0) (3) Data frame handling\nI0415 23:57:20.149851 867 log.go:172] (0xc0006b77c0) (3) Data frame sent\nI0415 23:57:20.150070 867 
log.go:172] (0xc000b24fd0) Data frame received for 3\nI0415 23:57:20.150116 867 log.go:172] (0xc0006b77c0) (3) Data frame handling\nI0415 23:57:20.150142 867 log.go:172] (0xc000b24fd0) Data frame received for 5\nI0415 23:57:20.150152 867 log.go:172] (0xc000526be0) (5) Data frame handling\nI0415 23:57:20.151374 867 log.go:172] (0xc000b24fd0) Data frame received for 1\nI0415 23:57:20.151396 867 log.go:172] (0xc000a3c460) (1) Data frame handling\nI0415 23:57:20.151415 867 log.go:172] (0xc000a3c460) (1) Data frame sent\nI0415 23:57:20.151430 867 log.go:172] (0xc000b24fd0) (0xc000a3c460) Stream removed, broadcasting: 1\nI0415 23:57:20.151501 867 log.go:172] (0xc000b24fd0) Go away received\nI0415 23:57:20.151770 867 log.go:172] (0xc000b24fd0) (0xc000a3c460) Stream removed, broadcasting: 1\nI0415 23:57:20.151788 867 log.go:172] (0xc000b24fd0) (0xc0006b77c0) Stream removed, broadcasting: 3\nI0415 23:57:20.151801 867 log.go:172] (0xc000b24fd0) (0xc000526be0) Stream removed, broadcasting: 5\n" Apr 15 23:57:20.155: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 15 23:57:20.155: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 15 23:57:20.155: INFO: Waiting for statefulset status.replicas updated to 0 Apr 15 23:57:20.178: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Apr 15 23:57:30.187: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 15 23:57:30.187: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 15 23:57:30.187: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 15 23:57:30.202: INFO: POD NODE PHASE GRACE CONDITIONS Apr 15 23:57:30.202: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:38 +0000 UTC } {Ready False 0001-01-01 
00:00:00 +0000 UTC 2020-04-15 23:57:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:57:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:37 +0000 UTC }] Apr 15 23:57:30.202: INFO: ss-1 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:57:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:57:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:58 +0000 UTC }] Apr 15 23:57:30.202: INFO: ss-2 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:57:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:57:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:58 +0000 UTC }] Apr 15 23:57:30.202: INFO: Apr 15 23:57:30.202: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 15 23:57:31.208: INFO: POD NODE PHASE GRACE CONDITIONS Apr 15 23:57:31.208: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:57:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:57:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:37 
+0000 UTC }] Apr 15 23:57:31.208: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:57:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:57:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:58 +0000 UTC }] Apr 15 23:57:31.208: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:57:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:57:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:58 +0000 UTC }] Apr 15 23:57:31.208: INFO: Apr 15 23:57:31.208: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 15 23:57:32.223: INFO: POD NODE PHASE GRACE CONDITIONS Apr 15 23:57:32.223: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:57:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:57:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:37 +0000 UTC }] Apr 15 23:57:32.223: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:57:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 
00:00:00 +0000 UTC 2020-04-15 23:57:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:58 +0000 UTC }] Apr 15 23:57:32.223: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:57:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:57:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:58 +0000 UTC }] Apr 15 23:57:32.223: INFO: Apr 15 23:57:32.223: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 15 23:57:33.228: INFO: POD NODE PHASE GRACE CONDITIONS Apr 15 23:57:33.228: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:57:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:57:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:37 +0000 UTC }] Apr 15 23:57:33.228: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:57:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:57:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:58 +0000 UTC }] Apr 15 23:57:33.228: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:58 
+0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:57:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:57:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:58 +0000 UTC }] Apr 15 23:57:33.228: INFO: Apr 15 23:57:33.228: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 15 23:57:34.232: INFO: POD NODE PHASE GRACE CONDITIONS Apr 15 23:57:34.232: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:57:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:57:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:37 +0000 UTC }] Apr 15 23:57:34.232: INFO: Apr 15 23:57:34.232: INFO: StatefulSet ss has not reached scale 0, at 1 Apr 15 23:57:35.237: INFO: POD NODE PHASE GRACE CONDITIONS Apr 15 23:57:35.237: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:57:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:57:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:37 +0000 UTC }] Apr 15 23:57:35.237: INFO: Apr 15 23:57:35.237: INFO: StatefulSet ss has not reached scale 0, at 1 Apr 15 23:57:36.258: INFO: POD NODE PHASE GRACE CONDITIONS Apr 15 23:57:36.258: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:38 +0000 UTC 
} {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:57:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:57:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:37 +0000 UTC }] Apr 15 23:57:36.258: INFO: Apr 15 23:57:36.258: INFO: StatefulSet ss has not reached scale 0, at 1 Apr 15 23:57:37.263: INFO: POD NODE PHASE GRACE CONDITIONS Apr 15 23:57:37.263: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:57:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:57:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:37 +0000 UTC }] Apr 15 23:57:37.263: INFO: Apr 15 23:57:37.263: INFO: StatefulSet ss has not reached scale 0, at 1 Apr 15 23:57:38.268: INFO: POD NODE PHASE GRACE CONDITIONS Apr 15 23:57:38.268: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:57:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:57:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:37 +0000 UTC }] Apr 15 23:57:38.268: INFO: Apr 15 23:57:38.268: INFO: StatefulSet ss has not reached scale 0, at 1 Apr 15 23:57:39.272: INFO: POD NODE PHASE GRACE CONDITIONS Apr 15 23:57:39.272: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:38 +0000 UTC } {Ready 
False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:57:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:57:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-15 23:56:37 +0000 UTC }] Apr 15 23:57:39.272: INFO: Apr 15 23:57:39.272: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-7275 Apr 15 23:57:40.277: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 15 23:57:40.419: INFO: rc: 1 Apr 15 23:57:40.419: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Apr 15 23:57:50.420: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 15 23:57:50.519: INFO: rc: 1 Apr 15 23:57:50.519: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 15 23:58:00.519: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 15 23:58:00.617: INFO: rc: 1 Apr 15 23:58:00.617: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 15 23:58:10.617: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 15 23:58:10.716: INFO: rc: 1 Apr 15 23:58:10.716: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 15 23:58:20.716: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 15 23:58:20.824: INFO: rc: 1 Apr 15 23:58:20.824: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 15 23:58:30.824: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 15 23:58:30.911: INFO: rc: 1 Apr 15 23:58:30.911: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 15 23:58:40.911: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 15 23:58:41.010: INFO: rc: 1 Apr 15 23:58:41.010: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 15 23:58:51.010: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 15 23:58:51.108: INFO: rc: 1 Apr 15 23:58:51.108: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 15 23:59:01.108: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 15 23:59:01.206: INFO: rc: 1 Apr 15 23:59:01.206: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 15 23:59:11.206: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 15 23:59:11.304: INFO: rc: 1 Apr 15 23:59:11.304: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 15 23:59:21.304: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 15 23:59:21.403: INFO: rc: 1 Apr 15 23:59:21.403: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 15 23:59:31.404: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 15 23:59:33.834: INFO: rc: 1 Apr 15 23:59:33.834: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 15 23:59:43.835: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 15 23:59:43.946: INFO: rc: 1 Apr 15 23:59:43.946: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 15 23:59:53.946: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 15 23:59:54.037: INFO: rc: 1 Apr 15 23:59:54.038: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 16 00:00:04.038: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 16 00:00:04.156: INFO: rc: 1 Apr 16 00:00:04.156: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 16 00:00:14.156: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 16 00:00:14.248: INFO: rc: 1 Apr 16 00:00:14.248: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 16 00:00:24.248: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 16 00:00:24.348: INFO: rc: 1 Apr 16 00:00:24.348: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 16 00:00:34.349: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 16 00:00:34.435: INFO: rc: 1 Apr 16 00:00:34.435: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 16 00:00:44.435: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 16 00:00:44.551: INFO: rc: 1 Apr 16 00:00:44.551: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 16 00:00:54.551: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 16 00:00:54.643: INFO: rc: 1 Apr 16 00:00:54.643: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 16 00:01:04.644: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 16 00:01:04.743: INFO: rc: 1 Apr 16 00:01:04.743: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 16 00:01:14.743: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 16 00:01:14.831: INFO: rc: 1 Apr 16 00:01:14.831: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 16 00:01:24.831: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 16 00:01:24.921: INFO: rc: 1 Apr 16 00:01:24.921: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 16 00:01:34.921: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 16 00:01:35.029: INFO: rc: 1 Apr 16 00:01:35.030: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 16 00:01:45.030: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 16 00:01:45.128: INFO: rc: 1 Apr 16 00:01:45.128: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 16 00:01:55.129: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 16 00:01:55.228: INFO: rc: 1 Apr 16 00:01:55.228: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 16 00:02:05.228: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 16 00:02:05.325: INFO: rc: 1 Apr 16 00:02:05.325: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 16 00:02:15.325: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 16 00:02:15.422: INFO: rc: 1 Apr 16 00:02:15.422: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 16 00:02:25.423: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 16 00:02:25.526: INFO: rc: 1 Apr 16 00:02:25.526: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 16 00:02:35.526: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 16 00:02:35.630: INFO: rc: 1 Apr 16 00:02:35.630: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 16 00:02:45.630: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7275 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 16 00:02:45.732: INFO: rc: 1 Apr 16 00:02:45.732: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: Apr 16 00:02:45.732: INFO: Scaling statefulset ss to 0 Apr 16 00:02:45.739: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 16 00:02:45.741: INFO: Deleting all statefulset in ns statefulset-7275 Apr 16 00:02:45.743: INFO: Scaling statefulset ss to 0 Apr 16 00:02:45.750: INFO: Waiting for statefulset status.replicas updated to 0 Apr 16 00:02:45.752: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:02:45.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7275" for this suite. 
• [SLOW TEST:367.938 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":73,"skipped":1130,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:02:45.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-0c252b1b-4fc6-4375-81dc-f50f7ae5465d STEP: Creating a pod to test consume configMaps Apr 16 00:02:45.874: INFO: Waiting up to 5m0s for pod "pod-configmaps-6b992843-d6b1-4026-a4ef-8dae008d6c84" in namespace "configmap-4166" to be "Succeeded or Failed" Apr 16 00:02:45.878: INFO: Pod "pod-configmaps-6b992843-d6b1-4026-a4ef-8dae008d6c84": Phase="Pending", 
Reason="", readiness=false. Elapsed: 3.576322ms Apr 16 00:02:47.881: INFO: Pod "pod-configmaps-6b992843-d6b1-4026-a4ef-8dae008d6c84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007392667s Apr 16 00:02:49.888: INFO: Pod "pod-configmaps-6b992843-d6b1-4026-a4ef-8dae008d6c84": Phase="Running", Reason="", readiness=true. Elapsed: 4.014112256s Apr 16 00:02:51.896: INFO: Pod "pod-configmaps-6b992843-d6b1-4026-a4ef-8dae008d6c84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021688352s STEP: Saw pod success Apr 16 00:02:51.896: INFO: Pod "pod-configmaps-6b992843-d6b1-4026-a4ef-8dae008d6c84" satisfied condition "Succeeded or Failed" Apr 16 00:02:51.898: INFO: Trying to get logs from node latest-worker pod pod-configmaps-6b992843-d6b1-4026-a4ef-8dae008d6c84 container configmap-volume-test: STEP: delete the pod Apr 16 00:02:51.939: INFO: Waiting for pod pod-configmaps-6b992843-d6b1-4026-a4ef-8dae008d6c84 to disappear Apr 16 00:02:51.943: INFO: Pod pod-configmaps-6b992843-d6b1-4026-a4ef-8dae008d6c84 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:02:51.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4166" for this suite. 
• [SLOW TEST:6.159 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":74,"skipped":1154,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:02:51.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 16 00:02:52.013: INFO: The status of Pod test-webserver-5bb47157-a6d2-48a3-9eba-a5146c4466f2 is Pending, waiting for it to be Running (with Ready = true) Apr 16 00:02:54.017: INFO: The status of Pod test-webserver-5bb47157-a6d2-48a3-9eba-a5146c4466f2 is Pending, waiting for it to be Running (with Ready = true) Apr 16 00:02:56.202: INFO: The status of Pod test-webserver-5bb47157-a6d2-48a3-9eba-a5146c4466f2 is Running (Ready = false) 
Apr 16 00:02:58.017: INFO: The status of Pod test-webserver-5bb47157-a6d2-48a3-9eba-a5146c4466f2 is Running (Ready = false) Apr 16 00:03:00.017: INFO: The status of Pod test-webserver-5bb47157-a6d2-48a3-9eba-a5146c4466f2 is Running (Ready = false) Apr 16 00:03:02.017: INFO: The status of Pod test-webserver-5bb47157-a6d2-48a3-9eba-a5146c4466f2 is Running (Ready = false) Apr 16 00:03:04.017: INFO: The status of Pod test-webserver-5bb47157-a6d2-48a3-9eba-a5146c4466f2 is Running (Ready = false) Apr 16 00:03:06.017: INFO: The status of Pod test-webserver-5bb47157-a6d2-48a3-9eba-a5146c4466f2 is Running (Ready = false) Apr 16 00:03:08.017: INFO: The status of Pod test-webserver-5bb47157-a6d2-48a3-9eba-a5146c4466f2 is Running (Ready = false) Apr 16 00:03:10.017: INFO: The status of Pod test-webserver-5bb47157-a6d2-48a3-9eba-a5146c4466f2 is Running (Ready = false) Apr 16 00:03:12.017: INFO: The status of Pod test-webserver-5bb47157-a6d2-48a3-9eba-a5146c4466f2 is Running (Ready = false) Apr 16 00:03:14.017: INFO: The status of Pod test-webserver-5bb47157-a6d2-48a3-9eba-a5146c4466f2 is Running (Ready = false) Apr 16 00:03:16.017: INFO: The status of Pod test-webserver-5bb47157-a6d2-48a3-9eba-a5146c4466f2 is Running (Ready = false) Apr 16 00:03:18.017: INFO: The status of Pod test-webserver-5bb47157-a6d2-48a3-9eba-a5146c4466f2 is Running (Ready = true) Apr 16 00:03:18.020: INFO: Container started at 2020-04-16 00:02:54 +0000 UTC, pod became ready at 2020-04-16 00:03:17 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:03:18.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9103" for this suite. 
• [SLOW TEST:26.079 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":75,"skipped":1162,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:03:18.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:03:22.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-466" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":76,"skipped":1184,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:03:22.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 16 00:03:22.210: INFO: Waiting up to 5m0s for pod "pod-67311f7c-dc8e-4d76-bd4b-bea0dde2a435" in namespace "emptydir-8317" to be "Succeeded or Failed" Apr 16 00:03:22.228: INFO: Pod "pod-67311f7c-dc8e-4d76-bd4b-bea0dde2a435": Phase="Pending", Reason="", readiness=false. Elapsed: 18.382982ms Apr 16 00:03:24.306: INFO: Pod "pod-67311f7c-dc8e-4d76-bd4b-bea0dde2a435": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096522339s Apr 16 00:03:26.310: INFO: Pod "pod-67311f7c-dc8e-4d76-bd4b-bea0dde2a435": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.100049794s STEP: Saw pod success Apr 16 00:03:26.310: INFO: Pod "pod-67311f7c-dc8e-4d76-bd4b-bea0dde2a435" satisfied condition "Succeeded or Failed" Apr 16 00:03:26.312: INFO: Trying to get logs from node latest-worker pod pod-67311f7c-dc8e-4d76-bd4b-bea0dde2a435 container test-container: STEP: delete the pod Apr 16 00:03:26.358: INFO: Waiting for pod pod-67311f7c-dc8e-4d76-bd4b-bea0dde2a435 to disappear Apr 16 00:03:26.363: INFO: Pod pod-67311f7c-dc8e-4d76-bd4b-bea0dde2a435 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:03:26.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8317" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":77,"skipped":1230,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:03:26.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 16 00:03:30.576: INFO: Waiting up to 5m0s for pod "client-envvars-4442cbd7-91aa-40ca-8df9-2e29d5ce8eaa" in namespace "pods-5430" to be 
"Succeeded or Failed" Apr 16 00:03:30.579: INFO: Pod "client-envvars-4442cbd7-91aa-40ca-8df9-2e29d5ce8eaa": Phase="Pending", Reason="", readiness=false. Elapsed: 3.224175ms Apr 16 00:03:32.583: INFO: Pod "client-envvars-4442cbd7-91aa-40ca-8df9-2e29d5ce8eaa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006814222s Apr 16 00:03:34.587: INFO: Pod "client-envvars-4442cbd7-91aa-40ca-8df9-2e29d5ce8eaa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011518706s STEP: Saw pod success Apr 16 00:03:34.587: INFO: Pod "client-envvars-4442cbd7-91aa-40ca-8df9-2e29d5ce8eaa" satisfied condition "Succeeded or Failed" Apr 16 00:03:34.590: INFO: Trying to get logs from node latest-worker2 pod client-envvars-4442cbd7-91aa-40ca-8df9-2e29d5ce8eaa container env3cont: STEP: delete the pod Apr 16 00:03:34.633: INFO: Waiting for pod client-envvars-4442cbd7-91aa-40ca-8df9-2e29d5ce8eaa to disappear Apr 16 00:03:34.659: INFO: Pod client-envvars-4442cbd7-91aa-40ca-8df9-2e29d5ce8eaa no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:03:34.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5430" for this suite. 
• [SLOW TEST:8.296 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":78,"skipped":1244,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:03:34.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 16 00:03:35.179: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 16 00:03:37.190: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722592215, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722592215, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722592215, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722592215, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 16 00:03:40.218: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:03:40.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4886" for this suite. 
STEP: Destroying namespace "webhook-4886-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.046 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":79,"skipped":1257,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:03:40.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0416 00:03:41.839363 7 
metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 16 00:03:41.839: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:03:41.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7974" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":80,"skipped":1297,"failed":0} SSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:03:41.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 16 00:03:42.187: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-483 I0416 00:03:42.282309 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-483, replica count: 1 I0416 00:03:43.332750 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0416 00:03:44.332969 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0416 00:03:45.333313 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0416 00:03:46.333535 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 16 00:03:46.449: INFO: Created: latency-svc-s2cwx Apr 16 00:03:46.456: INFO: Got endpoints: 
latency-svc-s2cwx [22.2745ms] Apr 16 00:03:46.534: INFO: Created: latency-svc-9bfls Apr 16 00:03:46.538: INFO: Got endpoints: latency-svc-9bfls [82.682657ms] Apr 16 00:03:46.557: INFO: Created: latency-svc-pjkmp Apr 16 00:03:46.568: INFO: Got endpoints: latency-svc-pjkmp [112.038969ms] Apr 16 00:03:46.587: INFO: Created: latency-svc-jpnh6 Apr 16 00:03:46.599: INFO: Got endpoints: latency-svc-jpnh6 [142.797136ms] Apr 16 00:03:46.620: INFO: Created: latency-svc-sqbqj Apr 16 00:03:46.653: INFO: Got endpoints: latency-svc-sqbqj [197.108826ms] Apr 16 00:03:46.662: INFO: Created: latency-svc-sd7gs Apr 16 00:03:46.676: INFO: Got endpoints: latency-svc-sd7gs [220.561443ms] Apr 16 00:03:46.692: INFO: Created: latency-svc-pphqt Apr 16 00:03:46.707: INFO: Got endpoints: latency-svc-pphqt [251.129862ms] Apr 16 00:03:46.725: INFO: Created: latency-svc-748qq Apr 16 00:03:46.736: INFO: Got endpoints: latency-svc-748qq [280.170851ms] Apr 16 00:03:46.791: INFO: Created: latency-svc-rwnnk Apr 16 00:03:46.809: INFO: Created: latency-svc-ln7bs Apr 16 00:03:46.809: INFO: Got endpoints: latency-svc-rwnnk [352.81112ms] Apr 16 00:03:46.819: INFO: Got endpoints: latency-svc-ln7bs [363.47296ms] Apr 16 00:03:46.833: INFO: Created: latency-svc-b5sk4 Apr 16 00:03:46.849: INFO: Got endpoints: latency-svc-b5sk4 [393.211658ms] Apr 16 00:03:46.867: INFO: Created: latency-svc-4drjp Apr 16 00:03:46.879: INFO: Got endpoints: latency-svc-4drjp [423.575132ms] Apr 16 00:03:46.959: INFO: Created: latency-svc-ljtrw Apr 16 00:03:46.994: INFO: Got endpoints: latency-svc-ljtrw [537.91215ms] Apr 16 00:03:47.060: INFO: Created: latency-svc-vcr7d Apr 16 00:03:47.071: INFO: Got endpoints: latency-svc-vcr7d [614.848787ms] Apr 16 00:03:47.131: INFO: Created: latency-svc-hqbmt Apr 16 00:03:47.157: INFO: Got endpoints: latency-svc-hqbmt [700.95041ms] Apr 16 00:03:47.235: INFO: Created: latency-svc-xnj79 Apr 16 00:03:47.269: INFO: Created: latency-svc-tq5q4 Apr 16 00:03:47.269: INFO: Got endpoints: latency-svc-xnj79 
[813.006804ms] Apr 16 00:03:47.294: INFO: Got endpoints: latency-svc-tq5q4 [755.245435ms] Apr 16 00:03:47.324: INFO: Created: latency-svc-78mjh Apr 16 00:03:47.360: INFO: Got endpoints: latency-svc-78mjh [791.337972ms] Apr 16 00:03:47.431: INFO: Created: latency-svc-8jbbq Apr 16 00:03:47.492: INFO: Got endpoints: latency-svc-8jbbq [892.839834ms] Apr 16 00:03:47.496: INFO: Created: latency-svc-qkwtp Apr 16 00:03:47.509: INFO: Got endpoints: latency-svc-qkwtp [855.55983ms] Apr 16 00:03:47.541: INFO: Created: latency-svc-4jv6c Apr 16 00:03:47.563: INFO: Got endpoints: latency-svc-4jv6c [886.47259ms] Apr 16 00:03:47.580: INFO: Created: latency-svc-gfbtf Apr 16 00:03:47.630: INFO: Got endpoints: latency-svc-gfbtf [923.048972ms] Apr 16 00:03:47.633: INFO: Created: latency-svc-vp7r4 Apr 16 00:03:47.655: INFO: Created: latency-svc-bqzsx Apr 16 00:03:47.655: INFO: Got endpoints: latency-svc-vp7r4 [918.607076ms] Apr 16 00:03:47.670: INFO: Got endpoints: latency-svc-bqzsx [860.827638ms] Apr 16 00:03:47.710: INFO: Created: latency-svc-bw5sk Apr 16 00:03:47.756: INFO: Got endpoints: latency-svc-bw5sk [936.170642ms] Apr 16 00:03:47.778: INFO: Created: latency-svc-w7kcm Apr 16 00:03:47.941: INFO: Got endpoints: latency-svc-w7kcm [1.091519373s] Apr 16 00:03:47.960: INFO: Created: latency-svc-7hf4l Apr 16 00:03:47.970: INFO: Got endpoints: latency-svc-7hf4l [1.090388048s] Apr 16 00:03:47.992: INFO: Created: latency-svc-c7mw7 Apr 16 00:03:48.006: INFO: Got endpoints: latency-svc-c7mw7 [1.012157485s] Apr 16 00:03:48.079: INFO: Created: latency-svc-4ltkr Apr 16 00:03:48.083: INFO: Got endpoints: latency-svc-4ltkr [1.012309874s] Apr 16 00:03:48.102: INFO: Created: latency-svc-5kqd2 Apr 16 00:03:48.113: INFO: Got endpoints: latency-svc-5kqd2 [956.557799ms] Apr 16 00:03:48.133: INFO: Created: latency-svc-z79mg Apr 16 00:03:48.156: INFO: Got endpoints: latency-svc-z79mg [887.409566ms] Apr 16 00:03:48.217: INFO: Created: latency-svc-9pwkz Apr 16 00:03:48.243: INFO: Created: 
latency-svc-z4vnr Apr 16 00:03:48.244: INFO: Got endpoints: latency-svc-9pwkz [950.201943ms] Apr 16 00:03:48.258: INFO: Got endpoints: latency-svc-z4vnr [897.955773ms] Apr 16 00:03:48.284: INFO: Created: latency-svc-x9dtl Apr 16 00:03:48.300: INFO: Got endpoints: latency-svc-x9dtl [808.158115ms] Apr 16 00:03:48.360: INFO: Created: latency-svc-v4fb6 Apr 16 00:03:48.378: INFO: Created: latency-svc-g6wzx Apr 16 00:03:48.378: INFO: Got endpoints: latency-svc-v4fb6 [869.574179ms] Apr 16 00:03:48.389: INFO: Got endpoints: latency-svc-g6wzx [826.285546ms] Apr 16 00:03:48.428: INFO: Created: latency-svc-lcd6h Apr 16 00:03:48.444: INFO: Got endpoints: latency-svc-lcd6h [813.592737ms] Apr 16 00:03:48.499: INFO: Created: latency-svc-6cnbq Apr 16 00:03:48.525: INFO: Created: latency-svc-czq2b Apr 16 00:03:48.525: INFO: Got endpoints: latency-svc-6cnbq [869.781672ms] Apr 16 00:03:48.538: INFO: Got endpoints: latency-svc-czq2b [868.71269ms] Apr 16 00:03:48.565: INFO: Created: latency-svc-29hxl Apr 16 00:03:48.581: INFO: Got endpoints: latency-svc-29hxl [824.900439ms] Apr 16 00:03:48.635: INFO: Created: latency-svc-d669t Apr 16 00:03:48.656: INFO: Created: latency-svc-2vcbd Apr 16 00:03:48.655: INFO: Got endpoints: latency-svc-d669t [714.676467ms] Apr 16 00:03:48.671: INFO: Got endpoints: latency-svc-2vcbd [700.931839ms] Apr 16 00:03:48.699: INFO: Created: latency-svc-62nn7 Apr 16 00:03:48.713: INFO: Got endpoints: latency-svc-62nn7 [706.815906ms] Apr 16 00:03:48.728: INFO: Created: latency-svc-85f54 Apr 16 00:03:48.785: INFO: Got endpoints: latency-svc-85f54 [701.974365ms] Apr 16 00:03:48.788: INFO: Created: latency-svc-znjs4 Apr 16 00:03:48.797: INFO: Got endpoints: latency-svc-znjs4 [683.300946ms] Apr 16 00:03:48.834: INFO: Created: latency-svc-q4k8r Apr 16 00:03:48.851: INFO: Got endpoints: latency-svc-q4k8r [694.351894ms] Apr 16 00:03:48.929: INFO: Created: latency-svc-bxv5l Apr 16 00:03:48.934: INFO: Got endpoints: latency-svc-bxv5l [690.433546ms] Apr 16 00:03:48.975: INFO: 
Created: latency-svc-q4s5d Apr 16 00:03:48.989: INFO: Got endpoints: latency-svc-q4s5d [731.547415ms] Apr 16 00:03:49.004: INFO: Created: latency-svc-mvdxp Apr 16 00:03:49.019: INFO: Got endpoints: latency-svc-mvdxp [718.786278ms] Apr 16 00:03:49.066: INFO: Created: latency-svc-q7mcz Apr 16 00:03:49.099: INFO: Created: latency-svc-vxlsf Apr 16 00:03:49.099: INFO: Got endpoints: latency-svc-q7mcz [720.613057ms] Apr 16 00:03:49.127: INFO: Got endpoints: latency-svc-vxlsf [737.330632ms] Apr 16 00:03:49.154: INFO: Created: latency-svc-hjmgs Apr 16 00:03:49.186: INFO: Got endpoints: latency-svc-hjmgs [742.337894ms] Apr 16 00:03:49.208: INFO: Created: latency-svc-hhk6t Apr 16 00:03:49.222: INFO: Got endpoints: latency-svc-hhk6t [697.023501ms] Apr 16 00:03:49.244: INFO: Created: latency-svc-n5pnx Apr 16 00:03:49.258: INFO: Got endpoints: latency-svc-n5pnx [719.247691ms] Apr 16 00:03:49.278: INFO: Created: latency-svc-6dlkt Apr 16 00:03:49.312: INFO: Got endpoints: latency-svc-6dlkt [731.260154ms] Apr 16 00:03:49.326: INFO: Created: latency-svc-dzwht Apr 16 00:03:49.342: INFO: Got endpoints: latency-svc-dzwht [686.341554ms] Apr 16 00:03:49.370: INFO: Created: latency-svc-89mgf Apr 16 00:03:49.384: INFO: Got endpoints: latency-svc-89mgf [713.031437ms] Apr 16 00:03:49.401: INFO: Created: latency-svc-7qx95 Apr 16 00:03:49.426: INFO: Got endpoints: latency-svc-7qx95 [712.983756ms] Apr 16 00:03:49.446: INFO: Created: latency-svc-wq57w Apr 16 00:03:49.488: INFO: Got endpoints: latency-svc-wq57w [702.959037ms] Apr 16 00:03:49.518: INFO: Created: latency-svc-4qnxs Apr 16 00:03:49.549: INFO: Got endpoints: latency-svc-4qnxs [751.680029ms] Apr 16 00:03:49.568: INFO: Created: latency-svc-s8txg Apr 16 00:03:49.592: INFO: Got endpoints: latency-svc-s8txg [741.688009ms] Apr 16 00:03:49.630: INFO: Created: latency-svc-k8jbc Apr 16 00:03:49.708: INFO: Created: latency-svc-hg7nz Apr 16 00:03:49.708: INFO: Got endpoints: latency-svc-k8jbc [774.076548ms] Apr 16 00:03:49.713: INFO: Got 
endpoints: latency-svc-hg7nz [724.222636ms] Apr 16 00:03:49.728: INFO: Created: latency-svc-97jlk Apr 16 00:03:49.737: INFO: Got endpoints: latency-svc-97jlk [718.635062ms] Apr 16 00:03:49.755: INFO: Created: latency-svc-75p6q Apr 16 00:03:49.768: INFO: Got endpoints: latency-svc-75p6q [668.517343ms] Apr 16 00:03:49.784: INFO: Created: latency-svc-6pvrd Apr 16 00:03:49.797: INFO: Got endpoints: latency-svc-6pvrd [670.395535ms] Apr 16 00:03:49.843: INFO: Created: latency-svc-7rthl Apr 16 00:03:49.860: INFO: Created: latency-svc-5k4fk Apr 16 00:03:49.860: INFO: Got endpoints: latency-svc-7rthl [673.706161ms] Apr 16 00:03:49.890: INFO: Got endpoints: latency-svc-5k4fk [668.193433ms] Apr 16 00:03:49.920: INFO: Created: latency-svc-2qrjv Apr 16 00:03:49.935: INFO: Got endpoints: latency-svc-2qrjv [677.178402ms] Apr 16 00:03:49.971: INFO: Created: latency-svc-9mp4p Apr 16 00:03:49.988: INFO: Created: latency-svc-k8d7b Apr 16 00:03:49.988: INFO: Got endpoints: latency-svc-9mp4p [676.519363ms] Apr 16 00:03:50.024: INFO: Got endpoints: latency-svc-k8d7b [682.459234ms] Apr 16 00:03:50.059: INFO: Created: latency-svc-89t2j Apr 16 00:03:50.120: INFO: Got endpoints: latency-svc-89t2j [736.814527ms] Apr 16 00:03:50.122: INFO: Created: latency-svc-cmwjc Apr 16 00:03:50.126: INFO: Got endpoints: latency-svc-cmwjc [700.495186ms] Apr 16 00:03:50.144: INFO: Created: latency-svc-pf9lb Apr 16 00:03:50.157: INFO: Got endpoints: latency-svc-pf9lb [668.355089ms] Apr 16 00:03:50.175: INFO: Created: latency-svc-snwff Apr 16 00:03:50.187: INFO: Got endpoints: latency-svc-snwff [638.10877ms] Apr 16 00:03:50.204: INFO: Created: latency-svc-hfzgs Apr 16 00:03:50.217: INFO: Got endpoints: latency-svc-hfzgs [624.528477ms] Apr 16 00:03:50.264: INFO: Created: latency-svc-77vhk Apr 16 00:03:50.270: INFO: Got endpoints: latency-svc-77vhk [561.766171ms] Apr 16 00:03:50.292: INFO: Created: latency-svc-pnm2n Apr 16 00:03:50.313: INFO: Got endpoints: latency-svc-pnm2n [599.623761ms] Apr 16 00:03:50.334: 
INFO: Created: latency-svc-dkt56 Apr 16 00:03:50.349: INFO: Got endpoints: latency-svc-dkt56 [611.867324ms] Apr 16 00:03:50.396: INFO: Created: latency-svc-ntrkx Apr 16 00:03:50.415: INFO: Got endpoints: latency-svc-ntrkx [647.149521ms] Apr 16 00:03:50.417: INFO: Created: latency-svc-9h86b Apr 16 00:03:50.426: INFO: Got endpoints: latency-svc-9h86b [629.459233ms] Apr 16 00:03:50.445: INFO: Created: latency-svc-68cbm Apr 16 00:03:50.456: INFO: Got endpoints: latency-svc-68cbm [596.270302ms] Apr 16 00:03:50.472: INFO: Created: latency-svc-52w2v Apr 16 00:03:50.486: INFO: Got endpoints: latency-svc-52w2v [596.303634ms] Apr 16 00:03:50.528: INFO: Created: latency-svc-6v4w5 Apr 16 00:03:50.544: INFO: Got endpoints: latency-svc-6v4w5 [609.413858ms] Apr 16 00:03:50.545: INFO: Created: latency-svc-lt99b Apr 16 00:03:50.570: INFO: Got endpoints: latency-svc-lt99b [581.553587ms] Apr 16 00:03:50.595: INFO: Created: latency-svc-rk27q Apr 16 00:03:50.609: INFO: Got endpoints: latency-svc-rk27q [584.468188ms] Apr 16 00:03:50.624: INFO: Created: latency-svc-kl7jl Apr 16 00:03:50.659: INFO: Got endpoints: latency-svc-kl7jl [538.407416ms] Apr 16 00:03:50.681: INFO: Created: latency-svc-tn7mm Apr 16 00:03:50.690: INFO: Got endpoints: latency-svc-tn7mm [563.597932ms] Apr 16 00:03:50.712: INFO: Created: latency-svc-2fkhq Apr 16 00:03:50.720: INFO: Got endpoints: latency-svc-2fkhq [563.451736ms] Apr 16 00:03:50.736: INFO: Created: latency-svc-fp575 Apr 16 00:03:50.744: INFO: Got endpoints: latency-svc-fp575 [556.924179ms] Apr 16 00:03:50.791: INFO: Created: latency-svc-wz74j Apr 16 00:03:50.810: INFO: Got endpoints: latency-svc-wz74j [592.941973ms] Apr 16 00:03:50.810: INFO: Created: latency-svc-lxhnv Apr 16 00:03:50.822: INFO: Got endpoints: latency-svc-lxhnv [551.789101ms] Apr 16 00:03:50.840: INFO: Created: latency-svc-92fjw Apr 16 00:03:50.852: INFO: Got endpoints: latency-svc-92fjw [538.761072ms] Apr 16 00:03:50.876: INFO: Created: latency-svc-f9nqr Apr 16 00:03:50.888: INFO: Got 
endpoints: latency-svc-f9nqr [538.356593ms] Apr 16 00:03:50.922: INFO: Created: latency-svc-mnt7q Apr 16 00:03:50.935: INFO: Got endpoints: latency-svc-mnt7q [520.364818ms] Apr 16 00:03:50.952: INFO: Created: latency-svc-gzvhk Apr 16 00:03:50.966: INFO: Got endpoints: latency-svc-gzvhk [539.294691ms] Apr 16 00:03:50.982: INFO: Created: latency-svc-fzhpf Apr 16 00:03:50.995: INFO: Got endpoints: latency-svc-fzhpf [539.173599ms] Apr 16 00:03:51.015: INFO: Created: latency-svc-qt444 Apr 16 00:03:51.042: INFO: Got endpoints: latency-svc-qt444 [555.940184ms] Apr 16 00:03:51.056: INFO: Created: latency-svc-b2lh9 Apr 16 00:03:51.067: INFO: Got endpoints: latency-svc-b2lh9 [522.875457ms] Apr 16 00:03:51.080: INFO: Created: latency-svc-hpfgc Apr 16 00:03:51.091: INFO: Got endpoints: latency-svc-hpfgc [521.245845ms] Apr 16 00:03:51.108: INFO: Created: latency-svc-tgvbf Apr 16 00:03:51.122: INFO: Got endpoints: latency-svc-tgvbf [512.915297ms] Apr 16 00:03:51.174: INFO: Created: latency-svc-qk7bt Apr 16 00:03:51.200: INFO: Got endpoints: latency-svc-qk7bt [541.190582ms] Apr 16 00:03:51.201: INFO: Created: latency-svc-dfcjl Apr 16 00:03:51.220: INFO: Got endpoints: latency-svc-dfcjl [529.632141ms] Apr 16 00:03:51.237: INFO: Created: latency-svc-7jwbn Apr 16 00:03:51.247: INFO: Got endpoints: latency-svc-7jwbn [526.799009ms] Apr 16 00:03:51.266: INFO: Created: latency-svc-cxpkq Apr 16 00:03:51.306: INFO: Got endpoints: latency-svc-cxpkq [562.341428ms] Apr 16 00:03:51.318: INFO: Created: latency-svc-v5gg7 Apr 16 00:03:51.332: INFO: Got endpoints: latency-svc-v5gg7 [521.545887ms] Apr 16 00:03:51.360: INFO: Created: latency-svc-7vpcc Apr 16 00:03:51.374: INFO: Got endpoints: latency-svc-7vpcc [551.469889ms] Apr 16 00:03:51.391: INFO: Created: latency-svc-7g74d Apr 16 00:03:51.403: INFO: Got endpoints: latency-svc-7g74d [550.821695ms] Apr 16 00:03:51.450: INFO: Created: latency-svc-cl2k6 Apr 16 00:03:51.463: INFO: Got endpoints: latency-svc-cl2k6 [575.446715ms] Apr 16 00:03:51.488: 
INFO: Created: latency-svc-tn67g Apr 16 00:03:51.511: INFO: Got endpoints: latency-svc-tn67g [575.871388ms] Apr 16 00:03:51.546: INFO: Created: latency-svc-8kpf4 Apr 16 00:03:51.575: INFO: Got endpoints: latency-svc-8kpf4 [609.583775ms] Apr 16 00:03:51.582: INFO: Created: latency-svc-zv9xx Apr 16 00:03:51.595: INFO: Got endpoints: latency-svc-zv9xx [599.427544ms] Apr 16 00:03:51.619: INFO: Created: latency-svc-5s547 Apr 16 00:03:51.647: INFO: Got endpoints: latency-svc-5s547 [604.551973ms] Apr 16 00:03:51.663: INFO: Created: latency-svc-nn2rx Apr 16 00:03:51.673: INFO: Got endpoints: latency-svc-nn2rx [605.428724ms] Apr 16 00:03:51.735: INFO: Created: latency-svc-sllxz Apr 16 00:03:51.762: INFO: Got endpoints: latency-svc-sllxz [670.51697ms] Apr 16 00:03:51.786: INFO: Created: latency-svc-xgtl9 Apr 16 00:03:51.799: INFO: Got endpoints: latency-svc-xgtl9 [677.481801ms] Apr 16 00:03:51.839: INFO: Created: latency-svc-24rxs Apr 16 00:03:51.846: INFO: Got endpoints: latency-svc-24rxs [646.141415ms] Apr 16 00:03:51.867: INFO: Created: latency-svc-pj9nl Apr 16 00:03:51.896: INFO: Got endpoints: latency-svc-pj9nl [676.663675ms] Apr 16 00:03:51.927: INFO: Created: latency-svc-p9k9g Apr 16 00:03:51.965: INFO: Got endpoints: latency-svc-p9k9g [717.550616ms] Apr 16 00:03:51.978: INFO: Created: latency-svc-z42m7 Apr 16 00:03:51.991: INFO: Got endpoints: latency-svc-z42m7 [685.138981ms] Apr 16 00:03:52.026: INFO: Created: latency-svc-85g2d Apr 16 00:03:52.050: INFO: Got endpoints: latency-svc-85g2d [718.431194ms] Apr 16 00:03:52.119: INFO: Created: latency-svc-pf469 Apr 16 00:03:52.134: INFO: Got endpoints: latency-svc-pf469 [760.504567ms] Apr 16 00:03:52.148: INFO: Created: latency-svc-4vnz5 Apr 16 00:03:52.158: INFO: Got endpoints: latency-svc-4vnz5 [755.112434ms] Apr 16 00:03:52.170: INFO: Created: latency-svc-6qg6m Apr 16 00:03:52.182: INFO: Got endpoints: latency-svc-6qg6m [718.764491ms] Apr 16 00:03:52.200: INFO: Created: latency-svc-kxmg6 Apr 16 00:03:52.228: INFO: Got 
endpoints: latency-svc-kxmg6 [717.15342ms] Apr 16 00:03:52.242: INFO: Created: latency-svc-pk5jq Apr 16 00:03:52.254: INFO: Got endpoints: latency-svc-pk5jq [678.37931ms] Apr 16 00:03:52.274: INFO: Created: latency-svc-2q78g Apr 16 00:03:52.290: INFO: Got endpoints: latency-svc-2q78g [695.356086ms] Apr 16 00:03:52.316: INFO: Created: latency-svc-mngmx Apr 16 00:03:52.348: INFO: Got endpoints: latency-svc-mngmx [700.726228ms] Apr 16 00:03:52.370: INFO: Created: latency-svc-fd6pq Apr 16 00:03:52.386: INFO: Got endpoints: latency-svc-fd6pq [712.774641ms] Apr 16 00:03:52.404: INFO: Created: latency-svc-l66hb Apr 16 00:03:52.416: INFO: Got endpoints: latency-svc-l66hb [654.410548ms] Apr 16 00:03:52.504: INFO: Created: latency-svc-wbs4z Apr 16 00:03:52.532: INFO: Got endpoints: latency-svc-wbs4z [732.421932ms] Apr 16 00:03:52.532: INFO: Created: latency-svc-vbtt7 Apr 16 00:03:52.548: INFO: Got endpoints: latency-svc-vbtt7 [701.11253ms] Apr 16 00:03:52.568: INFO: Created: latency-svc-m64tl Apr 16 00:03:52.584: INFO: Got endpoints: latency-svc-m64tl [687.739089ms] Apr 16 00:03:52.635: INFO: Created: latency-svc-sr8ws Apr 16 00:03:52.655: INFO: Got endpoints: latency-svc-sr8ws [690.730782ms] Apr 16 00:03:52.656: INFO: Created: latency-svc-np5nf Apr 16 00:03:52.667: INFO: Got endpoints: latency-svc-np5nf [675.724784ms] Apr 16 00:03:52.686: INFO: Created: latency-svc-nxjzj Apr 16 00:03:52.697: INFO: Got endpoints: latency-svc-nxjzj [647.037233ms] Apr 16 00:03:52.716: INFO: Created: latency-svc-hq6fz Apr 16 00:03:52.728: INFO: Got endpoints: latency-svc-hq6fz [593.517813ms] Apr 16 00:03:52.761: INFO: Created: latency-svc-z69ls Apr 16 00:03:52.784: INFO: Created: latency-svc-kwp2t Apr 16 00:03:52.784: INFO: Got endpoints: latency-svc-z69ls [626.51821ms] Apr 16 00:03:52.803: INFO: Got endpoints: latency-svc-kwp2t [620.949555ms] Apr 16 00:03:52.818: INFO: Created: latency-svc-xzx72 Apr 16 00:03:52.842: INFO: Got endpoints: latency-svc-xzx72 [613.37017ms] Apr 16 00:03:52.906: 
INFO: Created: latency-svc-cf5xj Apr 16 00:03:52.934: INFO: Got endpoints: latency-svc-cf5xj [680.245634ms] Apr 16 00:03:52.935: INFO: Created: latency-svc-vcbxt Apr 16 00:03:52.943: INFO: Got endpoints: latency-svc-vcbxt [652.479446ms] Apr 16 00:03:52.970: INFO: Created: latency-svc-mmkmx Apr 16 00:03:53.001: INFO: Got endpoints: latency-svc-mmkmx [652.653585ms] Apr 16 00:03:53.055: INFO: Created: latency-svc-zfpfz Apr 16 00:03:53.063: INFO: Got endpoints: latency-svc-zfpfz [677.15649ms] Apr 16 00:03:53.075: INFO: Created: latency-svc-vffjq Apr 16 00:03:53.087: INFO: Got endpoints: latency-svc-vffjq [670.73529ms] Apr 16 00:03:53.100: INFO: Created: latency-svc-bbvxk Apr 16 00:03:53.126: INFO: Got endpoints: latency-svc-bbvxk [594.457935ms] Apr 16 00:03:53.192: INFO: Created: latency-svc-5n2zx Apr 16 00:03:53.214: INFO: Got endpoints: latency-svc-5n2zx [666.292168ms] Apr 16 00:03:53.215: INFO: Created: latency-svc-5p58z Apr 16 00:03:53.225: INFO: Got endpoints: latency-svc-5p58z [640.975887ms] Apr 16 00:03:53.244: INFO: Created: latency-svc-g5njv Apr 16 00:03:53.280: INFO: Got endpoints: latency-svc-g5njv [624.425683ms] Apr 16 00:03:53.330: INFO: Created: latency-svc-29f4x Apr 16 00:03:53.354: INFO: Created: latency-svc-9hljf Apr 16 00:03:53.355: INFO: Got endpoints: latency-svc-29f4x [687.895393ms] Apr 16 00:03:53.368: INFO: Got endpoints: latency-svc-9hljf [671.071089ms] Apr 16 00:03:53.406: INFO: Created: latency-svc-wc5zq Apr 16 00:03:53.448: INFO: Got endpoints: latency-svc-wc5zq [720.694185ms] Apr 16 00:03:53.478: INFO: Created: latency-svc-47kfz Apr 16 00:03:53.494: INFO: Got endpoints: latency-svc-47kfz [709.905111ms] Apr 16 00:03:53.522: INFO: Created: latency-svc-zs9f9 Apr 16 00:03:53.536: INFO: Got endpoints: latency-svc-zs9f9 [733.32745ms] Apr 16 00:03:53.581: INFO: Created: latency-svc-9jsg2 Apr 16 00:03:53.598: INFO: Got endpoints: latency-svc-9jsg2 [756.077548ms] Apr 16 00:03:53.628: INFO: Created: latency-svc-tfw7f Apr 16 00:03:53.638: INFO: Got 
endpoints: latency-svc-tfw7f [704.224567ms] Apr 16 00:03:53.664: INFO: Created: latency-svc-pn549 Apr 16 00:03:53.719: INFO: Got endpoints: latency-svc-pn549 [776.342612ms] Apr 16 00:03:53.738: INFO: Created: latency-svc-2l92w Apr 16 00:03:53.762: INFO: Got endpoints: latency-svc-2l92w [761.377945ms] Apr 16 00:03:53.792: INFO: Created: latency-svc-ks9n4 Apr 16 00:03:53.806: INFO: Got endpoints: latency-svc-ks9n4 [743.190417ms] Apr 16 00:03:53.851: INFO: Created: latency-svc-dpf85 Apr 16 00:03:53.855: INFO: Got endpoints: latency-svc-dpf85 [768.058056ms] Apr 16 00:03:53.886: INFO: Created: latency-svc-8pfwp Apr 16 00:03:53.896: INFO: Got endpoints: latency-svc-8pfwp [769.520762ms] Apr 16 00:03:53.916: INFO: Created: latency-svc-gppts Apr 16 00:03:53.932: INFO: Got endpoints: latency-svc-gppts [717.552616ms] Apr 16 00:03:53.948: INFO: Created: latency-svc-w7zc6 Apr 16 00:03:53.977: INFO: Got endpoints: latency-svc-w7zc6 [751.506507ms] Apr 16 00:03:53.990: INFO: Created: latency-svc-xxh7d Apr 16 00:03:54.008: INFO: Got endpoints: latency-svc-xxh7d [727.849535ms] Apr 16 00:03:54.054: INFO: Created: latency-svc-mcl2g Apr 16 00:03:54.070: INFO: Got endpoints: latency-svc-mcl2g [714.422858ms] Apr 16 00:03:54.144: INFO: Created: latency-svc-fm4nb Apr 16 00:03:54.158: INFO: Got endpoints: latency-svc-fm4nb [789.709531ms] Apr 16 00:03:54.158: INFO: Created: latency-svc-h6fsq Apr 16 00:03:54.171: INFO: Got endpoints: latency-svc-h6fsq [722.4076ms] Apr 16 00:03:54.201: INFO: Created: latency-svc-6s486 Apr 16 00:03:54.213: INFO: Got endpoints: latency-svc-6s486 [718.746659ms] Apr 16 00:03:54.230: INFO: Created: latency-svc-5d6fb Apr 16 00:03:54.243: INFO: Got endpoints: latency-svc-5d6fb [706.572518ms] Apr 16 00:03:54.300: INFO: Created: latency-svc-c7hqx Apr 16 00:03:54.309: INFO: Got endpoints: latency-svc-c7hqx [711.268905ms] Apr 16 00:03:54.324: INFO: Created: latency-svc-phrql Apr 16 00:03:54.333: INFO: Got endpoints: latency-svc-phrql [694.895031ms] Apr 16 00:03:54.366: 
INFO: Created: latency-svc-97znq Apr 16 00:03:54.375: INFO: Got endpoints: latency-svc-97znq [656.320937ms] Apr 16 00:03:54.398: INFO: Created: latency-svc-prjcj Apr 16 00:03:54.467: INFO: Got endpoints: latency-svc-prjcj [705.365418ms] Apr 16 00:03:54.469: INFO: Created: latency-svc-n2ln4 Apr 16 00:03:54.477: INFO: Got endpoints: latency-svc-n2ln4 [671.066116ms] Apr 16 00:03:54.503: INFO: Created: latency-svc-c28q4 Apr 16 00:03:54.525: INFO: Got endpoints: latency-svc-c28q4 [670.138197ms] Apr 16 00:03:54.552: INFO: Created: latency-svc-9chl8 Apr 16 00:03:54.566: INFO: Got endpoints: latency-svc-9chl8 [670.290092ms] Apr 16 00:03:54.602: INFO: Created: latency-svc-qps6j Apr 16 00:03:54.626: INFO: Got endpoints: latency-svc-qps6j [694.611286ms] Apr 16 00:03:54.650: INFO: Created: latency-svc-hzg82 Apr 16 00:03:54.663: INFO: Got endpoints: latency-svc-hzg82 [685.933767ms] Apr 16 00:03:54.680: INFO: Created: latency-svc-lp9mv Apr 16 00:03:54.692: INFO: Got endpoints: latency-svc-lp9mv [684.171402ms] Apr 16 00:03:54.737: INFO: Created: latency-svc-wlmdh Apr 16 00:03:54.752: INFO: Got endpoints: latency-svc-wlmdh [682.334168ms] Apr 16 00:03:54.776: INFO: Created: latency-svc-hsl5f Apr 16 00:03:54.788: INFO: Got endpoints: latency-svc-hsl5f [629.962057ms] Apr 16 00:03:54.806: INFO: Created: latency-svc-2hjmf Apr 16 00:03:54.819: INFO: Got endpoints: latency-svc-2hjmf [647.70513ms] Apr 16 00:03:54.899: INFO: Created: latency-svc-qwt2b Apr 16 00:03:54.920: INFO: Got endpoints: latency-svc-qwt2b [707.083986ms] Apr 16 00:03:54.921: INFO: Created: latency-svc-h5dfd Apr 16 00:03:54.933: INFO: Got endpoints: latency-svc-h5dfd [689.77651ms] Apr 16 00:03:54.950: INFO: Created: latency-svc-cchpx Apr 16 00:03:54.968: INFO: Got endpoints: latency-svc-cchpx [659.018763ms] Apr 16 00:03:55.067: INFO: Created: latency-svc-4wqlm Apr 16 00:03:55.092: INFO: Got endpoints: latency-svc-4wqlm [758.292571ms] Apr 16 00:03:55.092: INFO: Created: latency-svc-cl7zw Apr 16 00:03:55.100: INFO: Got 
endpoints: latency-svc-cl7zw [724.796101ms] Apr 16 00:03:55.130: INFO: Created: latency-svc-8msxw Apr 16 00:03:55.136: INFO: Got endpoints: latency-svc-8msxw [668.621704ms] Apr 16 00:03:55.155: INFO: Created: latency-svc-5gbs6 Apr 16 00:03:55.223: INFO: Got endpoints: latency-svc-5gbs6 [745.296478ms] Apr 16 00:03:55.242: INFO: Created: latency-svc-cztz6 Apr 16 00:03:55.262: INFO: Got endpoints: latency-svc-cztz6 [736.7461ms] Apr 16 00:03:55.278: INFO: Created: latency-svc-t5bpw Apr 16 00:03:55.291: INFO: Got endpoints: latency-svc-t5bpw [725.097208ms] Apr 16 00:03:55.354: INFO: Created: latency-svc-wqlbp Apr 16 00:03:55.379: INFO: Got endpoints: latency-svc-wqlbp [752.334328ms] Apr 16 00:03:55.419: INFO: Created: latency-svc-rf696 Apr 16 00:03:55.429: INFO: Got endpoints: latency-svc-rf696 [766.168742ms] Apr 16 00:03:55.446: INFO: Created: latency-svc-p55m2 Apr 16 00:03:55.474: INFO: Got endpoints: latency-svc-p55m2 [781.542998ms] Apr 16 00:03:55.488: INFO: Created: latency-svc-vqhfd Apr 16 00:03:55.512: INFO: Got endpoints: latency-svc-vqhfd [759.924206ms] Apr 16 00:03:55.538: INFO: Created: latency-svc-bl9dc Apr 16 00:03:55.549: INFO: Got endpoints: latency-svc-bl9dc [761.436901ms] Apr 16 00:03:55.570: INFO: Created: latency-svc-ftv4h Apr 16 00:03:55.606: INFO: Got endpoints: latency-svc-ftv4h [786.80264ms] Apr 16 00:03:55.616: INFO: Created: latency-svc-6wh57 Apr 16 00:03:55.638: INFO: Got endpoints: latency-svc-6wh57 [717.513358ms] Apr 16 00:03:55.674: INFO: Created: latency-svc-jhd8b Apr 16 00:03:55.700: INFO: Got endpoints: latency-svc-jhd8b [766.667157ms] Apr 16 00:03:55.761: INFO: Created: latency-svc-t8plw Apr 16 00:03:55.784: INFO: Got endpoints: latency-svc-t8plw [815.790674ms] Apr 16 00:03:55.785: INFO: Created: latency-svc-kq54w Apr 16 00:03:55.808: INFO: Got endpoints: latency-svc-kq54w [716.257928ms] Apr 16 00:03:55.808: INFO: Latencies: [82.682657ms 112.038969ms 142.797136ms 197.108826ms 220.561443ms 251.129862ms 280.170851ms 352.81112ms 363.47296ms 
393.211658ms 423.575132ms 512.915297ms 520.364818ms 521.245845ms 521.545887ms 522.875457ms 526.799009ms 529.632141ms 537.91215ms 538.356593ms 538.407416ms 538.761072ms 539.173599ms 539.294691ms 541.190582ms 550.821695ms 551.469889ms 551.789101ms 555.940184ms 556.924179ms 561.766171ms 562.341428ms 563.451736ms 563.597932ms 575.446715ms 575.871388ms 581.553587ms 584.468188ms 592.941973ms 593.517813ms 594.457935ms 596.270302ms 596.303634ms 599.427544ms 599.623761ms 604.551973ms 605.428724ms 609.413858ms 609.583775ms 611.867324ms 613.37017ms 614.848787ms 620.949555ms 624.425683ms 624.528477ms 626.51821ms 629.459233ms 629.962057ms 638.10877ms 640.975887ms 646.141415ms 647.037233ms 647.149521ms 647.70513ms 652.479446ms 652.653585ms 654.410548ms 656.320937ms 659.018763ms 666.292168ms 668.193433ms 668.355089ms 668.517343ms 668.621704ms 670.138197ms 670.290092ms 670.395535ms 670.51697ms 670.73529ms 671.066116ms 671.071089ms 673.706161ms 675.724784ms 676.519363ms 676.663675ms 677.15649ms 677.178402ms 677.481801ms 678.37931ms 680.245634ms 682.334168ms 682.459234ms 683.300946ms 684.171402ms 685.138981ms 685.933767ms 686.341554ms 687.739089ms 687.895393ms 689.77651ms 690.433546ms 690.730782ms 694.351894ms 694.611286ms 694.895031ms 695.356086ms 697.023501ms 700.495186ms 700.726228ms 700.931839ms 700.95041ms 701.11253ms 701.974365ms 702.959037ms 704.224567ms 705.365418ms 706.572518ms 706.815906ms 707.083986ms 709.905111ms 711.268905ms 712.774641ms 712.983756ms 713.031437ms 714.422858ms 714.676467ms 716.257928ms 717.15342ms 717.513358ms 717.550616ms 717.552616ms 718.431194ms 718.635062ms 718.746659ms 718.764491ms 718.786278ms 719.247691ms 720.613057ms 720.694185ms 722.4076ms 724.222636ms 724.796101ms 725.097208ms 727.849535ms 731.260154ms 731.547415ms 732.421932ms 733.32745ms 736.7461ms 736.814527ms 737.330632ms 741.688009ms 742.337894ms 743.190417ms 745.296478ms 751.506507ms 751.680029ms 752.334328ms 755.112434ms 755.245435ms 756.077548ms 758.292571ms 759.924206ms 760.504567ms 
761.377945ms 761.436901ms 766.168742ms 766.667157ms 768.058056ms 769.520762ms 774.076548ms 776.342612ms 781.542998ms 786.80264ms 789.709531ms 791.337972ms 808.158115ms 813.006804ms 813.592737ms 815.790674ms 824.900439ms 826.285546ms 855.55983ms 860.827638ms 868.71269ms 869.574179ms 869.781672ms 886.47259ms 887.409566ms 892.839834ms 897.955773ms 918.607076ms 923.048972ms 936.170642ms 950.201943ms 956.557799ms 1.012157485s 1.012309874s 1.090388048s 1.091519373s] Apr 16 00:03:55.808: INFO: 50 %ile: 690.433546ms Apr 16 00:03:55.808: INFO: 90 %ile: 824.900439ms Apr 16 00:03:55.808: INFO: 99 %ile: 1.090388048s Apr 16 00:03:55.808: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:03:55.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-483" for this suite. • [SLOW TEST:13.985 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":275,"completed":81,"skipped":1304,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:03:55.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in 
namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1495.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1495.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1495.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1495.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 16 00:04:02.031: INFO: DNS probes using dns-test-a597aa89-a341-4ade-8ef4-cbc2b7b5f846 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1495.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1495.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1495.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1495.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 16 00:04:08.326: INFO: File wheezy_udp@dns-test-service-3.dns-1495.svc.cluster.local from pod dns-1495/dns-test-cdbc3bbd-0beb-4917-a627-2d3724ca40fb contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 16 00:04:08.336: INFO: File jessie_udp@dns-test-service-3.dns-1495.svc.cluster.local from pod dns-1495/dns-test-cdbc3bbd-0beb-4917-a627-2d3724ca40fb contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 16 00:04:08.336: INFO: Lookups using dns-1495/dns-test-cdbc3bbd-0beb-4917-a627-2d3724ca40fb failed for: [wheezy_udp@dns-test-service-3.dns-1495.svc.cluster.local jessie_udp@dns-test-service-3.dns-1495.svc.cluster.local] Apr 16 00:04:13.348: INFO: File wheezy_udp@dns-test-service-3.dns-1495.svc.cluster.local from pod dns-1495/dns-test-cdbc3bbd-0beb-4917-a627-2d3724ca40fb contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 16 00:04:13.362: INFO: File jessie_udp@dns-test-service-3.dns-1495.svc.cluster.local from pod dns-1495/dns-test-cdbc3bbd-0beb-4917-a627-2d3724ca40fb contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 16 00:04:13.362: INFO: Lookups using dns-1495/dns-test-cdbc3bbd-0beb-4917-a627-2d3724ca40fb failed for: [wheezy_udp@dns-test-service-3.dns-1495.svc.cluster.local jessie_udp@dns-test-service-3.dns-1495.svc.cluster.local] Apr 16 00:04:18.349: INFO: File wheezy_udp@dns-test-service-3.dns-1495.svc.cluster.local from pod dns-1495/dns-test-cdbc3bbd-0beb-4917-a627-2d3724ca40fb contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 16 00:04:18.364: INFO: File jessie_udp@dns-test-service-3.dns-1495.svc.cluster.local from pod dns-1495/dns-test-cdbc3bbd-0beb-4917-a627-2d3724ca40fb contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 16 00:04:18.364: INFO: Lookups using dns-1495/dns-test-cdbc3bbd-0beb-4917-a627-2d3724ca40fb failed for: [wheezy_udp@dns-test-service-3.dns-1495.svc.cluster.local jessie_udp@dns-test-service-3.dns-1495.svc.cluster.local] Apr 16 00:04:23.341: INFO: File wheezy_udp@dns-test-service-3.dns-1495.svc.cluster.local from pod dns-1495/dns-test-cdbc3bbd-0beb-4917-a627-2d3724ca40fb contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 16 00:04:23.344: INFO: File jessie_udp@dns-test-service-3.dns-1495.svc.cluster.local from pod dns-1495/dns-test-cdbc3bbd-0beb-4917-a627-2d3724ca40fb contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 16 00:04:23.344: INFO: Lookups using dns-1495/dns-test-cdbc3bbd-0beb-4917-a627-2d3724ca40fb failed for: [wheezy_udp@dns-test-service-3.dns-1495.svc.cluster.local jessie_udp@dns-test-service-3.dns-1495.svc.cluster.local] Apr 16 00:04:28.341: INFO: File wheezy_udp@dns-test-service-3.dns-1495.svc.cluster.local from pod dns-1495/dns-test-cdbc3bbd-0beb-4917-a627-2d3724ca40fb contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 16 00:04:28.344: INFO: File jessie_udp@dns-test-service-3.dns-1495.svc.cluster.local from pod dns-1495/dns-test-cdbc3bbd-0beb-4917-a627-2d3724ca40fb contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 16 00:04:28.344: INFO: Lookups using dns-1495/dns-test-cdbc3bbd-0beb-4917-a627-2d3724ca40fb failed for: [wheezy_udp@dns-test-service-3.dns-1495.svc.cluster.local jessie_udp@dns-test-service-3.dns-1495.svc.cluster.local] Apr 16 00:04:33.345: INFO: DNS probes using dns-test-cdbc3bbd-0beb-4917-a627-2d3724ca40fb succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1495.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-1495.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1495.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-1495.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 16 00:04:39.732: INFO: DNS probes using dns-test-77293af1-4042-4d77-befd-e9d83b6ae170 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:04:39.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "dns-1495" for this suite. • [SLOW TEST:43.974 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":82,"skipped":1312,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:04:39.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service multi-endpoint-test in namespace services-4063 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4063 to expose endpoints map[] Apr 16 00:04:40.292: INFO: successfully validated that service multi-endpoint-test in namespace services-4063 exposes endpoints map[] (40.844807ms elapsed) STEP: Creating pod pod1 in namespace services-4063 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4063 to expose endpoints map[pod1:[100]] Apr 16 00:04:43.425: INFO: successfully validated that service 
multi-endpoint-test in namespace services-4063 exposes endpoints map[pod1:[100]] (3.12138744s elapsed) STEP: Creating pod pod2 in namespace services-4063 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4063 to expose endpoints map[pod1:[100] pod2:[101]] Apr 16 00:04:47.546: INFO: successfully validated that service multi-endpoint-test in namespace services-4063 exposes endpoints map[pod1:[100] pod2:[101]] (4.116970556s elapsed) STEP: Deleting pod pod1 in namespace services-4063 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4063 to expose endpoints map[pod2:[101]] Apr 16 00:04:48.619: INFO: successfully validated that service multi-endpoint-test in namespace services-4063 exposes endpoints map[pod2:[101]] (1.06712155s elapsed) STEP: Deleting pod pod2 in namespace services-4063 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4063 to expose endpoints map[] Apr 16 00:04:49.638: INFO: successfully validated that service multi-endpoint-test in namespace services-4063 exposes endpoints map[] (1.014511043s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:04:49.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4063" for this suite. 
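The repeated "waiting up to 3m0s for service ... to expose endpoints map[...]" entries above, each reporting an elapsed time, reflect a poll-and-compare loop. A minimal sketch of that pattern (hypothetical helper names, not the actual e2e framework code):

```python
import time

def wait_for_endpoints(get_endpoints, expected, timeout=180.0, interval=0.01):
    """Poll get_endpoints() until it equals `expected` or `timeout` elapses.

    Mirrors the log's 'waiting up to 3m0s for service ... to expose
    endpoints map[...]' entries; returns the elapsed seconds on success.
    """
    start = time.monotonic()
    while True:
        if get_endpoints() == expected:
            return time.monotonic() - start
        if time.monotonic() - start > timeout:
            raise TimeoutError(f"endpoints never matched {expected!r}")
        time.sleep(interval)

# Simulated backend: pod2's endpoint appears on the third poll,
# like the log's transition from map[pod1:[100]] to map[pod1:[100] pod2:[101]].
calls = {"n": 0}
def fake_endpoints():
    calls["n"] += 1
    return {"pod1": [100], "pod2": [101]} if calls["n"] >= 3 else {"pod1": [100]}

elapsed = wait_for_endpoints(fake_endpoints, {"pod1": [100], "pod2": [101]})
```

The elapsed value returned here corresponds to the "(4.116970556s elapsed)" figures the framework logs after each successful validation.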
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:9.918 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":275,"completed":83,"skipped":1330,"failed":0} SSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:04:49.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Apr 16 00:04:49.760: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:04:56.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4691" for this suite. 
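The InitContainer test above exercises the guarantee that entries in spec.initContainers run sequentially to completion before the app containers start, and that for a RestartNever pod a failed init container fails the whole pod. An illustrative simulation of that ordering (this is a sketch of the semantics, not kubelet code):

```python
def run_pod(init_containers, containers):
    """Simulate RestartNever pod startup: init containers run one at a
    time, in order, and each must exit 0 before app containers start."""
    for init in init_containers:
        if init() != 0:          # non-zero exit = init failure -> pod Failed
            return "Failed"
    for c in containers:
        c()
    return "Succeeded"

order = []
phase = run_pod(
    init_containers=[lambda: order.append("init-1") or 0,
                     lambda: order.append("init-2") or 0],
    containers=[lambda: order.append("app")],
)
```

Both init steps record themselves before the app container runs, matching the invocation order the conformance test verifies.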
• [SLOW TEST:7.265 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":84,"skipped":1338,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:04:56.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-b011cf5e-6125-471a-9868-9d8bc3e2ac08 STEP: Creating a pod to test consume secrets Apr 16 00:04:57.073: INFO: Waiting up to 5m0s for pod "pod-secrets-538d1d58-9394-478c-8cbf-148f49af2bad" in namespace "secrets-7044" to be "Succeeded or Failed" Apr 16 00:04:57.076: INFO: Pod "pod-secrets-538d1d58-9394-478c-8cbf-148f49af2bad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.516703ms Apr 16 00:04:59.083: INFO: Pod "pod-secrets-538d1d58-9394-478c-8cbf-148f49af2bad": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.009226587s Apr 16 00:05:01.087: INFO: Pod "pod-secrets-538d1d58-9394-478c-8cbf-148f49af2bad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013252798s STEP: Saw pod success Apr 16 00:05:01.087: INFO: Pod "pod-secrets-538d1d58-9394-478c-8cbf-148f49af2bad" satisfied condition "Succeeded or Failed" Apr 16 00:05:01.109: INFO: Trying to get logs from node latest-worker pod pod-secrets-538d1d58-9394-478c-8cbf-148f49af2bad container secret-volume-test: STEP: delete the pod Apr 16 00:05:01.151: INFO: Waiting for pod pod-secrets-538d1d58-9394-478c-8cbf-148f49af2bad to disappear Apr 16 00:05:01.161: INFO: Pod pod-secrets-538d1d58-9394-478c-8cbf-148f49af2bad no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:05:01.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7044" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":85,"skipped":1370,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:05:01.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 
STEP: Creating projection with secret that has name projected-secret-test-66a6d1a1-d22b-473e-8a17-acfffa0b3873 STEP: Creating a pod to test consume secrets Apr 16 00:05:01.247: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8b4a6bd4-f996-4353-ad46-bd9673ddcf9f" in namespace "projected-5565" to be "Succeeded or Failed" Apr 16 00:05:01.251: INFO: Pod "pod-projected-secrets-8b4a6bd4-f996-4353-ad46-bd9673ddcf9f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.927268ms Apr 16 00:05:03.255: INFO: Pod "pod-projected-secrets-8b4a6bd4-f996-4353-ad46-bd9673ddcf9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007739978s Apr 16 00:05:05.259: INFO: Pod "pod-projected-secrets-8b4a6bd4-f996-4353-ad46-bd9673ddcf9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011538229s STEP: Saw pod success Apr 16 00:05:05.259: INFO: Pod "pod-projected-secrets-8b4a6bd4-f996-4353-ad46-bd9673ddcf9f" satisfied condition "Succeeded or Failed" Apr 16 00:05:05.261: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-8b4a6bd4-f996-4353-ad46-bd9673ddcf9f container projected-secret-volume-test: STEP: delete the pod Apr 16 00:05:05.302: INFO: Waiting for pod pod-projected-secrets-8b4a6bd4-f996-4353-ad46-bd9673ddcf9f to disappear Apr 16 00:05:05.305: INFO: Pod pod-projected-secrets-8b4a6bd4-f996-4353-ad46-bd9673ddcf9f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:05:05.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5565" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":86,"skipped":1385,"failed":0} SSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:05:05.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 16 00:05:05.438: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Apr 16 00:05:06.601: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:05:07.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5990" for this suite. 
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":87,"skipped":1391,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
updates the published spec when one version gets renamed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:05:07.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
Apr 16 00:05:07.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:05:25.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6261" for this suite.
• [SLOW TEST:17.679 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
updates the published spec when one version gets renamed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":88,"skipped":1423,"failed":0}
S
------------------------------
[sig-storage] Projected configMap
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:05:25.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with configMap that has name projected-configmap-test-upd-24cb6285-1674-48d7-9ada-649c19ba383d
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-24cb6285-1674-48d7-9ada-649c19ba383d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:05:31.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6545" for this suite.
• [SLOW TEST:6.118 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":89,"skipped":1424,"failed":0}
[sig-storage] EmptyDir volumes
volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:05:31.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on tmpfs
Apr 16 00:05:31.505: INFO: Waiting up to 5m0s for pod "pod-bb4e08bd-18bb-45e7-8a0e-3433c025fc7d" in namespace "emptydir-5820" to be "Succeeded or Failed"
Apr 16 00:05:31.522: INFO: Pod "pod-bb4e08bd-18bb-45e7-8a0e-3433c025fc7d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.510007ms
Apr 16 00:05:33.525: INFO: Pod "pod-bb4e08bd-18bb-45e7-8a0e-3433c025fc7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019983387s
Apr 16 00:05:35.529: INFO: Pod "pod-bb4e08bd-18bb-45e7-8a0e-3433c025fc7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023710817s
STEP: Saw pod success
Apr 16 00:05:35.529: INFO: Pod "pod-bb4e08bd-18bb-45e7-8a0e-3433c025fc7d" satisfied condition "Succeeded or Failed"
Apr 16 00:05:35.532: INFO: Trying to get logs from node latest-worker2 pod pod-bb4e08bd-18bb-45e7-8a0e-3433c025fc7d container test-container:
STEP: delete the pod
Apr 16 00:05:35.552: INFO: Waiting for pod pod-bb4e08bd-18bb-45e7-8a0e-3433c025fc7d to disappear
Apr 16 00:05:35.556: INFO: Pod pod-bb4e08bd-18bb-45e7-8a0e-3433c025fc7d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:05:35.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5820" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":90,"skipped":1424,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:05:35.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 16 00:05:35.637: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bfb3efde-3038-429f-8757-b36139b44ccd" in namespace "projected-4343" to be "Succeeded or Failed"
Apr 16 00:05:35.659: INFO: Pod "downwardapi-volume-bfb3efde-3038-429f-8757-b36139b44ccd": Phase="Pending", Reason="", readiness=false. Elapsed: 22.188249ms
Apr 16 00:05:37.663: INFO: Pod "downwardapi-volume-bfb3efde-3038-429f-8757-b36139b44ccd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026047953s
Apr 16 00:05:39.668: INFO: Pod "downwardapi-volume-bfb3efde-3038-429f-8757-b36139b44ccd": Phase="Running", Reason="", readiness=true. Elapsed: 4.03049708s
Apr 16 00:05:41.675: INFO: Pod "downwardapi-volume-bfb3efde-3038-429f-8757-b36139b44ccd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.037874782s
STEP: Saw pod success
Apr 16 00:05:41.675: INFO: Pod "downwardapi-volume-bfb3efde-3038-429f-8757-b36139b44ccd" satisfied condition "Succeeded or Failed"
Apr 16 00:05:41.678: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-bfb3efde-3038-429f-8757-b36139b44ccd container client-container:
STEP: delete the pod
Apr 16 00:05:41.705: INFO: Waiting for pod downwardapi-volume-bfb3efde-3038-429f-8757-b36139b44ccd to disappear
Apr 16 00:05:41.718: INFO: Pod downwardapi-volume-bfb3efde-3038-429f-8757-b36139b44ccd no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:05:41.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4343" for this suite.
• [SLOW TEST:6.161 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":91,"skipped":1507,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:05:41.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Apr 16 00:05:46.385: INFO: Successfully updated pod "labelsupdatef307b335-9e01-4381-970d-b090ff8da8a4"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:05:48.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-519" for this suite.
• [SLOW TEST:6.685 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":92,"skipped":1518,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:05:48.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's command
Apr 16 00:05:48.482: INFO: Waiting up to 5m0s for pod "var-expansion-5eb63eaf-4a79-4948-8a93-465bcd2ed683" in namespace "var-expansion-2600" to be "Succeeded or Failed"
Apr 16 00:05:48.541: INFO: Pod "var-expansion-5eb63eaf-4a79-4948-8a93-465bcd2ed683": Phase="Pending", Reason="", readiness=false. Elapsed: 59.023158ms
Apr 16 00:05:50.545: INFO: Pod "var-expansion-5eb63eaf-4a79-4948-8a93-465bcd2ed683": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063048338s
Apr 16 00:05:52.549: INFO: Pod "var-expansion-5eb63eaf-4a79-4948-8a93-465bcd2ed683": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066933395s
STEP: Saw pod success
Apr 16 00:05:52.549: INFO: Pod "var-expansion-5eb63eaf-4a79-4948-8a93-465bcd2ed683" satisfied condition "Succeeded or Failed"
Apr 16 00:05:52.553: INFO: Trying to get logs from node latest-worker pod var-expansion-5eb63eaf-4a79-4948-8a93-465bcd2ed683 container dapi-container:
STEP: delete the pod
Apr 16 00:05:52.582: INFO: Waiting for pod var-expansion-5eb63eaf-4a79-4948-8a93-465bcd2ed683 to disappear
Apr 16 00:05:52.600: INFO: Pod var-expansion-5eb63eaf-4a79-4948-8a93-465bcd2ed683 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:05:52.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2600" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":93,"skipped":1563,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
should include webhook resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:05:52.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 16 00:05:53.151: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 16 00:05:55.494: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722592353, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722592353, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722592353, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722592353, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 16 00:05:58.523: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:05:58.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5905" for this suite.
STEP: Destroying namespace "webhook-5905-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.996 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should include webhook resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":94,"skipped":1578,"failed":0}
S
------------------------------
[sig-api-machinery] Secrets
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:05:58.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating secret secrets-3590/secret-test-ab0b0978-ee23-44a7-bf9b-ba342306cce4
STEP: Creating a pod to test consume secrets
Apr 16 00:05:58.678: INFO: Waiting up to 5m0s for pod "pod-configmaps-82d57438-3185-469c-8a43-8b0fa32cc3b4" in namespace "secrets-3590" to be "Succeeded or Failed"
Apr 16 00:05:58.721: INFO: Pod "pod-configmaps-82d57438-3185-469c-8a43-8b0fa32cc3b4": Phase="Pending", Reason="", readiness=false. Elapsed: 42.443159ms
Apr 16 00:06:00.725: INFO: Pod "pod-configmaps-82d57438-3185-469c-8a43-8b0fa32cc3b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046474138s
Apr 16 00:06:02.729: INFO: Pod "pod-configmaps-82d57438-3185-469c-8a43-8b0fa32cc3b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051078746s
STEP: Saw pod success
Apr 16 00:06:02.730: INFO: Pod "pod-configmaps-82d57438-3185-469c-8a43-8b0fa32cc3b4" satisfied condition "Succeeded or Failed"
Apr 16 00:06:02.732: INFO: Trying to get logs from node latest-worker pod pod-configmaps-82d57438-3185-469c-8a43-8b0fa32cc3b4 container env-test:
STEP: delete the pod
Apr 16 00:06:02.788: INFO: Waiting for pod pod-configmaps-82d57438-3185-469c-8a43-8b0fa32cc3b4 to disappear
Apr 16 00:06:02.792: INFO: Pod pod-configmaps-82d57438-3185-469c-8a43-8b0fa32cc3b4 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:06:02.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3590" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":95,"skipped":1579,"failed":0}
------------------------------
[k8s.io] Pods
should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:06:02.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Apr 16 00:06:02.860: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:06:12.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6863" for this suite.
• [SLOW TEST:9.951 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":96,"skipped":1579,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:06:12.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 16 00:06:12.829: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2ba78f74-bd87-4ff8-9c2a-c886608462cf" in namespace "projected-5524" to be "Succeeded or Failed"
Apr 16 00:06:12.833: INFO: Pod "downwardapi-volume-2ba78f74-bd87-4ff8-9c2a-c886608462cf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.509531ms
Apr 16 00:06:14.836: INFO: Pod "downwardapi-volume-2ba78f74-bd87-4ff8-9c2a-c886608462cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006634548s
Apr 16 00:06:16.840: INFO: Pod "downwardapi-volume-2ba78f74-bd87-4ff8-9c2a-c886608462cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010920496s
STEP: Saw pod success
Apr 16 00:06:16.840: INFO: Pod "downwardapi-volume-2ba78f74-bd87-4ff8-9c2a-c886608462cf" satisfied condition "Succeeded or Failed"
Apr 16 00:06:16.843: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-2ba78f74-bd87-4ff8-9c2a-c886608462cf container client-container:
STEP: delete the pod
Apr 16 00:06:16.864: INFO: Waiting for pod downwardapi-volume-2ba78f74-bd87-4ff8-9c2a-c886608462cf to disappear
Apr 16 00:06:16.881: INFO: Pod downwardapi-volume-2ba78f74-bd87-4ff8-9c2a-c886608462cf no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:06:16.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5524" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":97,"skipped":1607,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:06:16.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Apr 16 00:06:26.311: INFO: 9 pods remaining
Apr 16 00:06:26.311: INFO: 0 pods has nil DeletionTimestamp
Apr 16 00:06:26.311: INFO:
Apr 16 00:06:27.314: INFO: 0 pods remaining
Apr 16 00:06:27.314: INFO: 0 pods has nil DeletionTimestamp
Apr 16 00:06:27.314: INFO:
STEP: Gathering metrics
W0416 00:06:27.750891 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 16 00:06:27.750: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:06:27.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6798" for this suite.
• [SLOW TEST:10.945 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":98,"skipped":1633,"failed":0}
[sig-storage] Downward API volume
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:06:27.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 16 00:06:28.782: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a46e0268-e3d5-40d0-9609-7a06f72ffdf5" in namespace "downward-api-7864" to be "Succeeded or Failed"
Apr 16 00:06:28.816: INFO: Pod "downwardapi-volume-a46e0268-e3d5-40d0-9609-7a06f72ffdf5": Phase="Pending", Reason="", readiness=false.
Elapsed: 33.594801ms Apr 16 00:06:30.822: INFO: Pod "downwardapi-volume-a46e0268-e3d5-40d0-9609-7a06f72ffdf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039408472s Apr 16 00:06:32.826: INFO: Pod "downwardapi-volume-a46e0268-e3d5-40d0-9609-7a06f72ffdf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043400535s STEP: Saw pod success Apr 16 00:06:32.826: INFO: Pod "downwardapi-volume-a46e0268-e3d5-40d0-9609-7a06f72ffdf5" satisfied condition "Succeeded or Failed" Apr 16 00:06:32.829: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-a46e0268-e3d5-40d0-9609-7a06f72ffdf5 container client-container: STEP: delete the pod Apr 16 00:06:32.885: INFO: Waiting for pod downwardapi-volume-a46e0268-e3d5-40d0-9609-7a06f72ffdf5 to disappear Apr 16 00:06:32.900: INFO: Pod downwardapi-volume-a46e0268-e3d5-40d0-9609-7a06f72ffdf5 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:06:32.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7864" for this suite. 
• [SLOW TEST:5.078 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":99,"skipped":1633,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:06:32.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-4ba77f0a-5f9b-45d9-b40c-32c51fe4efaf STEP: Creating a pod to test consume configMaps Apr 16 00:06:32.973: INFO: Waiting up to 5m0s for pod "pod-configmaps-1a329898-db5f-4251-9f5b-d3d17276a429" in namespace "configmap-9015" to be "Succeeded or Failed" Apr 16 00:06:33.007: INFO: Pod "pod-configmaps-1a329898-db5f-4251-9f5b-d3d17276a429": Phase="Pending", Reason="", readiness=false. 
Elapsed: 33.581826ms Apr 16 00:06:35.021: INFO: Pod "pod-configmaps-1a329898-db5f-4251-9f5b-d3d17276a429": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047108121s Apr 16 00:06:37.025: INFO: Pod "pod-configmaps-1a329898-db5f-4251-9f5b-d3d17276a429": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051634495s STEP: Saw pod success Apr 16 00:06:37.025: INFO: Pod "pod-configmaps-1a329898-db5f-4251-9f5b-d3d17276a429" satisfied condition "Succeeded or Failed" Apr 16 00:06:37.028: INFO: Trying to get logs from node latest-worker pod pod-configmaps-1a329898-db5f-4251-9f5b-d3d17276a429 container configmap-volume-test: STEP: delete the pod Apr 16 00:06:37.056: INFO: Waiting for pod pod-configmaps-1a329898-db5f-4251-9f5b-d3d17276a429 to disappear Apr 16 00:06:37.069: INFO: Pod pod-configmaps-1a329898-db5f-4251-9f5b-d3d17276a429 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:06:37.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9015" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":100,"skipped":1637,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:06:37.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Apr 16 00:06:41.171: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-3399 PodName:pod-sharedvolume-8f9140bf-2ffb-49f6-ac5d-1453e3f41830 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 16 00:06:41.171: INFO: >>> kubeConfig: /root/.kube/config I0416 00:06:41.205591 7 log.go:172] (0xc002b36840) (0xc00239abe0) Create stream I0416 00:06:41.205623 7 log.go:172] (0xc002b36840) (0xc00239abe0) Stream added, broadcasting: 1 I0416 00:06:41.207847 7 log.go:172] (0xc002b36840) Reply frame received for 1 I0416 00:06:41.207898 7 log.go:172] (0xc002b36840) (0xc002ad5ea0) Create stream I0416 00:06:41.207913 7 log.go:172] (0xc002b36840) (0xc002ad5ea0) Stream added, broadcasting: 3 I0416 00:06:41.209292 7 log.go:172] (0xc002b36840) Reply frame received for 3 I0416 
00:06:41.209325 7 log.go:172] (0xc002b36840) (0xc0023d2b40) Create stream I0416 00:06:41.209339 7 log.go:172] (0xc002b36840) (0xc0023d2b40) Stream added, broadcasting: 5 I0416 00:06:41.210489 7 log.go:172] (0xc002b36840) Reply frame received for 5 I0416 00:06:41.287469 7 log.go:172] (0xc002b36840) Data frame received for 3 I0416 00:06:41.287503 7 log.go:172] (0xc002ad5ea0) (3) Data frame handling I0416 00:06:41.287514 7 log.go:172] (0xc002ad5ea0) (3) Data frame sent I0416 00:06:41.287520 7 log.go:172] (0xc002b36840) Data frame received for 3 I0416 00:06:41.287530 7 log.go:172] (0xc002ad5ea0) (3) Data frame handling I0416 00:06:41.287549 7 log.go:172] (0xc002b36840) Data frame received for 5 I0416 00:06:41.287556 7 log.go:172] (0xc0023d2b40) (5) Data frame handling I0416 00:06:41.289399 7 log.go:172] (0xc002b36840) Data frame received for 1 I0416 00:06:41.289414 7 log.go:172] (0xc00239abe0) (1) Data frame handling I0416 00:06:41.289424 7 log.go:172] (0xc00239abe0) (1) Data frame sent I0416 00:06:41.289434 7 log.go:172] (0xc002b36840) (0xc00239abe0) Stream removed, broadcasting: 1 I0416 00:06:41.289500 7 log.go:172] (0xc002b36840) Go away received I0416 00:06:41.289540 7 log.go:172] (0xc002b36840) (0xc00239abe0) Stream removed, broadcasting: 1 I0416 00:06:41.289572 7 log.go:172] (0xc002b36840) (0xc002ad5ea0) Stream removed, broadcasting: 3 I0416 00:06:41.289585 7 log.go:172] (0xc002b36840) (0xc0023d2b40) Stream removed, broadcasting: 5 Apr 16 00:06:41.289: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:06:41.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3399" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":101,"skipped":1638,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:06:41.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: getting the auto-created API token Apr 16 00:06:41.871: INFO: created pod pod-service-account-defaultsa Apr 16 00:06:41.871: INFO: pod pod-service-account-defaultsa service account token volume mount: true Apr 16 00:06:41.878: INFO: created pod pod-service-account-mountsa Apr 16 00:06:41.878: INFO: pod pod-service-account-mountsa service account token volume mount: true Apr 16 00:06:41.884: INFO: created pod pod-service-account-nomountsa Apr 16 00:06:41.884: INFO: pod pod-service-account-nomountsa service account token volume mount: false Apr 16 00:06:41.944: INFO: created pod pod-service-account-defaultsa-mountspec Apr 16 00:06:41.944: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Apr 16 00:06:41.962: INFO: created pod pod-service-account-mountsa-mountspec Apr 16 00:06:41.962: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Apr 16 00:06:41.990: INFO: created pod pod-service-account-nomountsa-mountspec Apr 16 00:06:41.990: INFO: pod 
pod-service-account-nomountsa-mountspec service account token volume mount: true Apr 16 00:06:42.002: INFO: created pod pod-service-account-defaultsa-nomountspec Apr 16 00:06:42.002: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Apr 16 00:06:42.023: INFO: created pod pod-service-account-mountsa-nomountspec Apr 16 00:06:42.023: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Apr 16 00:06:42.082: INFO: created pod pod-service-account-nomountsa-nomountspec Apr 16 00:06:42.082: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:06:42.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6752" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":275,"completed":102,"skipped":1650,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:06:42.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-6794 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-6794 STEP: creating replication controller externalsvc in namespace services-6794 I0416 00:06:42.327052 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-6794, replica count: 2 I0416 00:06:45.377535 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0416 00:06:48.377741 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0416 00:06:51.377951 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0416 00:06:54.378164 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Apr 16 00:06:54.782: INFO: Creating new exec pod Apr 16 00:06:58.978: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-6794 execpod97762 -- /bin/sh -x -c nslookup clusterip-service' Apr 16 00:06:59.204: INFO: stderr: "I0416 00:06:59.101804 1532 log.go:172] (0xc0003c78c0) (0xc00092e0a0) Create stream\nI0416 00:06:59.101869 1532 log.go:172] (0xc0003c78c0) (0xc00092e0a0) Stream added, broadcasting: 1\nI0416 00:06:59.103899 1532 log.go:172] (0xc0003c78c0) Reply frame received for 1\nI0416 00:06:59.103935 1532 log.go:172] (0xc0003c78c0) (0xc000641220) Create stream\nI0416 00:06:59.103942 1532 
log.go:172] (0xc0003c78c0) (0xc000641220) Stream added, broadcasting: 3\nI0416 00:06:59.104905 1532 log.go:172] (0xc0003c78c0) Reply frame received for 3\nI0416 00:06:59.104954 1532 log.go:172] (0xc0003c78c0) (0xc0005815e0) Create stream\nI0416 00:06:59.104975 1532 log.go:172] (0xc0003c78c0) (0xc0005815e0) Stream added, broadcasting: 5\nI0416 00:06:59.105875 1532 log.go:172] (0xc0003c78c0) Reply frame received for 5\nI0416 00:06:59.189750 1532 log.go:172] (0xc0003c78c0) Data frame received for 5\nI0416 00:06:59.189780 1532 log.go:172] (0xc0005815e0) (5) Data frame handling\nI0416 00:06:59.189800 1532 log.go:172] (0xc0005815e0) (5) Data frame sent\n+ nslookup clusterip-service\nI0416 00:06:59.195235 1532 log.go:172] (0xc0003c78c0) Data frame received for 3\nI0416 00:06:59.195273 1532 log.go:172] (0xc000641220) (3) Data frame handling\nI0416 00:06:59.195302 1532 log.go:172] (0xc000641220) (3) Data frame sent\nI0416 00:06:59.196595 1532 log.go:172] (0xc0003c78c0) Data frame received for 3\nI0416 00:06:59.196611 1532 log.go:172] (0xc000641220) (3) Data frame handling\nI0416 00:06:59.196621 1532 log.go:172] (0xc000641220) (3) Data frame sent\nI0416 00:06:59.197663 1532 log.go:172] (0xc0003c78c0) Data frame received for 3\nI0416 00:06:59.197685 1532 log.go:172] (0xc000641220) (3) Data frame handling\nI0416 00:06:59.197806 1532 log.go:172] (0xc0003c78c0) Data frame received for 5\nI0416 00:06:59.197823 1532 log.go:172] (0xc0005815e0) (5) Data frame handling\nI0416 00:06:59.199662 1532 log.go:172] (0xc0003c78c0) Data frame received for 1\nI0416 00:06:59.199700 1532 log.go:172] (0xc00092e0a0) (1) Data frame handling\nI0416 00:06:59.199738 1532 log.go:172] (0xc00092e0a0) (1) Data frame sent\nI0416 00:06:59.199771 1532 log.go:172] (0xc0003c78c0) (0xc00092e0a0) Stream removed, broadcasting: 1\nI0416 00:06:59.199819 1532 log.go:172] (0xc0003c78c0) Go away received\nI0416 00:06:59.200236 1532 log.go:172] (0xc0003c78c0) (0xc00092e0a0) Stream removed, broadcasting: 1\nI0416 
00:06:59.200260 1532 log.go:172] (0xc0003c78c0) (0xc000641220) Stream removed, broadcasting: 3\nI0416 00:06:59.200277 1532 log.go:172] (0xc0003c78c0) (0xc0005815e0) Stream removed, broadcasting: 5\n" Apr 16 00:06:59.204: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-6794.svc.cluster.local\tcanonical name = externalsvc.services-6794.svc.cluster.local.\nName:\texternalsvc.services-6794.svc.cluster.local\nAddress: 10.96.128.59\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6794, will wait for the garbage collector to delete the pods Apr 16 00:06:59.264: INFO: Deleting ReplicationController externalsvc took: 6.489224ms Apr 16 00:06:59.565: INFO: Terminating ReplicationController externalsvc pods took: 300.388996ms Apr 16 00:07:13.112: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:07:13.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6794" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:30.988 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":103,"skipped":1730,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:07:13.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 16 00:07:13.240: INFO: Waiting up to 5m0s for pod "downward-api-d639cf31-b5b7-45a7-b9f4-ea46be3fe5e4" in namespace "downward-api-7899" to be "Succeeded or Failed" Apr 16 00:07:13.255: INFO: Pod "downward-api-d639cf31-b5b7-45a7-b9f4-ea46be3fe5e4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.962879ms Apr 16 00:07:15.259: INFO: Pod "downward-api-d639cf31-b5b7-45a7-b9f4-ea46be3fe5e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018888437s Apr 16 00:07:17.263: INFO: Pod "downward-api-d639cf31-b5b7-45a7-b9f4-ea46be3fe5e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022932983s STEP: Saw pod success Apr 16 00:07:17.263: INFO: Pod "downward-api-d639cf31-b5b7-45a7-b9f4-ea46be3fe5e4" satisfied condition "Succeeded or Failed" Apr 16 00:07:17.266: INFO: Trying to get logs from node latest-worker pod downward-api-d639cf31-b5b7-45a7-b9f4-ea46be3fe5e4 container dapi-container: STEP: delete the pod Apr 16 00:07:17.294: INFO: Waiting for pod downward-api-d639cf31-b5b7-45a7-b9f4-ea46be3fe5e4 to disappear Apr 16 00:07:17.304: INFO: Pod downward-api-d639cf31-b5b7-45a7-b9f4-ea46be3fe5e4 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:07:17.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7899" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":104,"skipped":1744,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:07:17.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-6387 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-6387 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6387 Apr 16 00:07:17.441: INFO: Found 0 stateful pods, waiting for 1 Apr 16 00:07:27.446: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Apr 16 00:07:27.449: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6387 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 16 00:07:27.675: INFO: stderr: "I0416 00:07:27.571600 1552 log.go:172] (0xc000596000) (0xc000813220) Create stream\nI0416 00:07:27.571664 1552 log.go:172] (0xc000596000) (0xc000813220) Stream added, broadcasting: 1\nI0416 00:07:27.573473 1552 log.go:172] (0xc000596000) Reply frame received for 1\nI0416 00:07:27.573523 1552 log.go:172] (0xc000596000) (0xc000948000) Create stream\nI0416 00:07:27.573539 1552 log.go:172] (0xc000596000) (0xc000948000) Stream added, broadcasting: 3\nI0416 00:07:27.574537 1552 log.go:172] (0xc000596000) Reply frame received for 3\nI0416 00:07:27.574589 1552 log.go:172] (0xc000596000) (0xc000978000) Create stream\nI0416 00:07:27.574612 1552 log.go:172] (0xc000596000) (0xc000978000) Stream added, broadcasting: 5\nI0416 00:07:27.575549 1552 log.go:172] (0xc000596000) Reply frame received for 5\nI0416 00:07:27.643739 1552 log.go:172] (0xc000596000) Data frame received for 5\nI0416 00:07:27.643772 1552 log.go:172] (0xc000978000) (5) Data frame handling\nI0416 00:07:27.643796 1552 log.go:172] (0xc000978000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0416 00:07:27.668503 1552 log.go:172] (0xc000596000) Data frame received for 3\nI0416 00:07:27.668535 1552 log.go:172] (0xc000948000) (3) Data frame handling\nI0416 00:07:27.668551 1552 log.go:172] (0xc000948000) (3) Data frame sent\nI0416 00:07:27.668562 1552 log.go:172] (0xc000596000) Data frame received for 3\nI0416 00:07:27.668572 1552 log.go:172] (0xc000948000) (3) Data frame handling\nI0416 00:07:27.668866 1552 log.go:172] (0xc000596000) Data frame received for 5\nI0416 00:07:27.668893 1552 log.go:172] (0xc000978000) (5) Data frame handling\nI0416 00:07:27.670853 1552 log.go:172] (0xc000596000) Data frame received for 1\nI0416 00:07:27.670886 1552 log.go:172] (0xc000813220) (1) 
Data frame handling\nI0416 00:07:27.670907 1552 log.go:172] (0xc000813220) (1) Data frame sent\nI0416 00:07:27.670938 1552 log.go:172] (0xc000596000) (0xc000813220) Stream removed, broadcasting: 1\nI0416 00:07:27.670958 1552 log.go:172] (0xc000596000) Go away received\nI0416 00:07:27.671200 1552 log.go:172] (0xc000596000) (0xc000813220) Stream removed, broadcasting: 1\nI0416 00:07:27.671214 1552 log.go:172] (0xc000596000) (0xc000948000) Stream removed, broadcasting: 3\nI0416 00:07:27.671220 1552 log.go:172] (0xc000596000) (0xc000978000) Stream removed, broadcasting: 5\n" Apr 16 00:07:27.675: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 16 00:07:27.675: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 16 00:07:27.678: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 16 00:07:37.683: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 16 00:07:37.683: INFO: Waiting for statefulset status.replicas updated to 0 Apr 16 00:07:37.699: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999455s Apr 16 00:07:38.703: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.994579971s Apr 16 00:07:39.709: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.989778191s Apr 16 00:07:40.714: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.984701831s Apr 16 00:07:41.718: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.979472637s Apr 16 00:07:42.729: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.97493269s Apr 16 00:07:43.734: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.964052196s Apr 16 00:07:44.738: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.95957068s Apr 16 00:07:45.743: INFO: Verifying statefulset ss doesn't scale past 1 
for another 1.955355171s Apr 16 00:07:46.748: INFO: Verifying statefulset ss doesn't scale past 1 for another 950.698074ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6387 Apr 16 00:07:47.752: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6387 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 16 00:07:47.949: INFO: stderr: "I0416 00:07:47.866626 1574 log.go:172] (0xc00003a160) (0xc0007e2140) Create stream\nI0416 00:07:47.866694 1574 log.go:172] (0xc00003a160) (0xc0007e2140) Stream added, broadcasting: 1\nI0416 00:07:47.869422 1574 log.go:172] (0xc00003a160) Reply frame received for 1\nI0416 00:07:47.869464 1574 log.go:172] (0xc00003a160) (0xc000832000) Create stream\nI0416 00:07:47.869480 1574 log.go:172] (0xc00003a160) (0xc000832000) Stream added, broadcasting: 3\nI0416 00:07:47.870581 1574 log.go:172] (0xc00003a160) Reply frame received for 3\nI0416 00:07:47.870622 1574 log.go:172] (0xc00003a160) (0xc0007e21e0) Create stream\nI0416 00:07:47.870638 1574 log.go:172] (0xc00003a160) (0xc0007e21e0) Stream added, broadcasting: 5\nI0416 00:07:47.871683 1574 log.go:172] (0xc00003a160) Reply frame received for 5\nI0416 00:07:47.942535 1574 log.go:172] (0xc00003a160) Data frame received for 5\nI0416 00:07:47.942572 1574 log.go:172] (0xc0007e21e0) (5) Data frame handling\nI0416 00:07:47.942588 1574 log.go:172] (0xc0007e21e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0416 00:07:47.942618 1574 log.go:172] (0xc00003a160) Data frame received for 3\nI0416 00:07:47.942668 1574 log.go:172] (0xc000832000) (3) Data frame handling\nI0416 00:07:47.942687 1574 log.go:172] (0xc000832000) (3) Data frame sent\nI0416 00:07:47.942701 1574 log.go:172] (0xc00003a160) Data frame received for 3\nI0416 00:07:47.942713 1574 log.go:172] (0xc000832000) (3) Data frame 
handling\nI0416 00:07:47.942737 1574 log.go:172] (0xc00003a160) Data frame received for 5\nI0416 00:07:47.942755 1574 log.go:172] (0xc0007e21e0) (5) Data frame handling\nI0416 00:07:47.944184 1574 log.go:172] (0xc00003a160) Data frame received for 1\nI0416 00:07:47.944209 1574 log.go:172] (0xc0007e2140) (1) Data frame handling\nI0416 00:07:47.944221 1574 log.go:172] (0xc0007e2140) (1) Data frame sent\nI0416 00:07:47.944246 1574 log.go:172] (0xc00003a160) (0xc0007e2140) Stream removed, broadcasting: 1\nI0416 00:07:47.944293 1574 log.go:172] (0xc00003a160) Go away received\nI0416 00:07:47.944670 1574 log.go:172] (0xc00003a160) (0xc0007e2140) Stream removed, broadcasting: 1\nI0416 00:07:47.944691 1574 log.go:172] (0xc00003a160) (0xc000832000) Stream removed, broadcasting: 3\nI0416 00:07:47.944701 1574 log.go:172] (0xc00003a160) (0xc0007e21e0) Stream removed, broadcasting: 5\n" Apr 16 00:07:47.949: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 16 00:07:47.949: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 16 00:07:47.952: INFO: Found 1 stateful pods, waiting for 3 Apr 16 00:07:57.957: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 16 00:07:57.957: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 16 00:07:57.957: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Apr 16 00:07:57.963: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6387 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 16 00:07:58.179: INFO: stderr: "I0416 00:07:58.106124 1594 log.go:172] (0xc0004b3970) 
(0xc00097c000) Create stream\nI0416 00:07:58.106192 1594 log.go:172] (0xc0004b3970) (0xc00097c000) Stream added, broadcasting: 1\nI0416 00:07:58.109813 1594 log.go:172] (0xc0004b3970) Reply frame received for 1\nI0416 00:07:58.109865 1594 log.go:172] (0xc0004b3970) (0xc000a60000) Create stream\nI0416 00:07:58.109886 1594 log.go:172] (0xc0004b3970) (0xc000a60000) Stream added, broadcasting: 3\nI0416 00:07:58.110906 1594 log.go:172] (0xc0004b3970) Reply frame received for 3\nI0416 00:07:58.110945 1594 log.go:172] (0xc0004b3970) (0xc000a600a0) Create stream\nI0416 00:07:58.110959 1594 log.go:172] (0xc0004b3970) (0xc000a600a0) Stream added, broadcasting: 5\nI0416 00:07:58.111876 1594 log.go:172] (0xc0004b3970) Reply frame received for 5\nI0416 00:07:58.172741 1594 log.go:172] (0xc0004b3970) Data frame received for 3\nI0416 00:07:58.172793 1594 log.go:172] (0xc000a60000) (3) Data frame handling\nI0416 00:07:58.172825 1594 log.go:172] (0xc000a60000) (3) Data frame sent\nI0416 00:07:58.172839 1594 log.go:172] (0xc0004b3970) Data frame received for 3\nI0416 00:07:58.172848 1594 log.go:172] (0xc000a60000) (3) Data frame handling\nI0416 00:07:58.172900 1594 log.go:172] (0xc0004b3970) Data frame received for 5\nI0416 00:07:58.172953 1594 log.go:172] (0xc000a600a0) (5) Data frame handling\nI0416 00:07:58.172983 1594 log.go:172] (0xc000a600a0) (5) Data frame sent\nI0416 00:07:58.173008 1594 log.go:172] (0xc0004b3970) Data frame received for 5\nI0416 00:07:58.173025 1594 log.go:172] (0xc000a600a0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0416 00:07:58.174716 1594 log.go:172] (0xc0004b3970) Data frame received for 1\nI0416 00:07:58.174740 1594 log.go:172] (0xc00097c000) (1) Data frame handling\nI0416 00:07:58.174754 1594 log.go:172] (0xc00097c000) (1) Data frame sent\nI0416 00:07:58.174763 1594 log.go:172] (0xc0004b3970) (0xc00097c000) Stream removed, broadcasting: 1\nI0416 00:07:58.174926 1594 log.go:172] (0xc0004b3970) Go away 
received\nI0416 00:07:58.175020 1594 log.go:172] (0xc0004b3970) (0xc00097c000) Stream removed, broadcasting: 1\nI0416 00:07:58.175035 1594 log.go:172] (0xc0004b3970) (0xc000a60000) Stream removed, broadcasting: 3\nI0416 00:07:58.175041 1594 log.go:172] (0xc0004b3970) (0xc000a600a0) Stream removed, broadcasting: 5\n" Apr 16 00:07:58.179: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 16 00:07:58.179: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 16 00:07:58.179: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6387 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 16 00:07:58.413: INFO: stderr: "I0416 00:07:58.308899 1618 log.go:172] (0xc0000e8370) (0xc000930000) Create stream\nI0416 00:07:58.308968 1618 log.go:172] (0xc0000e8370) (0xc000930000) Stream added, broadcasting: 1\nI0416 00:07:58.311203 1618 log.go:172] (0xc0000e8370) Reply frame received for 1\nI0416 00:07:58.311245 1618 log.go:172] (0xc0000e8370) (0xc0009300a0) Create stream\nI0416 00:07:58.311274 1618 log.go:172] (0xc0000e8370) (0xc0009300a0) Stream added, broadcasting: 3\nI0416 00:07:58.312373 1618 log.go:172] (0xc0000e8370) Reply frame received for 3\nI0416 00:07:58.312414 1618 log.go:172] (0xc0000e8370) (0xc000855220) Create stream\nI0416 00:07:58.312424 1618 log.go:172] (0xc0000e8370) (0xc000855220) Stream added, broadcasting: 5\nI0416 00:07:58.313862 1618 log.go:172] (0xc0000e8370) Reply frame received for 5\nI0416 00:07:58.373674 1618 log.go:172] (0xc0000e8370) Data frame received for 5\nI0416 00:07:58.373701 1618 log.go:172] (0xc000855220) (5) Data frame handling\nI0416 00:07:58.373720 1618 log.go:172] (0xc000855220) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0416 00:07:58.406036 1618 log.go:172] (0xc0000e8370) 
Data frame received for 3\nI0416 00:07:58.406082 1618 log.go:172] (0xc0009300a0) (3) Data frame handling\nI0416 00:07:58.406098 1618 log.go:172] (0xc0009300a0) (3) Data frame sent\nI0416 00:07:58.406134 1618 log.go:172] (0xc0000e8370) Data frame received for 5\nI0416 00:07:58.406147 1618 log.go:172] (0xc000855220) (5) Data frame handling\nI0416 00:07:58.406532 1618 log.go:172] (0xc0000e8370) Data frame received for 3\nI0416 00:07:58.406552 1618 log.go:172] (0xc0009300a0) (3) Data frame handling\nI0416 00:07:58.407991 1618 log.go:172] (0xc0000e8370) Data frame received for 1\nI0416 00:07:58.408073 1618 log.go:172] (0xc000930000) (1) Data frame handling\nI0416 00:07:58.408139 1618 log.go:172] (0xc000930000) (1) Data frame sent\nI0416 00:07:58.408220 1618 log.go:172] (0xc0000e8370) (0xc000930000) Stream removed, broadcasting: 1\nI0416 00:07:58.408307 1618 log.go:172] (0xc0000e8370) Go away received\nI0416 00:07:58.408695 1618 log.go:172] (0xc0000e8370) (0xc000930000) Stream removed, broadcasting: 1\nI0416 00:07:58.408726 1618 log.go:172] (0xc0000e8370) (0xc0009300a0) Stream removed, broadcasting: 3\nI0416 00:07:58.408741 1618 log.go:172] (0xc0000e8370) (0xc000855220) Stream removed, broadcasting: 5\n" Apr 16 00:07:58.413: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 16 00:07:58.413: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 16 00:07:58.413: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6387 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 16 00:07:58.663: INFO: stderr: "I0416 00:07:58.554586 1641 log.go:172] (0xc0009c6160) (0xc000368b40) Create stream\nI0416 00:07:58.554654 1641 log.go:172] (0xc0009c6160) (0xc000368b40) Stream added, broadcasting: 1\nI0416 00:07:58.557277 1641 log.go:172] (0xc0009c6160) 
Reply frame received for 1\nI0416 00:07:58.557319 1641 log.go:172] (0xc0009c6160) (0xc0009d2000) Create stream\nI0416 00:07:58.557332 1641 log.go:172] (0xc0009c6160) (0xc0009d2000) Stream added, broadcasting: 3\nI0416 00:07:58.558412 1641 log.go:172] (0xc0009c6160) Reply frame received for 3\nI0416 00:07:58.558442 1641 log.go:172] (0xc0009c6160) (0xc0006bd2c0) Create stream\nI0416 00:07:58.558451 1641 log.go:172] (0xc0009c6160) (0xc0006bd2c0) Stream added, broadcasting: 5\nI0416 00:07:58.559381 1641 log.go:172] (0xc0009c6160) Reply frame received for 5\nI0416 00:07:58.625630 1641 log.go:172] (0xc0009c6160) Data frame received for 5\nI0416 00:07:58.625669 1641 log.go:172] (0xc0006bd2c0) (5) Data frame handling\nI0416 00:07:58.625699 1641 log.go:172] (0xc0006bd2c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0416 00:07:58.655786 1641 log.go:172] (0xc0009c6160) Data frame received for 3\nI0416 00:07:58.655822 1641 log.go:172] (0xc0009d2000) (3) Data frame handling\nI0416 00:07:58.655855 1641 log.go:172] (0xc0009d2000) (3) Data frame sent\nI0416 00:07:58.656087 1641 log.go:172] (0xc0009c6160) Data frame received for 5\nI0416 00:07:58.656129 1641 log.go:172] (0xc0006bd2c0) (5) Data frame handling\nI0416 00:07:58.656172 1641 log.go:172] (0xc0009c6160) Data frame received for 3\nI0416 00:07:58.656213 1641 log.go:172] (0xc0009d2000) (3) Data frame handling\nI0416 00:07:58.658551 1641 log.go:172] (0xc0009c6160) Data frame received for 1\nI0416 00:07:58.658592 1641 log.go:172] (0xc000368b40) (1) Data frame handling\nI0416 00:07:58.658623 1641 log.go:172] (0xc000368b40) (1) Data frame sent\nI0416 00:07:58.658656 1641 log.go:172] (0xc0009c6160) (0xc000368b40) Stream removed, broadcasting: 1\nI0416 00:07:58.658702 1641 log.go:172] (0xc0009c6160) Go away received\nI0416 00:07:58.659121 1641 log.go:172] (0xc0009c6160) (0xc000368b40) Stream removed, broadcasting: 1\nI0416 00:07:58.659147 1641 log.go:172] (0xc0009c6160) (0xc0009d2000) Stream removed, 
broadcasting: 3\nI0416 00:07:58.659159 1641 log.go:172] (0xc0009c6160) (0xc0006bd2c0) Stream removed, broadcasting: 5\n" Apr 16 00:07:58.663: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 16 00:07:58.663: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 16 00:07:58.663: INFO: Waiting for statefulset status.replicas updated to 0 Apr 16 00:07:58.667: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Apr 16 00:08:08.675: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 16 00:08:08.675: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 16 00:08:08.675: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 16 00:08:08.699: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999506s Apr 16 00:08:09.704: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.983516482s Apr 16 00:08:10.708: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.978460959s Apr 16 00:08:11.713: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.973757483s Apr 16 00:08:12.718: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.968893299s Apr 16 00:08:13.723: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.963758396s Apr 16 00:08:14.728: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.95890445s Apr 16 00:08:15.733: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.953981248s Apr 16 00:08:16.737: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.94854634s Apr 16 00:08:17.742: INFO: Verifying statefulset ss doesn't scale past 3 for another 944.520921ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-6387 Apr 16
00:08:18.748: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6387 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 16 00:08:18.980: INFO: stderr: "I0416 00:08:18.887203 1664 log.go:172] (0xc000aa4000) (0xc0004be280) Create stream\nI0416 00:08:18.887272 1664 log.go:172] (0xc000aa4000) (0xc0004be280) Stream added, broadcasting: 1\nI0416 00:08:18.890111 1664 log.go:172] (0xc000aa4000) Reply frame received for 1\nI0416 00:08:18.890148 1664 log.go:172] (0xc000aa4000) (0xc0004be320) Create stream\nI0416 00:08:18.890160 1664 log.go:172] (0xc000aa4000) (0xc0004be320) Stream added, broadcasting: 3\nI0416 00:08:18.890951 1664 log.go:172] (0xc000aa4000) Reply frame received for 3\nI0416 00:08:18.890990 1664 log.go:172] (0xc000aa4000) (0xc000916b40) Create stream\nI0416 00:08:18.891008 1664 log.go:172] (0xc000aa4000) (0xc000916b40) Stream added, broadcasting: 5\nI0416 00:08:18.891869 1664 log.go:172] (0xc000aa4000) Reply frame received for 5\nI0416 00:08:18.974207 1664 log.go:172] (0xc000aa4000) Data frame received for 3\nI0416 00:08:18.974253 1664 log.go:172] (0xc0004be320) (3) Data frame handling\nI0416 00:08:18.974278 1664 log.go:172] (0xc0004be320) (3) Data frame sent\nI0416 00:08:18.974292 1664 log.go:172] (0xc000aa4000) Data frame received for 3\nI0416 00:08:18.974306 1664 log.go:172] (0xc0004be320) (3) Data frame handling\nI0416 00:08:18.974352 1664 log.go:172] (0xc000aa4000) Data frame received for 5\nI0416 00:08:18.974385 1664 log.go:172] (0xc000916b40) (5) Data frame handling\nI0416 00:08:18.974413 1664 log.go:172] (0xc000916b40) (5) Data frame sent\nI0416 00:08:18.974439 1664 log.go:172] (0xc000aa4000) Data frame received for 5\nI0416 00:08:18.974462 1664 log.go:172] (0xc000916b40) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0416 00:08:18.975750 1664 log.go:172] (0xc000aa4000) Data frame received for 1\nI0416 
00:08:18.975766 1664 log.go:172] (0xc0004be280) (1) Data frame handling\nI0416 00:08:18.975788 1664 log.go:172] (0xc0004be280) (1) Data frame sent\nI0416 00:08:18.975817 1664 log.go:172] (0xc000aa4000) (0xc0004be280) Stream removed, broadcasting: 1\nI0416 00:08:18.975978 1664 log.go:172] (0xc000aa4000) Go away received\nI0416 00:08:18.976146 1664 log.go:172] (0xc000aa4000) (0xc0004be280) Stream removed, broadcasting: 1\nI0416 00:08:18.976177 1664 log.go:172] (0xc000aa4000) (0xc0004be320) Stream removed, broadcasting: 3\nI0416 00:08:18.976204 1664 log.go:172] (0xc000aa4000) (0xc000916b40) Stream removed, broadcasting: 5\n" Apr 16 00:08:18.980: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 16 00:08:18.980: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 16 00:08:18.980: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 16 00:08:19.177: INFO: stderr: "I0416 00:08:19.094853 1684 log.go:172] (0xc0009dc000) (0xc0009aa000) Create stream\nI0416 00:08:19.094955 1684 log.go:172] (0xc0009dc000) (0xc0009aa000) Stream added, broadcasting: 1\nI0416 00:08:19.099654 1684 log.go:172] (0xc0009dc000) Reply frame received for 1\nI0416 00:08:19.099716 1684 log.go:172] (0xc0009dc000) (0xc00097c000) Create stream\nI0416 00:08:19.099773 1684 log.go:172] (0xc0009dc000) (0xc00097c000) Stream added, broadcasting: 3\nI0416 00:08:19.101950 1684 log.go:172] (0xc0009dc000) Reply frame received for 3\nI0416 00:08:19.102001 1684 log.go:172] (0xc0009dc000) (0xc0009aa0a0) Create stream\nI0416 00:08:19.102021 1684 log.go:172] (0xc0009dc000) (0xc0009aa0a0) Stream added, broadcasting: 5\nI0416 00:08:19.103218 1684 log.go:172] (0xc0009dc000) Reply frame received for 5\nI0416 00:08:19.166957 1684 
log.go:172] (0xc0009dc000) Data frame received for 3\nI0416 00:08:19.166990 1684 log.go:172] (0xc00097c000) (3) Data frame handling\nI0416 00:08:19.167012 1684 log.go:172] (0xc00097c000) (3) Data frame sent\nI0416 00:08:19.167095 1684 log.go:172] (0xc0009dc000) Data frame received for 3\nI0416 00:08:19.167125 1684 log.go:172] (0xc00097c000) (3) Data frame handling\nI0416 00:08:19.169271 1684 log.go:172] (0xc0009dc000) Data frame received for 5\nI0416 00:08:19.169300 1684 log.go:172] (0xc0009aa0a0) (5) Data frame handling\nI0416 00:08:19.169334 1684 log.go:172] (0xc0009aa0a0) (5) Data frame sent\nI0416 00:08:19.169344 1684 log.go:172] (0xc0009dc000) Data frame received for 5\nI0416 00:08:19.169351 1684 log.go:172] (0xc0009aa0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0416 00:08:19.172768 1684 log.go:172] (0xc0009dc000) Data frame received for 1\nI0416 00:08:19.172792 1684 log.go:172] (0xc0009aa000) (1) Data frame handling\nI0416 00:08:19.172807 1684 log.go:172] (0xc0009aa000) (1) Data frame sent\nI0416 00:08:19.172822 1684 log.go:172] (0xc0009dc000) (0xc0009aa000) Stream removed, broadcasting: 1\nI0416 00:08:19.172897 1684 log.go:172] (0xc0009dc000) Go away received\nI0416 00:08:19.173164 1684 log.go:172] (0xc0009dc000) (0xc0009aa000) Stream removed, broadcasting: 1\nI0416 00:08:19.173185 1684 log.go:172] (0xc0009dc000) (0xc00097c000) Stream removed, broadcasting: 3\nI0416 00:08:19.173196 1684 log.go:172] (0xc0009dc000) (0xc0009aa0a0) Stream removed, broadcasting: 5\n" Apr 16 00:08:19.177: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 16 00:08:19.177: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 16 00:08:19.177: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6387 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' Apr 16 00:08:19.371: INFO: stderr: "I0416 00:08:19.299291 1706 log.go:172] (0xc00003a840) (0xc00056ab40) Create stream\nI0416 00:08:19.299363 1706 log.go:172] (0xc00003a840) (0xc00056ab40) Stream added, broadcasting: 1\nI0416 00:08:19.302741 1706 log.go:172] (0xc00003a840) Reply frame received for 1\nI0416 00:08:19.302769 1706 log.go:172] (0xc00003a840) (0xc00092a000) Create stream\nI0416 00:08:19.302777 1706 log.go:172] (0xc00003a840) (0xc00092a000) Stream added, broadcasting: 3\nI0416 00:08:19.303836 1706 log.go:172] (0xc00003a840) Reply frame received for 3\nI0416 00:08:19.303864 1706 log.go:172] (0xc00003a840) (0xc0009c0000) Create stream\nI0416 00:08:19.303881 1706 log.go:172] (0xc00003a840) (0xc0009c0000) Stream added, broadcasting: 5\nI0416 00:08:19.304850 1706 log.go:172] (0xc00003a840) Reply frame received for 5\nI0416 00:08:19.363161 1706 log.go:172] (0xc00003a840) Data frame received for 3\nI0416 00:08:19.363207 1706 log.go:172] (0xc00092a000) (3) Data frame handling\nI0416 00:08:19.363236 1706 log.go:172] (0xc00003a840) Data frame received for 5\nI0416 00:08:19.363270 1706 log.go:172] (0xc0009c0000) (5) Data frame handling\nI0416 00:08:19.363293 1706 log.go:172] (0xc0009c0000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0416 00:08:19.363321 1706 log.go:172] (0xc00092a000) (3) Data frame sent\nI0416 00:08:19.363343 1706 log.go:172] (0xc00003a840) Data frame received for 3\nI0416 00:08:19.363361 1706 log.go:172] (0xc00092a000) (3) Data frame handling\nI0416 00:08:19.363460 1706 log.go:172] (0xc00003a840) Data frame received for 5\nI0416 00:08:19.363498 1706 log.go:172] (0xc0009c0000) (5) Data frame handling\nI0416 00:08:19.365568 1706 log.go:172] (0xc00003a840) Data frame received for 1\nI0416 00:08:19.365636 1706 log.go:172] (0xc00056ab40) (1) Data frame handling\nI0416 00:08:19.365657 1706 log.go:172] (0xc00056ab40) (1) Data frame sent\nI0416 00:08:19.365672 1706 log.go:172] 
(0xc00003a840) (0xc00056ab40) Stream removed, broadcasting: 1\nI0416 00:08:19.365691 1706 log.go:172] (0xc00003a840) Go away received\nI0416 00:08:19.366217 1706 log.go:172] (0xc00003a840) (0xc00056ab40) Stream removed, broadcasting: 1\nI0416 00:08:19.366242 1706 log.go:172] (0xc00003a840) (0xc00092a000) Stream removed, broadcasting: 3\nI0416 00:08:19.366265 1706 log.go:172] (0xc00003a840) (0xc0009c0000) Stream removed, broadcasting: 5\n" Apr 16 00:08:19.371: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 16 00:08:19.371: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 16 00:08:19.371: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 16 00:08:39.387: INFO: Deleting all statefulset in ns statefulset-6387 Apr 16 00:08:39.390: INFO: Scaling statefulset ss to 0 Apr 16 00:08:39.401: INFO: Waiting for statefulset status.replicas updated to 0 Apr 16 00:08:39.403: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:08:39.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6387" for this suite. 
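Every readiness toggle logged above is the same kubectl invocation with only the pod name and move direction changing. A minimal sketch of how such a command line could be assembled (the helper name and signature are illustrative, not the e2e framework's API; server and kubeconfig values are taken from this run's log):

```python
def build_exec_mv(namespace, pod, src, dst,
                  server="https://172.30.12.66:32771",
                  kubeconfig="/root/.kube/config"):
    """Assemble the argv logged before each readiness toggle: exec into
    the pod and move index.html, with `|| true` so a missing file does
    not fail the step when the toggle is already in the desired state."""
    shell_cmd = f"mv -v {src} {dst} || true"
    return ["kubectl", f"--server={server}", f"--kubeconfig={kubeconfig}",
            "exec", f"--namespace={namespace}", pod, "--",
            "/bin/sh", "-x", "-c", shell_cmd]

# Rebuild the ss-0 "restore readiness" command from the log above.
argv = build_exec_mv("statefulset-6387", "ss-0",
                     "/tmp/index.html", "/usr/local/apache2/htdocs/")
print(" ".join(argv))
```

Moving `index.html` out of the httpd document root makes the pod's readiness probe fail, which is how the suite forces the "halt with unhealthy stateful pod" condition; moving it back restores readiness.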
• [SLOW TEST:82.116 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":105,"skipped":1746,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:08:39.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Apr 16 00:08:39.528: INFO: Pod name pod-release: Found 0 pods out of 1 Apr 16 00:08:44.550: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 
Apr 16 00:08:45.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6855" for this suite. • [SLOW TEST:6.147 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":106,"skipped":1769,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:08:45.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name secret-emptykey-test-40ef7746-8041-49a1-8ebc-651ed83a9b6e [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:08:45.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5703" for this suite. 
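The pod-completion waits in these specs ("Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'", sampled roughly every two seconds) follow a plain poll-until-terminal-phase loop. A self-contained sketch of that pattern with an injected clock and sleep so it can be exercised offline (this is an assumption about the shape of the check, not the framework's actual implementation):

```python
import time

def wait_for_pod_phase(get_phase, phases=("Succeeded", "Failed"),
                       timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns one of `phases`, or raise
    TimeoutError once `timeout` seconds have elapsed."""
    deadline = clock() + timeout
    while True:
        phase = get_phase()
        if phase in phases:
            return phase
        if clock() >= deadline:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        sleep(interval)

# Fake pod that reports Pending twice, then Succeeded, mirroring the
# Pending/Pending/Succeeded progression in the log above.
history = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_pod_phase(lambda: next(history), sleep=lambda s: None))
```

Injecting `clock` and `sleep` keeps the loop deterministic under test while the production defaults give the 2s cadence visible in the elapsed-time log lines.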
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":107,"skipped":1808,"failed":0} SS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:08:45.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-829fd7c1-aca0-4c0b-af00-8a1a4d773662 STEP: Creating a pod to test consume secrets Apr 16 00:08:46.003: INFO: Waiting up to 5m0s for pod "pod-secrets-4c1f810a-8029-48e6-a4ff-0565edfa2e53" in namespace "secrets-2341" to be "Succeeded or Failed" Apr 16 00:08:46.006: INFO: Pod "pod-secrets-4c1f810a-8029-48e6-a4ff-0565edfa2e53": Phase="Pending", Reason="", readiness=false. Elapsed: 3.686354ms Apr 16 00:08:48.305: INFO: Pod "pod-secrets-4c1f810a-8029-48e6-a4ff-0565edfa2e53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.302438466s Apr 16 00:08:50.308: INFO: Pod "pod-secrets-4c1f810a-8029-48e6-a4ff-0565edfa2e53": Phase="Pending", Reason="", readiness=false. Elapsed: 4.305866833s Apr 16 00:08:52.312: INFO: Pod "pod-secrets-4c1f810a-8029-48e6-a4ff-0565edfa2e53": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.30975507s STEP: Saw pod success Apr 16 00:08:52.312: INFO: Pod "pod-secrets-4c1f810a-8029-48e6-a4ff-0565edfa2e53" satisfied condition "Succeeded or Failed" Apr 16 00:08:52.315: INFO: Trying to get logs from node latest-worker pod pod-secrets-4c1f810a-8029-48e6-a4ff-0565edfa2e53 container secret-volume-test: STEP: delete the pod Apr 16 00:08:52.349: INFO: Waiting for pod pod-secrets-4c1f810a-8029-48e6-a4ff-0565edfa2e53 to disappear Apr 16 00:08:52.360: INFO: Pod pod-secrets-4c1f810a-8029-48e6-a4ff-0565edfa2e53 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:08:52.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2341" for this suite. STEP: Destroying namespace "secret-namespace-3333" for this suite. • [SLOW TEST:6.676 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":108,"skipped":1810,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:08:52.398: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-e8cb4c16-92bb-49ad-a07c-37271b7ab390 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:08:56.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7212" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":109,"skipped":1819,"failed":0} SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:08:56.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Apr 16 00:08:56.685: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:09:02.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-183" for this suite. • [SLOW TEST:5.849 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":110,"skipped":1824,"failed":0} SSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:09:02.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-network] 
Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:09:02.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2665" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":275,"completed":111,"skipped":1829,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:09:02.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 16 00:09:10.924: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 16 00:09:10.975: INFO: Pod pod-with-poststart-http-hook still exists Apr 16 00:09:12.976: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 16 00:09:12.981: INFO: Pod pod-with-poststart-http-hook still exists Apr 16 00:09:14.976: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 16 00:09:14.980: INFO: Pod pod-with-poststart-http-hook still exists Apr 16 00:09:16.976: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 16 00:09:16.980: INFO: Pod pod-with-poststart-http-hook still exists Apr 16 00:09:18.976: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 16 00:09:18.981: INFO: Pod pod-with-poststart-http-hook still exists Apr 16 00:09:20.976: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 16 00:09:20.980: INFO: Pod pod-with-poststart-http-hook still exists Apr 16 00:09:22.976: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 16 00:09:22.980: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:09:22.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8897" for this suite. 
• [SLOW TEST:20.425 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":112,"skipped":1832,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:09:22.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 16 00:09:23.100: INFO: (0) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 4.941561ms) Apr 16 00:09:23.104: INFO: (1) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.960865ms) Apr 16 00:09:23.109: INFO: (2) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 4.643925ms) Apr 16 00:09:23.112: INFO: (3) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.813512ms) Apr 16 00:09:23.115: INFO: (4) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.795214ms) Apr 16 00:09:23.118: INFO: (5) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.918157ms) Apr 16 00:09:23.120: INFO: (6) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.509888ms) Apr 16 00:09:23.123: INFO: (7) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.65716ms) Apr 16 00:09:23.126: INFO: (8) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.827409ms) Apr 16 00:09:23.128: INFO: (9) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.616799ms) Apr 16 00:09:23.131: INFO: (10) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.643813ms) Apr 16 00:09:23.134: INFO: (11) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.920689ms) Apr 16 00:09:23.137: INFO: (12) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.885827ms) Apr 16 00:09:23.140: INFO: (13) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.913561ms) Apr 16 00:09:23.143: INFO: (14) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.200982ms) Apr 16 00:09:23.146: INFO: (15) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.882461ms) Apr 16 00:09:23.149: INFO: (16) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.899693ms) Apr 16 00:09:23.152: INFO: (17) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.051408ms) Apr 16 00:09:23.155: INFO: (18) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.092895ms) Apr 16 00:09:23.158: INFO: (19) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.047625ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:09:23.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4993" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":275,"completed":113,"skipped":1901,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:09:23.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 16 00:09:23.280: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-741' Apr 16 00:09:23.382: INFO: stderr: "" Apr 16 00:09:23.382: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] 
Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423 Apr 16 00:09:23.411: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-741' Apr 16 00:09:32.797: INFO: stderr: "" Apr 16 00:09:32.797: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:09:32.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-741" for this suite. • [SLOW TEST:9.649 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":275,"completed":114,"skipped":1923,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:09:32.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default 
service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Apr 16 00:09:32.894: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:09:47.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9704" for this suite. • [SLOW TEST:14.821 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":115,"skipped":1929,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:09:47.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-ef016729-b9cd-4e63-91a1-d483809dc725 STEP: Creating a pod to test consume secrets Apr 16 00:09:47.699: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b7bcde1a-1c89-4159-89f1-8ed1d560434b" in namespace "projected-4587" to be "Succeeded or Failed" Apr 16 00:09:47.730: INFO: Pod "pod-projected-secrets-b7bcde1a-1c89-4159-89f1-8ed1d560434b": Phase="Pending", Reason="", readiness=false. Elapsed: 30.574037ms Apr 16 00:09:49.734: INFO: Pod "pod-projected-secrets-b7bcde1a-1c89-4159-89f1-8ed1d560434b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034766741s Apr 16 00:09:51.739: INFO: Pod "pod-projected-secrets-b7bcde1a-1c89-4159-89f1-8ed1d560434b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039237614s STEP: Saw pod success Apr 16 00:09:51.739: INFO: Pod "pod-projected-secrets-b7bcde1a-1c89-4159-89f1-8ed1d560434b" satisfied condition "Succeeded or Failed" Apr 16 00:09:51.741: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-b7bcde1a-1c89-4159-89f1-8ed1d560434b container projected-secret-volume-test: STEP: delete the pod Apr 16 00:09:51.773: INFO: Waiting for pod pod-projected-secrets-b7bcde1a-1c89-4159-89f1-8ed1d560434b to disappear Apr 16 00:09:51.795: INFO: Pod pod-projected-secrets-b7bcde1a-1c89-4159-89f1-8ed1d560434b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:09:51.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4587" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":116,"skipped":1968,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:09:51.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Apr 16 00:09:51.839: INFO: PodSpec: initContainers in spec.initContainers Apr 16 00:10:42.726: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-53b133f1-7c92-4653-9f51-fb94008096c8", GenerateName:"", Namespace:"init-container-1584", SelfLink:"/api/v1/namespaces/init-container-1584/pods/pod-init-53b133f1-7c92-4653-9f51-fb94008096c8", UID:"b5892bec-a0f8-4f26-81f4-91f0de26c4f9", ResourceVersion:"8402425", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63722592591, loc:(*time.Location)(0x7b1e080)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", 
"time":"839694513"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-kvvf9", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc003e46cc0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-kvvf9", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-kvvf9", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", 
Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-kvvf9", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0044e9ee8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000eec2a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0044e9f70)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0044e9f90)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0044e9f98), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0044e9f9c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", 
Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722592591, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722592591, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722592591, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722592591, loc:(*time.Location)(0x7b1e080)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.12", PodIP:"10.244.1.239", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.239"}}, StartTime:(*v1.Time)(0xc0032d3e40), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000eec460)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000eec540)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", 
ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://c8646de5fb43d14045ad394eec152badc007f44e52609af09f5fba37054f794d", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0032d3e80), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0032d3e60), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc00451801f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:10:42.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1584" for this suite. 
• [SLOW TEST:50.967 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":117,"skipped":2018,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:10:42.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 16 00:10:42.864: INFO: Waiting up to 5m0s for pod "downwardapi-volume-25f241f7-2cf2-44d4-ab54-457b476ef5be" in namespace "projected-8400" to be "Succeeded or Failed" Apr 16 00:10:42.883: INFO: Pod "downwardapi-volume-25f241f7-2cf2-44d4-ab54-457b476ef5be": Phase="Pending", Reason="", 
readiness=false. Elapsed: 19.016652ms Apr 16 00:10:44.893: INFO: Pod "downwardapi-volume-25f241f7-2cf2-44d4-ab54-457b476ef5be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028436435s Apr 16 00:10:46.896: INFO: Pod "downwardapi-volume-25f241f7-2cf2-44d4-ab54-457b476ef5be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031980587s STEP: Saw pod success Apr 16 00:10:46.897: INFO: Pod "downwardapi-volume-25f241f7-2cf2-44d4-ab54-457b476ef5be" satisfied condition "Succeeded or Failed" Apr 16 00:10:46.900: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-25f241f7-2cf2-44d4-ab54-457b476ef5be container client-container: STEP: delete the pod Apr 16 00:10:46.930: INFO: Waiting for pod downwardapi-volume-25f241f7-2cf2-44d4-ab54-457b476ef5be to disappear Apr 16 00:10:46.935: INFO: Pod downwardapi-volume-25f241f7-2cf2-44d4-ab54-457b476ef5be no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:10:46.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8400" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":118,"skipped":2031,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:10:46.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-beb9fd5c-3e6b-4cfa-850d-5e94a86b4ab4
STEP: Creating secret with name s-test-opt-upd-f17850df-49e4-4f5f-bcce-df94ad7a46c5
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-beb9fd5c-3e6b-4cfa-850d-5e94a86b4ab4
STEP: Updating secret s-test-opt-upd-f17850df-49e4-4f5f-bcce-df94ad7a46c5
STEP: Creating secret with name s-test-opt-create-b7c55137-8b73-4acd-a0a5-875293b28a90
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:10:57.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1309" for this suite.
• [SLOW TEST:10.187 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":119,"skipped":2082,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:10:57.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 16 00:10:57.826: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 16 00:10:59.838: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722592657, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722592657, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722592657, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722592657, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 16 00:11:02.865: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:11:03.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3832" for this suite.
STEP: Destroying namespace "webhook-3832-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.262 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
patching/updating a mutating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":120,"skipped":2092,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:11:03.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 16 00:11:04.207: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 16 00:11:06.218: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722592664, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722592664, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722592664, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722592664, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 16 00:11:08.222: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722592664, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722592664, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722592664, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722592664, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 16 00:11:11.249: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Apr 16 00:11:15.300: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config attach --namespace=webhook-5548 to-be-attached-pod -i -c=container1'
Apr 16 00:11:17.811: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:11:17.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5548" for this suite.
STEP: Destroying namespace "webhook-5548-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:14.539 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to deny attaching pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":121,"skipped":2107,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:11:17.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 16 00:11:18.691: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 16 00:11:20.702: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722592678, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722592678, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722592678, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722592678, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 16 00:11:23.732: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:11:24.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6428" for this suite.
STEP: Destroying namespace "webhook-6428-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.369 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate pod and apply defaults after mutation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":122,"skipped":2117,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:11:24.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Apr 16 00:11:24.404: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 16 00:11:24.422: INFO: Waiting for terminating namespaces to be deleted...
Apr 16 00:11:24.425: INFO: Logging pods the kubelet thinks is on node latest-worker before test
Apr 16 00:11:24.430: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 16 00:11:24.430: INFO: Container kindnet-cni ready: true, restart count 0
Apr 16 00:11:24.430: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 16 00:11:24.430: INFO: Container kube-proxy ready: true, restart count 0
Apr 16 00:11:24.430: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test
Apr 16 00:11:24.444: INFO: to-be-attached-pod from webhook-5548 started at 2020-04-16 00:11:11 +0000 UTC (1 container statuses recorded)
Apr 16 00:11:24.444: INFO: Container container1 ready: true, restart count 0
Apr 16 00:11:24.444: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 16 00:11:24.444: INFO: Container kindnet-cni ready: true, restart count 0
Apr 16 00:11:24.444: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 16 00:11:24.444: INFO: Container kube-proxy ready: true, restart count 0
Apr 16 00:11:24.444: INFO: webhook-to-be-mutated from webhook-6428 started at 2020-04-16 00:11:24 +0000 UTC (1 container statuses recorded)
Apr 16 00:11:24.444: INFO: Container example ready: false, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-73e1af98-bea8-4597-bf30-5311fbe96b80 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-73e1af98-bea8-4597-bf30-5311fbe96b80 off the node latest-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-73e1af98-bea8-4597-bf30-5311fbe96b80
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:11:32.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9693" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
• [SLOW TEST:8.507 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":275,"completed":123,"skipped":2150,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:11:32.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-projected-all-test-volume-64907db6-e431-47d4-bbfb-0d6cc9b1dc10
STEP: Creating secret with name secret-projected-all-test-volume-e12f2037-99c4-4c0b-8da8-ebc4488dce51
STEP: Creating a pod to test Check all projections for projected volume plugin
Apr 16 00:11:32.932: INFO: Waiting up to 5m0s for pod "projected-volume-ca0bc6f1-2bb9-4382-bb0d-64919764dfcf" in namespace "projected-5129" to be "Succeeded or Failed"
Apr 16 00:11:32.937: INFO: Pod "projected-volume-ca0bc6f1-2bb9-4382-bb0d-64919764dfcf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.600318ms
Apr 16 00:11:34.939: INFO: Pod "projected-volume-ca0bc6f1-2bb9-4382-bb0d-64919764dfcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006981695s
Apr 16 00:11:36.944: INFO: Pod "projected-volume-ca0bc6f1-2bb9-4382-bb0d-64919764dfcf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011876781s
STEP: Saw pod success
Apr 16 00:11:36.944: INFO: Pod "projected-volume-ca0bc6f1-2bb9-4382-bb0d-64919764dfcf" satisfied condition "Succeeded or Failed"
Apr 16 00:11:36.947: INFO: Trying to get logs from node latest-worker pod projected-volume-ca0bc6f1-2bb9-4382-bb0d-64919764dfcf container projected-all-volume-test:
STEP: delete the pod
Apr 16 00:11:36.965: INFO: Waiting for pod projected-volume-ca0bc6f1-2bb9-4382-bb0d-64919764dfcf to disappear
Apr 16 00:11:36.969: INFO: Pod projected-volume-ca0bc6f1-2bb9-4382-bb0d-64919764dfcf no longer exists
[AfterEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:11:36.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5129" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":124,"skipped":2181,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:11:36.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 16 00:11:37.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:11:41.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-304" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":125,"skipped":2195,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:11:41.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:11:41.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-1662" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":126,"skipped":2206,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:11:41.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 16 00:11:41.625: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 16 00:11:43.636: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722592701, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722592701, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722592701, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722592701, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 16 00:11:46.661: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 16 00:11:46.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6311-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:11:47.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9598" for this suite.
STEP: Destroying namespace "webhook-9598-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.745 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":127,"skipped":2232,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:11:47.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 16 00:11:48.020: INFO: Waiting up to 5m0s for pod "pod-ac1d5454-3b8a-4090-b0cf-7e0d0c23415a" in namespace "emptydir-7308" to be "Succeeded or Failed"
Apr 16 00:11:48.024: INFO: Pod "pod-ac1d5454-3b8a-4090-b0cf-7e0d0c23415a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.172631ms
Apr 16 00:11:50.139: INFO: Pod "pod-ac1d5454-3b8a-4090-b0cf-7e0d0c23415a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119152011s
Apr 16 00:11:52.144: INFO: Pod "pod-ac1d5454-3b8a-4090-b0cf-7e0d0c23415a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.123610785s
STEP: Saw pod success
Apr 16 00:11:52.144: INFO: Pod "pod-ac1d5454-3b8a-4090-b0cf-7e0d0c23415a" satisfied condition "Succeeded or Failed"
Apr 16 00:11:52.147: INFO: Trying to get logs from node latest-worker pod pod-ac1d5454-3b8a-4090-b0cf-7e0d0c23415a container test-container:
STEP: delete the pod
Apr 16 00:11:52.163: INFO: Waiting for pod pod-ac1d5454-3b8a-4090-b0cf-7e0d0c23415a to disappear
Apr 16 00:11:52.168: INFO: Pod pod-ac1d5454-3b8a-4090-b0cf-7e0d0c23415a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:11:52.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7308" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":128,"skipped":2251,"failed":0}
SSSSSS
------------------------------
[sig-network] Services should find a service from listing all namespaces [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:11:52.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should find a service from listing all namespaces [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching services
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:11:52.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2531" for this suite.
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":129,"skipped":2257,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:11:52.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1636.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1636.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1636.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1636.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1636.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1636.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 16 00:11:58.436: INFO: DNS probes using dns-1636/dns-test-d19af6b4-bf66-4da4-84b6-d6b7f00c91b6 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:11:58.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1636" for this suite.
• [SLOW TEST:6.184 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":130,"skipped":2273,"failed":0}
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:11:58.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Apr 16 00:12:08.818: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-599 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 16 00:12:08.818: INFO: >>> kubeConfig: /root/.kube/config
I0416 00:12:08.848150 7 log.go:172] (0xc002d284d0) (0xc0017a8820) Create stream
I0416 00:12:08.848176 7 log.go:172] (0xc002d284d0) (0xc0017a8820) Stream added, broadcasting: 1
I0416 00:12:08.850007 7 log.go:172] (0xc002d284d0) Reply frame received for 1
I0416 00:12:08.850038 7 log.go:172] (0xc002d284d0) (0xc002164280) Create stream
I0416 00:12:08.850047 7 log.go:172] (0xc002d284d0) (0xc002164280) Stream added, broadcasting: 3
I0416 00:12:08.850817 7 log.go:172] (0xc002d284d0) Reply frame received for 3
I0416 00:12:08.850845 7 log.go:172] (0xc002d284d0) (0xc00272d0e0) Create stream
I0416 00:12:08.850855 7 log.go:172] (0xc002d284d0) (0xc00272d0e0) Stream added, broadcasting: 5
I0416 00:12:08.851819 7 log.go:172] (0xc002d284d0) Reply frame received for 5
I0416 00:12:08.934763 7 log.go:172] (0xc002d284d0) Data frame received for 3
I0416 00:12:08.934833 7 log.go:172] (0xc002164280) (3) Data frame handling
I0416 00:12:08.934858 7 log.go:172] (0xc002164280) (3) Data frame sent
I0416 00:12:08.934875 7 log.go:172] (0xc002d284d0) Data frame received for 3
I0416 00:12:08.934893 7 log.go:172] (0xc002164280) (3) Data frame handling
I0416 00:12:08.934925 7 log.go:172] (0xc002d284d0) Data frame received for 5
I0416 00:12:08.934976 7 log.go:172] (0xc00272d0e0) (5) Data frame handling
I0416 00:12:08.936638 7 log.go:172] (0xc002d284d0) Data frame received for 1
I0416 00:12:08.936666 7 log.go:172] (0xc0017a8820) (1) Data frame handling
I0416 00:12:08.936690 7 log.go:172] (0xc0017a8820) (1) Data frame sent
I0416 00:12:08.936744 7 log.go:172] (0xc002d284d0) (0xc0017a8820) Stream removed, broadcasting: 1
I0416 00:12:08.936777 7 log.go:172] (0xc002d284d0) Go away received
I0416 00:12:08.936876 7 log.go:172] (0xc002d284d0) (0xc0017a8820) Stream removed, broadcasting: 1
I0416 00:12:08.936898 7 log.go:172] (0xc002d284d0) (0xc002164280) Stream removed, broadcasting: 3
I0416 00:12:08.936908 7 log.go:172] (0xc002d284d0) (0xc00272d0e0) Stream removed, broadcasting: 5
Apr 16 00:12:08.936: INFO: Exec stderr: ""
Apr 16 00:12:08.936: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-599 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 16 00:12:08.936: INFO: >>> kubeConfig: /root/.kube/config
I0416 00:12:08.963413 7 log.go:172] (0xc002d28bb0) (0xc0017a8aa0) Create stream
I0416 00:12:08.963451 7 log.go:172] (0xc002d28bb0) (0xc0017a8aa0) Stream added, broadcasting: 1
I0416 00:12:08.965656 7 log.go:172] (0xc002d28bb0) Reply frame received for 1
I0416 00:12:08.965688 7 log.go:172] (0xc002d28bb0) (0xc0023d2000) Create stream
I0416 00:12:08.965704 7 log.go:172] (0xc002d28bb0) (0xc0023d2000) Stream added, broadcasting: 3
I0416 00:12:08.966618 7 log.go:172] (0xc002d28bb0) Reply frame received for 3
I0416 00:12:08.966653 7 log.go:172] (0xc002d28bb0) (0xc00272d180) Create stream
I0416 00:12:08.966668 7 log.go:172] (0xc002d28bb0) (0xc00272d180) Stream added, broadcasting: 5
I0416 00:12:08.967745 7 log.go:172] (0xc002d28bb0) Reply frame received for 5
I0416 00:12:09.031834 7 log.go:172] (0xc002d28bb0) Data frame received for 5
I0416 00:12:09.031905 7 log.go:172] (0xc00272d180) (5) Data frame handling
I0416 00:12:09.031943 7 log.go:172] (0xc002d28bb0) Data frame received for 3
I0416 00:12:09.032032 7 log.go:172] (0xc0023d2000) (3) Data frame handling
I0416 00:12:09.032068 7 log.go:172] (0xc0023d2000) (3) Data frame sent
I0416 00:12:09.032280 7 log.go:172] (0xc002d28bb0) Data frame received for 3
I0416 00:12:09.032303 7 log.go:172] (0xc0023d2000) (3) Data frame handling
I0416 00:12:09.033998 7 log.go:172] (0xc002d28bb0) Data frame received for 1
I0416 00:12:09.034035 7 log.go:172] (0xc0017a8aa0) (1) Data frame handling
I0416 00:12:09.034085 7 log.go:172] (0xc0017a8aa0) (1) Data frame sent
I0416 00:12:09.034117 7 log.go:172] (0xc002d28bb0) (0xc0017a8aa0) Stream removed, broadcasting: 1
I0416 00:12:09.034213 7 log.go:172] (0xc002d28bb0) Go away received
I0416 00:12:09.034261 7 log.go:172] (0xc002d28bb0) (0xc0017a8aa0) Stream removed, broadcasting: 1
I0416 00:12:09.034294 7 log.go:172] (0xc002d28bb0) (0xc0023d2000) Stream removed, broadcasting: 3
I0416
00:12:09.034317 7 log.go:172] (0xc002d28bb0) (0xc00272d180) Stream removed, broadcasting: 5 Apr 16 00:12:09.034: INFO: Exec stderr: "" Apr 16 00:12:09.034: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-599 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 16 00:12:09.034: INFO: >>> kubeConfig: /root/.kube/config I0416 00:12:09.064250 7 log.go:172] (0xc002b360b0) (0xc00272d400) Create stream I0416 00:12:09.064284 7 log.go:172] (0xc002b360b0) (0xc00272d400) Stream added, broadcasting: 1 I0416 00:12:09.066815 7 log.go:172] (0xc002b360b0) Reply frame received for 1 I0416 00:12:09.066856 7 log.go:172] (0xc002b360b0) (0xc0011472c0) Create stream I0416 00:12:09.066871 7 log.go:172] (0xc002b360b0) (0xc0011472c0) Stream added, broadcasting: 3 I0416 00:12:09.068015 7 log.go:172] (0xc002b360b0) Reply frame received for 3 I0416 00:12:09.068075 7 log.go:172] (0xc002b360b0) (0xc0017a8d20) Create stream I0416 00:12:09.068097 7 log.go:172] (0xc002b360b0) (0xc0017a8d20) Stream added, broadcasting: 5 I0416 00:12:09.069522 7 log.go:172] (0xc002b360b0) Reply frame received for 5 I0416 00:12:09.138781 7 log.go:172] (0xc002b360b0) Data frame received for 5 I0416 00:12:09.138832 7 log.go:172] (0xc0017a8d20) (5) Data frame handling I0416 00:12:09.138871 7 log.go:172] (0xc002b360b0) Data frame received for 3 I0416 00:12:09.138890 7 log.go:172] (0xc0011472c0) (3) Data frame handling I0416 00:12:09.138925 7 log.go:172] (0xc0011472c0) (3) Data frame sent I0416 00:12:09.138952 7 log.go:172] (0xc002b360b0) Data frame received for 3 I0416 00:12:09.138971 7 log.go:172] (0xc0011472c0) (3) Data frame handling I0416 00:12:09.140573 7 log.go:172] (0xc002b360b0) Data frame received for 1 I0416 00:12:09.140599 7 log.go:172] (0xc00272d400) (1) Data frame handling I0416 00:12:09.140622 7 log.go:172] (0xc00272d400) (1) Data frame sent I0416 00:12:09.140807 7 log.go:172] (0xc002b360b0) (0xc00272d400) Stream 
removed, broadcasting: 1 I0416 00:12:09.140845 7 log.go:172] (0xc002b360b0) Go away received I0416 00:12:09.140982 7 log.go:172] (0xc002b360b0) (0xc00272d400) Stream removed, broadcasting: 1 I0416 00:12:09.141018 7 log.go:172] (0xc002b360b0) (0xc0011472c0) Stream removed, broadcasting: 3 I0416 00:12:09.141030 7 log.go:172] (0xc002b360b0) (0xc0017a8d20) Stream removed, broadcasting: 5 Apr 16 00:12:09.141: INFO: Exec stderr: "" Apr 16 00:12:09.141: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-599 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 16 00:12:09.141: INFO: >>> kubeConfig: /root/.kube/config I0416 00:12:09.168837 7 log.go:172] (0xc002ce48f0) (0xc001147c20) Create stream I0416 00:12:09.168871 7 log.go:172] (0xc002ce48f0) (0xc001147c20) Stream added, broadcasting: 1 I0416 00:12:09.171292 7 log.go:172] (0xc002ce48f0) Reply frame received for 1 I0416 00:12:09.171329 7 log.go:172] (0xc002ce48f0) (0xc00272d4a0) Create stream I0416 00:12:09.171349 7 log.go:172] (0xc002ce48f0) (0xc00272d4a0) Stream added, broadcasting: 3 I0416 00:12:09.172179 7 log.go:172] (0xc002ce48f0) Reply frame received for 3 I0416 00:12:09.172246 7 log.go:172] (0xc002ce48f0) (0xc0017a8f00) Create stream I0416 00:12:09.172260 7 log.go:172] (0xc002ce48f0) (0xc0017a8f00) Stream added, broadcasting: 5 I0416 00:12:09.173283 7 log.go:172] (0xc002ce48f0) Reply frame received for 5 I0416 00:12:09.238720 7 log.go:172] (0xc002ce48f0) Data frame received for 5 I0416 00:12:09.238760 7 log.go:172] (0xc0017a8f00) (5) Data frame handling I0416 00:12:09.238787 7 log.go:172] (0xc002ce48f0) Data frame received for 3 I0416 00:12:09.238834 7 log.go:172] (0xc00272d4a0) (3) Data frame handling I0416 00:12:09.238862 7 log.go:172] (0xc00272d4a0) (3) Data frame sent I0416 00:12:09.238877 7 log.go:172] (0xc002ce48f0) Data frame received for 3 I0416 00:12:09.238890 7 log.go:172] (0xc00272d4a0) (3) Data frame 
handling I0416 00:12:09.239884 7 log.go:172] (0xc002ce48f0) Data frame received for 1 I0416 00:12:09.239907 7 log.go:172] (0xc001147c20) (1) Data frame handling I0416 00:12:09.239920 7 log.go:172] (0xc001147c20) (1) Data frame sent I0416 00:12:09.239933 7 log.go:172] (0xc002ce48f0) (0xc001147c20) Stream removed, broadcasting: 1 I0416 00:12:09.239987 7 log.go:172] (0xc002ce48f0) Go away received I0416 00:12:09.240017 7 log.go:172] (0xc002ce48f0) (0xc001147c20) Stream removed, broadcasting: 1 I0416 00:12:09.240035 7 log.go:172] (0xc002ce48f0) (0xc00272d4a0) Stream removed, broadcasting: 3 I0416 00:12:09.240043 7 log.go:172] (0xc002ce48f0) (0xc0017a8f00) Stream removed, broadcasting: 5 Apr 16 00:12:09.240: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Apr 16 00:12:09.240: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-599 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 16 00:12:09.240: INFO: >>> kubeConfig: /root/.kube/config I0416 00:12:09.302482 7 log.go:172] (0xc002ce4fd0) (0xc001ee60a0) Create stream I0416 00:12:09.302511 7 log.go:172] (0xc002ce4fd0) (0xc001ee60a0) Stream added, broadcasting: 1 I0416 00:12:09.305301 7 log.go:172] (0xc002ce4fd0) Reply frame received for 1 I0416 00:12:09.305372 7 log.go:172] (0xc002ce4fd0) (0xc0021643c0) Create stream I0416 00:12:09.305392 7 log.go:172] (0xc002ce4fd0) (0xc0021643c0) Stream added, broadcasting: 3 I0416 00:12:09.306320 7 log.go:172] (0xc002ce4fd0) Reply frame received for 3 I0416 00:12:09.306344 7 log.go:172] (0xc002ce4fd0) (0xc001ee6140) Create stream I0416 00:12:09.306352 7 log.go:172] (0xc002ce4fd0) (0xc001ee6140) Stream added, broadcasting: 5 I0416 00:12:09.307250 7 log.go:172] (0xc002ce4fd0) Reply frame received for 5 I0416 00:12:09.373106 7 log.go:172] (0xc002ce4fd0) Data frame received for 3 I0416 00:12:09.373265 7 log.go:172] 
(0xc0021643c0) (3) Data frame handling I0416 00:12:09.373309 7 log.go:172] (0xc0021643c0) (3) Data frame sent I0416 00:12:09.373333 7 log.go:172] (0xc002ce4fd0) Data frame received for 3 I0416 00:12:09.373346 7 log.go:172] (0xc0021643c0) (3) Data frame handling I0416 00:12:09.373382 7 log.go:172] (0xc002ce4fd0) Data frame received for 5 I0416 00:12:09.373420 7 log.go:172] (0xc001ee6140) (5) Data frame handling I0416 00:12:09.375240 7 log.go:172] (0xc002ce4fd0) Data frame received for 1 I0416 00:12:09.375274 7 log.go:172] (0xc001ee60a0) (1) Data frame handling I0416 00:12:09.375333 7 log.go:172] (0xc001ee60a0) (1) Data frame sent I0416 00:12:09.375361 7 log.go:172] (0xc002ce4fd0) (0xc001ee60a0) Stream removed, broadcasting: 1 I0416 00:12:09.375378 7 log.go:172] (0xc002ce4fd0) Go away received I0416 00:12:09.375520 7 log.go:172] (0xc002ce4fd0) (0xc001ee60a0) Stream removed, broadcasting: 1 I0416 00:12:09.375538 7 log.go:172] (0xc002ce4fd0) (0xc0021643c0) Stream removed, broadcasting: 3 I0416 00:12:09.375546 7 log.go:172] (0xc002ce4fd0) (0xc001ee6140) Stream removed, broadcasting: 5 Apr 16 00:12:09.375: INFO: Exec stderr: "" Apr 16 00:12:09.375: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-599 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 16 00:12:09.375: INFO: >>> kubeConfig: /root/.kube/config I0416 00:12:09.415839 7 log.go:172] (0xc002d291e0) (0xc0017a94a0) Create stream I0416 00:12:09.415880 7 log.go:172] (0xc002d291e0) (0xc0017a94a0) Stream added, broadcasting: 1 I0416 00:12:09.426097 7 log.go:172] (0xc002d291e0) Reply frame received for 1 I0416 00:12:09.426167 7 log.go:172] (0xc002d291e0) (0xc00272d5e0) Create stream I0416 00:12:09.426226 7 log.go:172] (0xc002d291e0) (0xc00272d5e0) Stream added, broadcasting: 3 I0416 00:12:09.439645 7 log.go:172] (0xc002d291e0) Reply frame received for 3 I0416 00:12:09.439703 7 log.go:172] (0xc002d291e0) 
(0xc001ee61e0) Create stream I0416 00:12:09.439715 7 log.go:172] (0xc002d291e0) (0xc001ee61e0) Stream added, broadcasting: 5 I0416 00:12:09.443936 7 log.go:172] (0xc002d291e0) Reply frame received for 5 I0416 00:12:09.481043 7 log.go:172] (0xc002d291e0) Data frame received for 3 I0416 00:12:09.481066 7 log.go:172] (0xc00272d5e0) (3) Data frame handling I0416 00:12:09.481075 7 log.go:172] (0xc00272d5e0) (3) Data frame sent I0416 00:12:09.481079 7 log.go:172] (0xc002d291e0) Data frame received for 3 I0416 00:12:09.481083 7 log.go:172] (0xc00272d5e0) (3) Data frame handling I0416 00:12:09.481258 7 log.go:172] (0xc002d291e0) Data frame received for 5 I0416 00:12:09.481292 7 log.go:172] (0xc001ee61e0) (5) Data frame handling I0416 00:12:09.482696 7 log.go:172] (0xc002d291e0) Data frame received for 1 I0416 00:12:09.482710 7 log.go:172] (0xc0017a94a0) (1) Data frame handling I0416 00:12:09.482724 7 log.go:172] (0xc0017a94a0) (1) Data frame sent I0416 00:12:09.482732 7 log.go:172] (0xc002d291e0) (0xc0017a94a0) Stream removed, broadcasting: 1 I0416 00:12:09.482764 7 log.go:172] (0xc002d291e0) Go away received I0416 00:12:09.482826 7 log.go:172] (0xc002d291e0) (0xc0017a94a0) Stream removed, broadcasting: 1 I0416 00:12:09.482842 7 log.go:172] (0xc002d291e0) (0xc00272d5e0) Stream removed, broadcasting: 3 I0416 00:12:09.482853 7 log.go:172] (0xc002d291e0) (0xc001ee61e0) Stream removed, broadcasting: 5 Apr 16 00:12:09.482: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Apr 16 00:12:09.482: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-599 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 16 00:12:09.482: INFO: >>> kubeConfig: /root/.kube/config I0416 00:12:09.515697 7 log.go:172] (0xc001456370) (0xc0021646e0) Create stream I0416 00:12:09.515729 7 log.go:172] (0xc001456370) (0xc0021646e0) 
Stream added, broadcasting: 1 I0416 00:12:09.518009 7 log.go:172] (0xc001456370) Reply frame received for 1 I0416 00:12:09.518043 7 log.go:172] (0xc001456370) (0xc00272d680) Create stream I0416 00:12:09.518054 7 log.go:172] (0xc001456370) (0xc00272d680) Stream added, broadcasting: 3 I0416 00:12:09.518924 7 log.go:172] (0xc001456370) Reply frame received for 3 I0416 00:12:09.518962 7 log.go:172] (0xc001456370) (0xc00272d720) Create stream I0416 00:12:09.518984 7 log.go:172] (0xc001456370) (0xc00272d720) Stream added, broadcasting: 5 I0416 00:12:09.519841 7 log.go:172] (0xc001456370) Reply frame received for 5 I0416 00:12:09.582780 7 log.go:172] (0xc001456370) Data frame received for 5 I0416 00:12:09.582843 7 log.go:172] (0xc00272d720) (5) Data frame handling I0416 00:12:09.582896 7 log.go:172] (0xc001456370) Data frame received for 3 I0416 00:12:09.582937 7 log.go:172] (0xc00272d680) (3) Data frame handling I0416 00:12:09.582976 7 log.go:172] (0xc00272d680) (3) Data frame sent I0416 00:12:09.582993 7 log.go:172] (0xc001456370) Data frame received for 3 I0416 00:12:09.583006 7 log.go:172] (0xc00272d680) (3) Data frame handling I0416 00:12:09.584669 7 log.go:172] (0xc001456370) Data frame received for 1 I0416 00:12:09.584703 7 log.go:172] (0xc0021646e0) (1) Data frame handling I0416 00:12:09.584723 7 log.go:172] (0xc0021646e0) (1) Data frame sent I0416 00:12:09.584761 7 log.go:172] (0xc001456370) (0xc0021646e0) Stream removed, broadcasting: 1 I0416 00:12:09.584834 7 log.go:172] (0xc001456370) Go away received I0416 00:12:09.584877 7 log.go:172] (0xc001456370) (0xc0021646e0) Stream removed, broadcasting: 1 I0416 00:12:09.584921 7 log.go:172] (0xc001456370) (0xc00272d680) Stream removed, broadcasting: 3 I0416 00:12:09.584938 7 log.go:172] (0xc001456370) (0xc00272d720) Stream removed, broadcasting: 5 Apr 16 00:12:09.584: INFO: Exec stderr: "" Apr 16 00:12:09.584: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-599 
PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 16 00:12:09.585: INFO: >>> kubeConfig: /root/.kube/config I0416 00:12:09.620561 7 log.go:172] (0xc002b369a0) (0xc00272d860) Create stream I0416 00:12:09.620588 7 log.go:172] (0xc002b369a0) (0xc00272d860) Stream added, broadcasting: 1 I0416 00:12:09.622725 7 log.go:172] (0xc002b369a0) Reply frame received for 1 I0416 00:12:09.622768 7 log.go:172] (0xc002b369a0) (0xc002164780) Create stream I0416 00:12:09.622818 7 log.go:172] (0xc002b369a0) (0xc002164780) Stream added, broadcasting: 3 I0416 00:12:09.623593 7 log.go:172] (0xc002b369a0) Reply frame received for 3 I0416 00:12:09.623629 7 log.go:172] (0xc002b369a0) (0xc002164820) Create stream I0416 00:12:09.623642 7 log.go:172] (0xc002b369a0) (0xc002164820) Stream added, broadcasting: 5 I0416 00:12:09.624460 7 log.go:172] (0xc002b369a0) Reply frame received for 5 I0416 00:12:09.683049 7 log.go:172] (0xc002b369a0) Data frame received for 5 I0416 00:12:09.683073 7 log.go:172] (0xc002164820) (5) Data frame handling I0416 00:12:09.683091 7 log.go:172] (0xc002b369a0) Data frame received for 3 I0416 00:12:09.683101 7 log.go:172] (0xc002164780) (3) Data frame handling I0416 00:12:09.683109 7 log.go:172] (0xc002164780) (3) Data frame sent I0416 00:12:09.683120 7 log.go:172] (0xc002b369a0) Data frame received for 3 I0416 00:12:09.683127 7 log.go:172] (0xc002164780) (3) Data frame handling I0416 00:12:09.684356 7 log.go:172] (0xc002b369a0) Data frame received for 1 I0416 00:12:09.684386 7 log.go:172] (0xc00272d860) (1) Data frame handling I0416 00:12:09.684400 7 log.go:172] (0xc00272d860) (1) Data frame sent I0416 00:12:09.684417 7 log.go:172] (0xc002b369a0) (0xc00272d860) Stream removed, broadcasting: 1 I0416 00:12:09.684432 7 log.go:172] (0xc002b369a0) Go away received I0416 00:12:09.684532 7 log.go:172] (0xc002b369a0) (0xc00272d860) Stream removed, broadcasting: 1 I0416 00:12:09.684551 7 log.go:172] 
(0xc002b369a0) (0xc002164780) Stream removed, broadcasting: 3 I0416 00:12:09.684564 7 log.go:172] (0xc002b369a0) (0xc002164820) Stream removed, broadcasting: 5 Apr 16 00:12:09.684: INFO: Exec stderr: "" Apr 16 00:12:09.684: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-599 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 16 00:12:09.684: INFO: >>> kubeConfig: /root/.kube/config I0416 00:12:09.712643 7 log.go:172] (0xc002b37130) (0xc00272db80) Create stream I0416 00:12:09.712666 7 log.go:172] (0xc002b37130) (0xc00272db80) Stream added, broadcasting: 1 I0416 00:12:09.714587 7 log.go:172] (0xc002b37130) Reply frame received for 1 I0416 00:12:09.714637 7 log.go:172] (0xc002b37130) (0xc0017a9680) Create stream I0416 00:12:09.714652 7 log.go:172] (0xc002b37130) (0xc0017a9680) Stream added, broadcasting: 3 I0416 00:12:09.715540 7 log.go:172] (0xc002b37130) Reply frame received for 3 I0416 00:12:09.715583 7 log.go:172] (0xc002b37130) (0xc0017a97c0) Create stream I0416 00:12:09.715598 7 log.go:172] (0xc002b37130) (0xc0017a97c0) Stream added, broadcasting: 5 I0416 00:12:09.716420 7 log.go:172] (0xc002b37130) Reply frame received for 5 I0416 00:12:09.775911 7 log.go:172] (0xc002b37130) Data frame received for 3 I0416 00:12:09.775948 7 log.go:172] (0xc0017a9680) (3) Data frame handling I0416 00:12:09.775968 7 log.go:172] (0xc0017a9680) (3) Data frame sent I0416 00:12:09.775979 7 log.go:172] (0xc002b37130) Data frame received for 3 I0416 00:12:09.775990 7 log.go:172] (0xc0017a9680) (3) Data frame handling I0416 00:12:09.776030 7 log.go:172] (0xc002b37130) Data frame received for 5 I0416 00:12:09.776052 7 log.go:172] (0xc0017a97c0) (5) Data frame handling I0416 00:12:09.777438 7 log.go:172] (0xc002b37130) Data frame received for 1 I0416 00:12:09.777461 7 log.go:172] (0xc00272db80) (1) Data frame handling I0416 00:12:09.777477 7 log.go:172] (0xc00272db80) (1) Data 
frame sent I0416 00:12:09.777532 7 log.go:172] (0xc002b37130) (0xc00272db80) Stream removed, broadcasting: 1 I0416 00:12:09.777652 7 log.go:172] (0xc002b37130) (0xc00272db80) Stream removed, broadcasting: 1 I0416 00:12:09.777668 7 log.go:172] (0xc002b37130) (0xc0017a9680) Stream removed, broadcasting: 3 I0416 00:12:09.777875 7 log.go:172] (0xc002b37130) (0xc0017a97c0) Stream removed, broadcasting: 5 Apr 16 00:12:09.777: INFO: Exec stderr: "" Apr 16 00:12:09.777: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-599 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 16 00:12:09.777: INFO: >>> kubeConfig: /root/.kube/config I0416 00:12:09.777986 7 log.go:172] (0xc002b37130) Go away received I0416 00:12:09.813830 7 log.go:172] (0xc002b37760) (0xc00272dd60) Create stream I0416 00:12:09.813862 7 log.go:172] (0xc002b37760) (0xc00272dd60) Stream added, broadcasting: 1 I0416 00:12:09.815927 7 log.go:172] (0xc002b37760) Reply frame received for 1 I0416 00:12:09.815972 7 log.go:172] (0xc002b37760) (0xc001ee6460) Create stream I0416 00:12:09.815984 7 log.go:172] (0xc002b37760) (0xc001ee6460) Stream added, broadcasting: 3 I0416 00:12:09.816836 7 log.go:172] (0xc002b37760) Reply frame received for 3 I0416 00:12:09.816877 7 log.go:172] (0xc002b37760) (0xc0023d21e0) Create stream I0416 00:12:09.816891 7 log.go:172] (0xc002b37760) (0xc0023d21e0) Stream added, broadcasting: 5 I0416 00:12:09.817836 7 log.go:172] (0xc002b37760) Reply frame received for 5 I0416 00:12:09.872914 7 log.go:172] (0xc002b37760) Data frame received for 5 I0416 00:12:09.872950 7 log.go:172] (0xc0023d21e0) (5) Data frame handling I0416 00:12:09.872982 7 log.go:172] (0xc002b37760) Data frame received for 3 I0416 00:12:09.872996 7 log.go:172] (0xc001ee6460) (3) Data frame handling I0416 00:12:09.873009 7 log.go:172] (0xc001ee6460) (3) Data frame sent I0416 00:12:09.873020 7 log.go:172] 
(0xc002b37760) Data frame received for 3 I0416 00:12:09.873029 7 log.go:172] (0xc001ee6460) (3) Data frame handling I0416 00:12:09.874064 7 log.go:172] (0xc002b37760) Data frame received for 1 I0416 00:12:09.874091 7 log.go:172] (0xc00272dd60) (1) Data frame handling I0416 00:12:09.874099 7 log.go:172] (0xc00272dd60) (1) Data frame sent I0416 00:12:09.874108 7 log.go:172] (0xc002b37760) (0xc00272dd60) Stream removed, broadcasting: 1 I0416 00:12:09.874154 7 log.go:172] (0xc002b37760) Go away received I0416 00:12:09.874178 7 log.go:172] (0xc002b37760) (0xc00272dd60) Stream removed, broadcasting: 1 I0416 00:12:09.874281 7 log.go:172] Streams opened: 2, map[spdy.StreamId]*spdystream.Stream{0x3:(*spdystream.Stream)(0xc001ee6460), 0x5:(*spdystream.Stream)(0xc0023d21e0)} I0416 00:12:09.874321 7 log.go:172] (0xc002b37760) (0xc001ee6460) Stream removed, broadcasting: 3 I0416 00:12:09.874341 7 log.go:172] (0xc002b37760) (0xc0023d21e0) Stream removed, broadcasting: 5 Apr 16 00:12:09.874: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:12:09.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-599" for this suite. 
• [SLOW TEST:11.374 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":131,"skipped":2273,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:12:09.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
Apr 16 00:12:09.991: INFO: Waiting up to 5m0s for pod "pod-efcb015a-b682-4f37-8e98-580fbd2d63c4" in namespace "emptydir-7125" to be "Succeeded or Failed"
Apr 16 00:12:10.001: INFO: Pod "pod-efcb015a-b682-4f37-8e98-580fbd2d63c4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.031784ms
Apr 16 00:12:12.005: INFO: Pod "pod-efcb015a-b682-4f37-8e98-580fbd2d63c4": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.014138737s
Apr 16 00:12:14.010: INFO: Pod "pod-efcb015a-b682-4f37-8e98-580fbd2d63c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018708077s
STEP: Saw pod success
Apr 16 00:12:14.010: INFO: Pod "pod-efcb015a-b682-4f37-8e98-580fbd2d63c4" satisfied condition "Succeeded or Failed"
Apr 16 00:12:14.013: INFO: Trying to get logs from node latest-worker2 pod pod-efcb015a-b682-4f37-8e98-580fbd2d63c4 container test-container:
STEP: delete the pod
Apr 16 00:12:14.044: INFO: Waiting for pod pod-efcb015a-b682-4f37-8e98-580fbd2d63c4 to disappear
Apr 16 00:12:14.055: INFO: Pod pod-efcb015a-b682-4f37-8e98-580fbd2d63c4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:12:14.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7125" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":132,"skipped":2283,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:12:14.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 16 00:12:14.111: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a995a035-7627-4da7-a23c-b63b310add45" in namespace "downward-api-7530" to be "Succeeded or Failed"
Apr 16 00:12:14.118: INFO: Pod "downwardapi-volume-a995a035-7627-4da7-a23c-b63b310add45": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09326ms
Apr 16 00:12:16.122: INFO: Pod "downwardapi-volume-a995a035-7627-4da7-a23c-b63b310add45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010366437s
Apr 16 00:12:18.126: INFO: Pod "downwardapi-volume-a995a035-7627-4da7-a23c-b63b310add45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014436115s
STEP: Saw pod success
Apr 16 00:12:18.126: INFO: Pod "downwardapi-volume-a995a035-7627-4da7-a23c-b63b310add45" satisfied condition "Succeeded or Failed"
Apr 16 00:12:18.129: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-a995a035-7627-4da7-a23c-b63b310add45 container client-container:
STEP: delete the pod
Apr 16 00:12:18.143: INFO: Waiting for pod downwardapi-volume-a995a035-7627-4da7-a23c-b63b310add45 to disappear
Apr 16 00:12:18.193: INFO: Pod downwardapi-volume-a995a035-7627-4da7-a23c-b63b310add45 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:12:18.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7530" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":133,"skipped":2313,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller.
[Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:12:18.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:12:29.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3530" for this suite.
• [SLOW TEST:11.085 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller.
[Conformance]","total":275,"completed":134,"skipped":2321,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:12:29.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:12:45.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1769" for this suite.
• [SLOW TEST:16.374 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":275,"completed":135,"skipped":2342,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:12:45.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 16 00:12:54.338: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 16 00:12:54.346: INFO: Pod pod-with-poststart-exec-hook still exists Apr 16 00:12:56.346: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 16 00:12:56.350: INFO: Pod pod-with-poststart-exec-hook still exists Apr 16 00:12:58.346: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 16 00:12:58.350: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:12:58.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5777" for this suite. 
• [SLOW TEST:12.695 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":136,"skipped":2358,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:12:58.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 16 00:13:02.499: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- 
STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:13:02.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9658" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":137,"skipped":2367,"failed":0} SS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:13:02.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating replication controller my-hostname-basic-53d6016d-b156-4cb1-a901-0ab4a6a9117f Apr 16 00:13:02.631: INFO: Pod name my-hostname-basic-53d6016d-b156-4cb1-a901-0ab4a6a9117f: Found 0 pods out of 1 Apr 16 00:13:07.639: INFO: Pod name my-hostname-basic-53d6016d-b156-4cb1-a901-0ab4a6a9117f: Found 1 pods out of 1 Apr 16 00:13:07.639: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-53d6016d-b156-4cb1-a901-0ab4a6a9117f" are running Apr 16 00:13:07.651: INFO: Pod "my-hostname-basic-53d6016d-b156-4cb1-a901-0ab4a6a9117f-b6gnp" is running (conditions: 
[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-16 00:13:02 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-16 00:13:06 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-16 00:13:06 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-16 00:13:02 +0000 UTC Reason: Message:}]) Apr 16 00:13:07.651: INFO: Trying to dial the pod Apr 16 00:13:12.663: INFO: Controller my-hostname-basic-53d6016d-b156-4cb1-a901-0ab4a6a9117f: Got expected result from replica 1 [my-hostname-basic-53d6016d-b156-4cb1-a901-0ab4a6a9117f-b6gnp]: "my-hostname-basic-53d6016d-b156-4cb1-a901-0ab4a6a9117f-b6gnp", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:13:12.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2884" for this suite. 
• [SLOW TEST:10.126 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":138,"skipped":2369,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:13:12.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-e3263fb7-a03b-4433-8c6d-a7b74ef48ca0 STEP: Creating a pod to test consume secrets Apr 16 00:13:12.790: INFO: Waiting up to 5m0s for pod "pod-secrets-a049db8b-eb90-409c-8554-a2dc0f678df3" in namespace "secrets-6210" to be "Succeeded or Failed" Apr 16 00:13:12.808: INFO: Pod "pod-secrets-a049db8b-eb90-409c-8554-a2dc0f678df3": Phase="Pending", Reason="", readiness=false. Elapsed: 18.41077ms Apr 16 00:13:14.841: INFO: Pod "pod-secrets-a049db8b-eb90-409c-8554-a2dc0f678df3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.051357317s Apr 16 00:13:16.845: INFO: Pod "pod-secrets-a049db8b-eb90-409c-8554-a2dc0f678df3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055713006s STEP: Saw pod success Apr 16 00:13:16.845: INFO: Pod "pod-secrets-a049db8b-eb90-409c-8554-a2dc0f678df3" satisfied condition "Succeeded or Failed" Apr 16 00:13:16.849: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-a049db8b-eb90-409c-8554-a2dc0f678df3 container secret-env-test: STEP: delete the pod Apr 16 00:13:16.868: INFO: Waiting for pod pod-secrets-a049db8b-eb90-409c-8554-a2dc0f678df3 to disappear Apr 16 00:13:16.891: INFO: Pod pod-secrets-a049db8b-eb90-409c-8554-a2dc0f678df3 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:13:16.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6210" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":139,"skipped":2393,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:13:16.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready 
and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:14:17.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4233" for this suite. • [SLOW TEST:60.098 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":140,"skipped":2413,"failed":0} SSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:14:17.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 16 00:14:17.074: INFO: Waiting up to 5m0s for pod "downward-api-cee67c7c-3fc6-4236-8960-60fb174e8345" 
in namespace "downward-api-8470" to be "Succeeded or Failed" Apr 16 00:14:17.078: INFO: Pod "downward-api-cee67c7c-3fc6-4236-8960-60fb174e8345": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05995ms Apr 16 00:14:19.082: INFO: Pod "downward-api-cee67c7c-3fc6-4236-8960-60fb174e8345": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008068505s Apr 16 00:14:21.087: INFO: Pod "downward-api-cee67c7c-3fc6-4236-8960-60fb174e8345": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012673365s STEP: Saw pod success Apr 16 00:14:21.087: INFO: Pod "downward-api-cee67c7c-3fc6-4236-8960-60fb174e8345" satisfied condition "Succeeded or Failed" Apr 16 00:14:21.090: INFO: Trying to get logs from node latest-worker pod downward-api-cee67c7c-3fc6-4236-8960-60fb174e8345 container dapi-container: STEP: delete the pod Apr 16 00:14:21.113: INFO: Waiting for pod downward-api-cee67c7c-3fc6-4236-8960-60fb174e8345 to disappear Apr 16 00:14:21.118: INFO: Pod downward-api-cee67c7c-3fc6-4236-8960-60fb174e8345 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:14:21.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8470" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":141,"skipped":2420,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:14:21.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating api versions Apr 16 00:14:21.194: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config api-versions' Apr 16 00:14:21.396: INFO: stderr: "" Apr 16 00:14:21.396: INFO: stdout: 
"admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:14:21.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4884" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":275,"completed":142,"skipped":2435,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:14:21.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:14:28.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1878" for this suite. • [SLOW TEST:7.117 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":275,"completed":143,"skipped":2439,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:14:28.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-50025c8c-0d52-4ec2-a49f-dbeeb5bac25f STEP: Creating a pod to test consume configMaps Apr 16 00:14:28.588: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c87019aa-039e-45f2-aa5a-9caeec539a8a" in namespace "projected-2086" to be "Succeeded or Failed" Apr 16 00:14:28.592: INFO: Pod "pod-projected-configmaps-c87019aa-039e-45f2-aa5a-9caeec539a8a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.563324ms Apr 16 00:14:30.605: INFO: Pod "pod-projected-configmaps-c87019aa-039e-45f2-aa5a-9caeec539a8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017415561s Apr 16 00:14:32.608: INFO: Pod "pod-projected-configmaps-c87019aa-039e-45f2-aa5a-9caeec539a8a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.020229492s STEP: Saw pod success Apr 16 00:14:32.608: INFO: Pod "pod-projected-configmaps-c87019aa-039e-45f2-aa5a-9caeec539a8a" satisfied condition "Succeeded or Failed" Apr 16 00:14:32.610: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-c87019aa-039e-45f2-aa5a-9caeec539a8a container projected-configmap-volume-test: STEP: delete the pod Apr 16 00:14:32.624: INFO: Waiting for pod pod-projected-configmaps-c87019aa-039e-45f2-aa5a-9caeec539a8a to disappear Apr 16 00:14:32.638: INFO: Pod pod-projected-configmaps-c87019aa-039e-45f2-aa5a-9caeec539a8a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:14:32.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2086" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":144,"skipped":2441,"failed":0} S ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:14:32.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Apr 16 00:14:37.305: INFO: Successfully updated pod "annotationupdate1fab731e-edd5-48ee-a9b0-c6a56f893342" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:14:41.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9367" for this suite. • [SLOW TEST:8.642 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":145,"skipped":2442,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:14:41.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating server pod server in namespace prestop-8756 
STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-8756 STEP: Deleting pre-stop pod Apr 16 00:14:54.467: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:14:54.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-8756" for this suite. • [SLOW TEST:13.160 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":275,"completed":146,"skipped":2451,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:14:54.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 16 00:14:54.576: INFO: Waiting up to 5m0s for pod "downwardapi-volume-022812da-c4ef-4906-824a-b4d286b1aa52" in namespace "projected-2482" to be "Succeeded or Failed" Apr 16 00:14:54.589: INFO: Pod "downwardapi-volume-022812da-c4ef-4906-824a-b4d286b1aa52": Phase="Pending", Reason="", readiness=false. Elapsed: 13.531041ms Apr 16 00:14:56.593: INFO: Pod "downwardapi-volume-022812da-c4ef-4906-824a-b4d286b1aa52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017681574s Apr 16 00:14:58.598: INFO: Pod "downwardapi-volume-022812da-c4ef-4906-824a-b4d286b1aa52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021766052s STEP: Saw pod success Apr 16 00:14:58.598: INFO: Pod "downwardapi-volume-022812da-c4ef-4906-824a-b4d286b1aa52" satisfied condition "Succeeded or Failed" Apr 16 00:14:58.600: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-022812da-c4ef-4906-824a-b4d286b1aa52 container client-container: STEP: delete the pod Apr 16 00:14:58.622: INFO: Waiting for pod downwardapi-volume-022812da-c4ef-4906-824a-b4d286b1aa52 to disappear Apr 16 00:14:58.668: INFO: Pod downwardapi-volume-022812da-c4ef-4906-824a-b4d286b1aa52 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:14:58.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2482" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":147,"skipped":2468,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:14:58.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:15:02.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1783" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":148,"skipped":2493,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:15:02.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-4332 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 16 00:15:02.919: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 16 00:15:02.954: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 16 00:15:04.957: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 16 00:15:06.958: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 16 00:15:08.958: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 16 00:15:10.958: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 16 00:15:12.958: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 16 00:15:14.957: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 16 00:15:16.958: INFO: The status of Pod 
netserver-0 is Running (Ready = false) Apr 16 00:15:18.958: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 16 00:15:20.958: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 16 00:15:20.963: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 16 00:15:24.983: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.34:8080/dial?request=hostname&protocol=udp&host=10.244.2.33&port=8081&tries=1'] Namespace:pod-network-test-4332 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 16 00:15:24.983: INFO: >>> kubeConfig: /root/.kube/config I0416 00:15:25.008713 7 log.go:172] (0xc002d28bb0) (0xc001167ae0) Create stream I0416 00:15:25.008738 7 log.go:172] (0xc002d28bb0) (0xc001167ae0) Stream added, broadcasting: 1 I0416 00:15:25.010482 7 log.go:172] (0xc002d28bb0) Reply frame received for 1 I0416 00:15:25.010512 7 log.go:172] (0xc002d28bb0) (0xc00229a000) Create stream I0416 00:15:25.010522 7 log.go:172] (0xc002d28bb0) (0xc00229a000) Stream added, broadcasting: 3 I0416 00:15:25.011281 7 log.go:172] (0xc002d28bb0) Reply frame received for 3 I0416 00:15:25.011308 7 log.go:172] (0xc002d28bb0) (0xc00222c000) Create stream I0416 00:15:25.011329 7 log.go:172] (0xc002d28bb0) (0xc00222c000) Stream added, broadcasting: 5 I0416 00:15:25.012114 7 log.go:172] (0xc002d28bb0) Reply frame received for 5 I0416 00:15:25.087281 7 log.go:172] (0xc002d28bb0) Data frame received for 3 I0416 00:15:25.087310 7 log.go:172] (0xc00229a000) (3) Data frame handling I0416 00:15:25.087333 7 log.go:172] (0xc00229a000) (3) Data frame sent I0416 00:15:25.087850 7 log.go:172] (0xc002d28bb0) Data frame received for 5 I0416 00:15:25.087943 7 log.go:172] (0xc00222c000) (5) Data frame handling I0416 00:15:25.088220 7 log.go:172] (0xc002d28bb0) Data frame received for 3 I0416 00:15:25.088242 7 log.go:172] (0xc00229a000) (3) Data frame handling I0416 
00:15:25.089978 7 log.go:172] (0xc002d28bb0) Data frame received for 1 I0416 00:15:25.090054 7 log.go:172] (0xc001167ae0) (1) Data frame handling I0416 00:15:25.090095 7 log.go:172] (0xc001167ae0) (1) Data frame sent I0416 00:15:25.090131 7 log.go:172] (0xc002d28bb0) (0xc001167ae0) Stream removed, broadcasting: 1 I0416 00:15:25.090160 7 log.go:172] (0xc002d28bb0) Go away received I0416 00:15:25.090324 7 log.go:172] (0xc002d28bb0) (0xc001167ae0) Stream removed, broadcasting: 1 I0416 00:15:25.090362 7 log.go:172] (0xc002d28bb0) (0xc00229a000) Stream removed, broadcasting: 3 I0416 00:15:25.090394 7 log.go:172] (0xc002d28bb0) (0xc00222c000) Stream removed, broadcasting: 5 Apr 16 00:15:25.090: INFO: Waiting for responses: map[] Apr 16 00:15:25.093: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.34:8080/dial?request=hostname&protocol=udp&host=10.244.1.3&port=8081&tries=1'] Namespace:pod-network-test-4332 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 16 00:15:25.093: INFO: >>> kubeConfig: /root/.kube/config I0416 00:15:25.129712 7 log.go:172] (0xc002ce4630) (0xc002164500) Create stream I0416 00:15:25.129743 7 log.go:172] (0xc002ce4630) (0xc002164500) Stream added, broadcasting: 1 I0416 00:15:25.131542 7 log.go:172] (0xc002ce4630) Reply frame received for 1 I0416 00:15:25.131567 7 log.go:172] (0xc002ce4630) (0xc00222c1e0) Create stream I0416 00:15:25.131576 7 log.go:172] (0xc002ce4630) (0xc00222c1e0) Stream added, broadcasting: 3 I0416 00:15:25.132665 7 log.go:172] (0xc002ce4630) Reply frame received for 3 I0416 00:15:25.132711 7 log.go:172] (0xc002ce4630) (0xc0010daa00) Create stream I0416 00:15:25.132727 7 log.go:172] (0xc002ce4630) (0xc0010daa00) Stream added, broadcasting: 5 I0416 00:15:25.133650 7 log.go:172] (0xc002ce4630) Reply frame received for 5 I0416 00:15:25.199501 7 log.go:172] (0xc002ce4630) Data frame received for 3 I0416 00:15:25.199540 7 log.go:172] 
(0xc00222c1e0) (3) Data frame handling I0416 00:15:25.199565 7 log.go:172] (0xc00222c1e0) (3) Data frame sent I0416 00:15:25.200100 7 log.go:172] (0xc002ce4630) Data frame received for 3 I0416 00:15:25.200132 7 log.go:172] (0xc00222c1e0) (3) Data frame handling I0416 00:15:25.200158 7 log.go:172] (0xc002ce4630) Data frame received for 5 I0416 00:15:25.200171 7 log.go:172] (0xc0010daa00) (5) Data frame handling I0416 00:15:25.202444 7 log.go:172] (0xc002ce4630) Data frame received for 1 I0416 00:15:25.202489 7 log.go:172] (0xc002164500) (1) Data frame handling I0416 00:15:25.202540 7 log.go:172] (0xc002164500) (1) Data frame sent I0416 00:15:25.202569 7 log.go:172] (0xc002ce4630) (0xc002164500) Stream removed, broadcasting: 1 I0416 00:15:25.202626 7 log.go:172] (0xc002ce4630) Go away received I0416 00:15:25.202766 7 log.go:172] (0xc002ce4630) (0xc002164500) Stream removed, broadcasting: 1 I0416 00:15:25.202793 7 log.go:172] (0xc002ce4630) (0xc00222c1e0) Stream removed, broadcasting: 3 I0416 00:15:25.202810 7 log.go:172] (0xc002ce4630) (0xc0010daa00) Stream removed, broadcasting: 5 Apr 16 00:15:25.202: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:15:25.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4332" for this suite. 
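The intra-pod UDP check logged above runs `curl` from the test container against the webserver pod's `/dial` endpoint. A readable sketch of the same request, with the pod IPs copied from this particular run (they will differ per cluster, and the `curl` itself only works from inside the cluster network):

```shell
# Sketch of the intra-pod UDP probe from the ExecWithOptions lines above.
# The IPs are the pod IPs observed in this run and are cluster-specific.
TEST_POD_IP="10.244.2.34"    # test-container-pod, which issues the probe
TARGET_POD_IP="10.244.2.33"  # netserver pod being dialed
URL="http://${TEST_POD_IP}:8080/dial?request=hostname&protocol=udp&host=${TARGET_POD_IP}&port=8081&tries=1"
echo "$URL"
# Inside the cluster the test executes:
#   curl -g -q -s "$URL"
# and expects a JSON response listing the hostname(s) that answered over UDP.
```

"Waiting for responses: map[]" in the log means every expected responder was heard from, so the map of outstanding hostnames is empty.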
• [SLOW TEST:22.375 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":149,"skipped":2520,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:15:25.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0416 00:15:36.964135 7 metrics_grabber.go:84] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 16 00:15:36.964: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:15:36.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1715" for this suite. 
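The garbage-collector test above gives half the pods two owner references and then deletes only one owner; a pod that still has a valid owner must not be collected. A minimal sketch of that invariant over a hand-written metadata snippet (the JSON below is illustrative, not output captured from the run):

```shell
# Hypothetical pod metadata: a dependent with two owners, mirroring the test
# above (one owner being deleted, one valid owner that should keep the pod).
POD_JSON='{"metadata":{"ownerReferences":[{"kind":"ReplicationController","name":"simpletest-rc-to-be-deleted"},{"kind":"ReplicationController","name":"simpletest-rc-to-stay"}]}}'
# After simpletest-rc-to-be-deleted is removed, the surviving reference is
# what the garbage collector sees; a remaining valid owner means "keep":
echo "$POD_JSON" | grep -q '"name":"simpletest-rc-to-stay"' \
  && echo "valid owner present; dependent is kept"
```

The real check is done by the GC controller against the API server's object graph; this only illustrates why the dependents survive the first owner's deletion.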
• [SLOW TEST:11.817 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":150,"skipped":2601,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:15:37.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 16 00:15:47.466: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 16 00:15:47.501: INFO: Pod pod-with-prestop-exec-hook still exists Apr 16 00:15:49.501: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 16 00:15:49.506: INFO: Pod pod-with-prestop-exec-hook still exists Apr 16 00:15:51.501: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 16 00:15:51.505: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:15:51.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3225" for this suite. • [SLOW TEST:14.490 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":151,"skipped":2610,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:15:51.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:16:04.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7710" for this suite. • [SLOW TEST:13.199 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":275,"completed":152,"skipped":2650,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:16:04.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:16:20.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5457" for this suite. • [SLOW TEST:16.114 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":275,"completed":153,"skipped":2653,"failed":0} SSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:16:20.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-1622.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-1622.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-1622.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@dns-test-service-2.dns-1622.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1622.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-1622.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-1622.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-1622.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-1622.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1622.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 16 00:16:26.969: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:26.972: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:26.974: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:26.976: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:26.985: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:26.987: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local from 
pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:26.990: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:26.992: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:26.997: INFO: Lookups using dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1622.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1622.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local jessie_udp@dns-test-service-2.dns-1622.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1622.svc.cluster.local] Apr 16 00:16:32.002: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:32.006: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:32.010: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1622.svc.cluster.local from 
pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:32.014: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:32.024: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:32.028: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:32.031: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:32.034: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:32.041: INFO: Lookups using dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1622.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1622.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local jessie_udp@dns-test-service-2.dns-1622.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1622.svc.cluster.local] Apr 16 00:16:37.002: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:37.006: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:37.009: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:37.012: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:37.022: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:37.025: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:37.028: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1622.svc.cluster.local from pod 
dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:37.031: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:37.038: INFO: Lookups using dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1622.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1622.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local jessie_udp@dns-test-service-2.dns-1622.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1622.svc.cluster.local] Apr 16 00:16:42.003: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:42.006: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:42.010: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:42.012: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1622.svc.cluster.local from pod 
dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:42.020: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:42.023: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:42.026: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:42.029: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:42.035: INFO: Lookups using dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1622.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1622.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local jessie_udp@dns-test-service-2.dns-1622.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1622.svc.cluster.local] Apr 16 00:16:47.002: INFO: Unable to read 
wheezy_udp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:47.005: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:47.009: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:47.012: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:47.021: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:47.024: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:47.027: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:47.030: INFO: Unable to read 
jessie_tcp@dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:47.036: INFO: Lookups using dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1622.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1622.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local jessie_udp@dns-test-service-2.dns-1622.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1622.svc.cluster.local] Apr 16 00:16:52.002: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:52.005: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:52.008: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:52.010: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:52.018: INFO: Unable to read 
jessie_udp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:52.021: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:52.024: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:52.026: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1622.svc.cluster.local from pod dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590: the server could not find the requested resource (get pods dns-test-a2fa356c-34f1-432d-823e-1f47e410b590) Apr 16 00:16:52.031: INFO: Lookups using dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1622.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1622.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1622.svc.cluster.local jessie_udp@dns-test-service-2.dns-1622.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1622.svc.cluster.local] Apr 16 00:16:57.037: INFO: DNS probes using dns-1622/dns-test-a2fa356c-34f1-432d-823e-1f47e410b590 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 
00:16:57.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1622" for this suite. • [SLOW TEST:36.713 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":154,"skipped":2659,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:16:57.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 16 00:16:57.699: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Apr 16 00:16:57.780: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 16 00:17:02.787: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 16 00:17:02.787: INFO: Creating deployment "test-rolling-update-deployment" Apr 16 00:17:02.791: INFO: Ensuring deployment 
"test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Apr 16 00:17:02.796: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 16 00:17:04.802: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 16 00:17:04.804: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593022, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593022, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593022, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593022, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-664dd8fc7f\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 16 00:17:06.808: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 16 00:17:06.816: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-9231 /apis/apps/v1/namespaces/deployment-9231/deployments/test-rolling-update-deployment cd4960a7-0dec-4a1a-83ad-28f185de93eb 8405068 1 2020-04-16 00:17:02 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] 
[]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00416efa8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-16 00:17:02 +0000 UTC,LastTransitionTime:2020-04-16 00:17:02 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-664dd8fc7f" has successfully progressed.,LastUpdateTime:2020-04-16 00:17:05 +0000 UTC,LastTransitionTime:2020-04-16 00:17:02 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 16 00:17:06.820: INFO: New ReplicaSet "test-rolling-update-deployment-664dd8fc7f" of Deployment "test-rolling-update-deployment": 
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f deployment-9231 /apis/apps/v1/namespaces/deployment-9231/replicasets/test-rolling-update-deployment-664dd8fc7f 5be3bca5-d242-4e8a-bdf3-b026bff1899c 8405057 1 2020-04-16 00:17:02 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment cd4960a7-0dec-4a1a-83ad-28f185de93eb 0xc00416f4c7 0xc00416f4c8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 664dd8fc7f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00416f558 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 16 00:17:06.820: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 16 00:17:06.820: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-9231 /apis/apps/v1/namespaces/deployment-9231/replicasets/test-rolling-update-controller 
5585bf0b-f9a5-403e-84a9-332633d58a96 8405066 2 2020-04-16 00:16:57 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment cd4960a7-0dec-4a1a-83ad-28f185de93eb 0xc00416f3f7 0xc00416f3f8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00416f458 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 16 00:17:06.823: INFO: Pod "test-rolling-update-deployment-664dd8fc7f-sgl49" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f-sgl49 test-rolling-update-deployment-664dd8fc7f- deployment-9231 /api/v1/namespaces/deployment-9231/pods/test-rolling-update-deployment-664dd8fc7f-sgl49 7244c5a9-3d48-43d7-b871-108b1133d3f8 8405056 0 2020-04-16 00:17:02 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-664dd8fc7f 5be3bca5-d242-4e8a-bdf3-b026bff1899c 0xc00416fa47 0xc00416fa48}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qppzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qppzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qppzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePull
Secrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-16 00:17:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-16 00:17:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-16 00:17:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-16 00:17:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.11,StartTime:2020-04-16 00:17:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-16 00:17:05 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://7dd3862e13ba87e09fe358d1e8df7984600140240e1f075a59bfcdc8446991e6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.11,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:17:06.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9231" for this suite. • [SLOW TEST:9.282 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":155,"skipped":2720,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:17:06.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service 
account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod test-webserver-f231bc1d-b7f4-44c6-9b54-0467666d6b89 in namespace container-probe-3780 Apr 16 00:17:10.888: INFO: Started pod test-webserver-f231bc1d-b7f4-44c6-9b54-0467666d6b89 in namespace container-probe-3780 STEP: checking the pod's current state and verifying that restartCount is present Apr 16 00:17:10.891: INFO: Initial restart count of pod test-webserver-f231bc1d-b7f4-44c6-9b54-0467666d6b89 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:21:11.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3780" for this suite. 
• [SLOW TEST:244.746 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":156,"skipped":2730,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:21:11.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 16 00:21:11.626: INFO: Waiting up to 5m0s for pod "pod-a9694470-87cd-4751-905b-20ad9e367bf7" in namespace "emptydir-3668" to be "Succeeded or Failed" Apr 16 00:21:11.630: INFO: Pod "pod-a9694470-87cd-4751-905b-20ad9e367bf7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.906518ms Apr 16 00:21:14.560: INFO: Pod "pod-a9694470-87cd-4751-905b-20ad9e367bf7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.933737609s Apr 16 00:21:16.565: INFO: Pod "pod-a9694470-87cd-4751-905b-20ad9e367bf7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.938103178s STEP: Saw pod success Apr 16 00:21:16.565: INFO: Pod "pod-a9694470-87cd-4751-905b-20ad9e367bf7" satisfied condition "Succeeded or Failed" Apr 16 00:21:16.568: INFO: Trying to get logs from node latest-worker pod pod-a9694470-87cd-4751-905b-20ad9e367bf7 container test-container: STEP: delete the pod Apr 16 00:21:16.619: INFO: Waiting for pod pod-a9694470-87cd-4751-905b-20ad9e367bf7 to disappear Apr 16 00:21:16.631: INFO: Pod pod-a9694470-87cd-4751-905b-20ad9e367bf7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:21:16.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3668" for this suite. • [SLOW TEST:5.060 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":157,"skipped":2759,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:21:16.639: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:21:46.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7687" for this suite. 
• [SLOW TEST:29.601 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":158,"skipped":2786,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:21:46.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Apr 16 00:21:50.342: INFO: &Pod{ObjectMeta:{send-events-38d6fd97-5423-40b4-8ebe-6aa440238f11 events-5845 /api/v1/namespaces/events-5845/pods/send-events-38d6fd97-5423-40b4-8ebe-6aa440238f11 
db346646-4e0e-41b3-9d0e-781239ca458a 8406031 0 2020-04-16 00:21:46 +0000 UTC map[name:foo time:322730324] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xbhcs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xbhcs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xbhcs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecr
ets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-16 00:21:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-16 00:21:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-16 00:21:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-16 00:21:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.47,StartTime:2020-04-16 00:21:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-16 00:21:48 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://cc621f36f507754435051e07e251b31c767c52f211115e601537bf9cce6aa4de,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.47,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Apr 16 00:21:52.346: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Apr 16 00:21:54.351: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:21:54.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5845" for this suite. 
• [SLOW TEST:8.127 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":275,"completed":159,"skipped":2825,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:21:54.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 16 00:21:54.435: INFO: Waiting up to 5m0s for pod "pod-60934e96-a125-4ec1-bd5a-0acab881e19f" in namespace "emptydir-3995" to be "Succeeded or Failed" Apr 16 00:21:54.450: INFO: Pod "pod-60934e96-a125-4ec1-bd5a-0acab881e19f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.963404ms Apr 16 00:21:56.454: INFO: Pod "pod-60934e96-a125-4ec1-bd5a-0acab881e19f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019032534s Apr 16 00:21:58.458: INFO: Pod "pod-60934e96-a125-4ec1-bd5a-0acab881e19f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023357552s STEP: Saw pod success Apr 16 00:21:58.458: INFO: Pod "pod-60934e96-a125-4ec1-bd5a-0acab881e19f" satisfied condition "Succeeded or Failed" Apr 16 00:21:58.462: INFO: Trying to get logs from node latest-worker2 pod pod-60934e96-a125-4ec1-bd5a-0acab881e19f container test-container: STEP: delete the pod Apr 16 00:21:58.617: INFO: Waiting for pod pod-60934e96-a125-4ec1-bd5a-0acab881e19f to disappear Apr 16 00:21:58.631: INFO: Pod pod-60934e96-a125-4ec1-bd5a-0acab881e19f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:21:58.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3995" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":160,"skipped":2830,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:21:58.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206 STEP: creating the pod Apr 16 00:21:58.686: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - 
--namespace=kubectl-6516' Apr 16 00:22:01.230: INFO: stderr: "" Apr 16 00:22:01.230: INFO: stdout: "pod/pause created\n" Apr 16 00:22:01.230: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Apr 16 00:22:01.230: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6516" to be "running and ready" Apr 16 00:22:01.237: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.315073ms Apr 16 00:22:03.240: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009751352s Apr 16 00:22:05.244: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.013930371s Apr 16 00:22:05.244: INFO: Pod "pause" satisfied condition "running and ready" Apr 16 00:22:05.244: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: adding the label testing-label with value testing-label-value to a pod Apr 16 00:22:05.245: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-6516' Apr 16 00:22:05.348: INFO: stderr: "" Apr 16 00:22:05.348: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Apr 16 00:22:05.348: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6516' Apr 16 00:22:05.441: INFO: stderr: "" Apr 16 00:22:05.441: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Apr 16 00:22:05.442: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label- 
--namespace=kubectl-6516' Apr 16 00:22:05.548: INFO: stderr: "" Apr 16 00:22:05.548: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Apr 16 00:22:05.549: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6516' Apr 16 00:22:05.662: INFO: stderr: "" Apr 16 00:22:05.662: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213 STEP: using delete to clean up resources Apr 16 00:22:05.662: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6516' Apr 16 00:22:05.781: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 16 00:22:05.781: INFO: stdout: "pod \"pause\" force deleted\n" Apr 16 00:22:05.781: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-6516' Apr 16 00:22:05.990: INFO: stderr: "No resources found in kubectl-6516 namespace.\n" Apr 16 00:22:05.990: INFO: stdout: "" Apr 16 00:22:05.990: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-6516 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 16 00:22:06.084: INFO: stderr: "" Apr 16 00:22:06.084: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:22:06.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6516" for this suite. 
• [SLOW TEST:7.451 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":275,"completed":161,"skipped":2833,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:22:06.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC Apr 16 00:22:06.216: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3916' Apr 16 00:22:06.652: INFO: stderr: "" Apr 16 00:22:06.652: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Apr 16 00:22:07.656: INFO: Selector matched 1 pods for map[app:agnhost] Apr 16 00:22:07.656: INFO: Found 0 / 1 Apr 16 00:22:08.656: INFO: Selector matched 1 pods for map[app:agnhost] Apr 16 00:22:08.656: INFO: Found 0 / 1 Apr 16 00:22:09.655: INFO: Selector matched 1 pods for map[app:agnhost] Apr 16 00:22:09.655: INFO: Found 1 / 1 Apr 16 00:22:09.655: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Apr 16 00:22:09.668: INFO: Selector matched 1 pods for map[app:agnhost] Apr 16 00:22:09.668: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 16 00:22:09.668: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config patch pod agnhost-master-d9sx7 --namespace=kubectl-3916 -p {"metadata":{"annotations":{"x":"y"}}}' Apr 16 00:22:09.763: INFO: stderr: "" Apr 16 00:22:09.763: INFO: stdout: "pod/agnhost-master-d9sx7 patched\n" STEP: checking annotations Apr 16 00:22:09.765: INFO: Selector matched 1 pods for map[app:agnhost] Apr 16 00:22:09.765: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:22:09.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3916" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":275,"completed":162,"skipped":2834,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:22:09.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:22:13.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6129" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":163,"skipped":2851,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:22:13.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-05330b49-62f7-49bb-82e6-8c05a2684f48 STEP: Creating a pod to test consume configMaps Apr 16 00:22:13.974: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-33909c3a-4801-40d5-bc61-1ef5e8b0c14e" in namespace "projected-8808" to be "Succeeded or Failed" Apr 16 00:22:13.989: INFO: Pod "pod-projected-configmaps-33909c3a-4801-40d5-bc61-1ef5e8b0c14e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.997164ms Apr 16 00:22:15.994: INFO: Pod "pod-projected-configmaps-33909c3a-4801-40d5-bc61-1ef5e8b0c14e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019253332s Apr 16 00:22:17.997: INFO: Pod "pod-projected-configmaps-33909c3a-4801-40d5-bc61-1ef5e8b0c14e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.022804705s STEP: Saw pod success Apr 16 00:22:17.997: INFO: Pod "pod-projected-configmaps-33909c3a-4801-40d5-bc61-1ef5e8b0c14e" satisfied condition "Succeeded or Failed" Apr 16 00:22:18.001: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-33909c3a-4801-40d5-bc61-1ef5e8b0c14e container projected-configmap-volume-test: STEP: delete the pod Apr 16 00:22:18.091: INFO: Waiting for pod pod-projected-configmaps-33909c3a-4801-40d5-bc61-1ef5e8b0c14e to disappear Apr 16 00:22:18.140: INFO: Pod pod-projected-configmaps-33909c3a-4801-40d5-bc61-1ef5e8b0c14e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:22:18.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8808" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":164,"skipped":2861,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:22:18.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Apr 
16 00:22:18.812: INFO: Pod name wrapped-volume-race-915aba13-5034-44f3-b540-bc31da6ffad6: Found 0 pods out of 5 Apr 16 00:22:23.821: INFO: Pod name wrapped-volume-race-915aba13-5034-44f3-b540-bc31da6ffad6: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-915aba13-5034-44f3-b540-bc31da6ffad6 in namespace emptydir-wrapper-5755, will wait for the garbage collector to delete the pods Apr 16 00:22:37.915: INFO: Deleting ReplicationController wrapped-volume-race-915aba13-5034-44f3-b540-bc31da6ffad6 took: 8.385095ms Apr 16 00:22:38.316: INFO: Terminating ReplicationController wrapped-volume-race-915aba13-5034-44f3-b540-bc31da6ffad6 pods took: 400.273938ms STEP: Creating RC which spawns configmap-volume pods Apr 16 00:22:45.154: INFO: Pod name wrapped-volume-race-b4c77b5b-70ad-4342-ae05-71b7ef54a74e: Found 1 pods out of 5 Apr 16 00:22:50.164: INFO: Pod name wrapped-volume-race-b4c77b5b-70ad-4342-ae05-71b7ef54a74e: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-b4c77b5b-70ad-4342-ae05-71b7ef54a74e in namespace emptydir-wrapper-5755, will wait for the garbage collector to delete the pods Apr 16 00:23:02.248: INFO: Deleting ReplicationController wrapped-volume-race-b4c77b5b-70ad-4342-ae05-71b7ef54a74e took: 10.326601ms Apr 16 00:23:02.548: INFO: Terminating ReplicationController wrapped-volume-race-b4c77b5b-70ad-4342-ae05-71b7ef54a74e pods took: 300.281781ms STEP: Creating RC which spawns configmap-volume pods Apr 16 00:23:12.883: INFO: Pod name wrapped-volume-race-3da8e0e3-1e78-47d3-9856-b06ee2adaddc: Found 0 pods out of 5 Apr 16 00:23:17.889: INFO: Pod name wrapped-volume-race-3da8e0e3-1e78-47d3-9856-b06ee2adaddc: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-3da8e0e3-1e78-47d3-9856-b06ee2adaddc in namespace emptydir-wrapper-5755, will wait for the garbage collector to delete 
the pods Apr 16 00:23:32.180: INFO: Deleting ReplicationController wrapped-volume-race-3da8e0e3-1e78-47d3-9856-b06ee2adaddc took: 6.558461ms Apr 16 00:23:32.581: INFO: Terminating ReplicationController wrapped-volume-race-3da8e0e3-1e78-47d3-9856-b06ee2adaddc pods took: 400.332423ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:23:43.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5755" for this suite. • [SLOW TEST:85.423 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":165,"skipped":2868,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:23:43.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory 
limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 16 00:23:43.638: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3a67e362-1a1f-49cd-8bac-c7fea1bf87ea" in namespace "projected-7266" to be "Succeeded or Failed" Apr 16 00:23:43.670: INFO: Pod "downwardapi-volume-3a67e362-1a1f-49cd-8bac-c7fea1bf87ea": Phase="Pending", Reason="", readiness=false. Elapsed: 31.453045ms Apr 16 00:23:45.678: INFO: Pod "downwardapi-volume-3a67e362-1a1f-49cd-8bac-c7fea1bf87ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039977172s Apr 16 00:23:47.682: INFO: Pod "downwardapi-volume-3a67e362-1a1f-49cd-8bac-c7fea1bf87ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043583688s STEP: Saw pod success Apr 16 00:23:47.682: INFO: Pod "downwardapi-volume-3a67e362-1a1f-49cd-8bac-c7fea1bf87ea" satisfied condition "Succeeded or Failed" Apr 16 00:23:47.685: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-3a67e362-1a1f-49cd-8bac-c7fea1bf87ea container client-container: STEP: delete the pod Apr 16 00:23:47.717: INFO: Waiting for pod downwardapi-volume-3a67e362-1a1f-49cd-8bac-c7fea1bf87ea to disappear Apr 16 00:23:47.721: INFO: Pod downwardapi-volume-3a67e362-1a1f-49cd-8bac-c7fea1bf87ea no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:23:47.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7266" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":166,"skipped":2879,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:23:47.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-1774 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-1774 I0416 00:23:47.895949 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-1774, replica count: 2 I0416 00:23:50.946400 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0416 00:23:53.946682 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 16 00:23:53.946: INFO: Creating new exec pod Apr 16 00:23:58.965: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config exec --namespace=services-1774 execpod882hk -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 16 00:23:59.206: INFO: stderr: "I0416 00:23:59.099195 2032 log.go:172] (0xc0009f0000) (0xc0007a7360) Create stream\nI0416 00:23:59.099265 2032 log.go:172] (0xc0009f0000) (0xc0007a7360) Stream added, broadcasting: 1\nI0416 00:23:59.103906 2032 log.go:172] (0xc0009f0000) Reply frame received for 1\nI0416 00:23:59.103939 2032 log.go:172] (0xc0009f0000) (0xc0007a7400) Create stream\nI0416 00:23:59.103948 2032 log.go:172] (0xc0009f0000) (0xc0007a7400) Stream added, broadcasting: 3\nI0416 00:23:59.109122 2032 log.go:172] (0xc0009f0000) Reply frame received for 3\nI0416 00:23:59.109254 2032 log.go:172] (0xc0009f0000) (0xc0003fa000) Create stream\nI0416 00:23:59.109266 2032 log.go:172] (0xc0009f0000) (0xc0003fa000) Stream added, broadcasting: 5\nI0416 00:23:59.110368 2032 log.go:172] (0xc0009f0000) Reply frame received for 5\nI0416 00:23:59.197982 2032 log.go:172] (0xc0009f0000) Data frame received for 5\nI0416 00:23:59.198019 2032 log.go:172] (0xc0003fa000) (5) Data frame handling\nI0416 00:23:59.198041 2032 log.go:172] (0xc0003fa000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0416 00:23:59.198584 2032 log.go:172] (0xc0009f0000) Data frame received for 5\nI0416 00:23:59.198619 2032 log.go:172] (0xc0003fa000) (5) Data frame handling\nI0416 00:23:59.198645 2032 log.go:172] (0xc0003fa000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0416 00:23:59.199225 2032 log.go:172] (0xc0009f0000) Data frame received for 3\nI0416 00:23:59.199256 2032 log.go:172] (0xc0007a7400) (3) Data frame handling\nI0416 00:23:59.199527 2032 log.go:172] (0xc0009f0000) Data frame received for 5\nI0416 00:23:59.199554 2032 log.go:172] (0xc0003fa000) (5) Data frame handling\nI0416 00:23:59.200881 2032 log.go:172] (0xc0009f0000) Data frame received for 1\nI0416 00:23:59.200912 2032 log.go:172] (0xc0007a7360) 
(1) Data frame handling\nI0416 00:23:59.200975 2032 log.go:172] (0xc0007a7360) (1) Data frame sent\nI0416 00:23:59.201391 2032 log.go:172] (0xc0009f0000) (0xc0007a7360) Stream removed, broadcasting: 1\nI0416 00:23:59.201441 2032 log.go:172] (0xc0009f0000) Go away received\nI0416 00:23:59.202041 2032 log.go:172] (0xc0009f0000) (0xc0007a7360) Stream removed, broadcasting: 1\nI0416 00:23:59.202066 2032 log.go:172] (0xc0009f0000) (0xc0007a7400) Stream removed, broadcasting: 3\nI0416 00:23:59.202078 2032 log.go:172] (0xc0009f0000) (0xc0003fa000) Stream removed, broadcasting: 5\n" Apr 16 00:23:59.206: INFO: stdout: "" Apr 16 00:23:59.207: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-1774 execpod882hk -- /bin/sh -x -c nc -zv -t -w 2 10.96.31.209 80' Apr 16 00:23:59.400: INFO: stderr: "I0416 00:23:59.334883 2053 log.go:172] (0xc000bc6000) (0xc0009c6000) Create stream\nI0416 00:23:59.334952 2053 log.go:172] (0xc000bc6000) (0xc0009c6000) Stream added, broadcasting: 1\nI0416 00:23:59.337921 2053 log.go:172] (0xc000bc6000) Reply frame received for 1\nI0416 00:23:59.337984 2053 log.go:172] (0xc000bc6000) (0xc000576000) Create stream\nI0416 00:23:59.338018 2053 log.go:172] (0xc000bc6000) (0xc000576000) Stream added, broadcasting: 3\nI0416 00:23:59.338883 2053 log.go:172] (0xc000bc6000) Reply frame received for 3\nI0416 00:23:59.338930 2053 log.go:172] (0xc000bc6000) (0xc0005760a0) Create stream\nI0416 00:23:59.338947 2053 log.go:172] (0xc000bc6000) (0xc0005760a0) Stream added, broadcasting: 5\nI0416 00:23:59.339836 2053 log.go:172] (0xc000bc6000) Reply frame received for 5\nI0416 00:23:59.392810 2053 log.go:172] (0xc000bc6000) Data frame received for 3\nI0416 00:23:59.392854 2053 log.go:172] (0xc000576000) (3) Data frame handling\nI0416 00:23:59.392910 2053 log.go:172] (0xc000bc6000) Data frame received for 5\nI0416 00:23:59.392957 2053 log.go:172] (0xc0005760a0) (5) Data frame handling\nI0416 
00:23:59.392988 2053 log.go:172] (0xc0005760a0) (5) Data frame sent\nI0416 00:23:59.393018 2053 log.go:172] (0xc000bc6000) Data frame received for 5\nI0416 00:23:59.393049 2053 log.go:172] (0xc0005760a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.31.209 80\nConnection to 10.96.31.209 80 port [tcp/http] succeeded!\nI0416 00:23:59.394738 2053 log.go:172] (0xc000bc6000) Data frame received for 1\nI0416 00:23:59.394767 2053 log.go:172] (0xc0009c6000) (1) Data frame handling\nI0416 00:23:59.394782 2053 log.go:172] (0xc0009c6000) (1) Data frame sent\nI0416 00:23:59.394825 2053 log.go:172] (0xc000bc6000) (0xc0009c6000) Stream removed, broadcasting: 1\nI0416 00:23:59.394869 2053 log.go:172] (0xc000bc6000) Go away received\nI0416 00:23:59.395282 2053 log.go:172] (0xc000bc6000) (0xc0009c6000) Stream removed, broadcasting: 1\nI0416 00:23:59.395307 2053 log.go:172] (0xc000bc6000) (0xc000576000) Stream removed, broadcasting: 3\nI0416 00:23:59.395320 2053 log.go:172] (0xc000bc6000) (0xc0005760a0) Stream removed, broadcasting: 5\n" Apr 16 00:23:59.400: INFO: stdout: "" Apr 16 00:23:59.400: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-1774 execpod882hk -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31515' Apr 16 00:23:59.629: INFO: stderr: "I0416 00:23:59.531558 2075 log.go:172] (0xc0006a09a0) (0xc00067c140) Create stream\nI0416 00:23:59.531632 2075 log.go:172] (0xc0006a09a0) (0xc00067c140) Stream added, broadcasting: 1\nI0416 00:23:59.534653 2075 log.go:172] (0xc0006a09a0) Reply frame received for 1\nI0416 00:23:59.534691 2075 log.go:172] (0xc0006a09a0) (0xc00067c1e0) Create stream\nI0416 00:23:59.534706 2075 log.go:172] (0xc0006a09a0) (0xc00067c1e0) Stream added, broadcasting: 3\nI0416 00:23:59.535574 2075 log.go:172] (0xc0006a09a0) Reply frame received for 3\nI0416 00:23:59.535614 2075 log.go:172] (0xc0006a09a0) (0xc0006e7360) Create stream\nI0416 00:23:59.535630 2075 log.go:172] 
(0xc0006a09a0) (0xc0006e7360) Stream added, broadcasting: 5\nI0416 00:23:59.536650 2075 log.go:172] (0xc0006a09a0) Reply frame received for 5\nI0416 00:23:59.622768 2075 log.go:172] (0xc0006a09a0) Data frame received for 3\nI0416 00:23:59.622801 2075 log.go:172] (0xc00067c1e0) (3) Data frame handling\nI0416 00:23:59.622823 2075 log.go:172] (0xc0006a09a0) Data frame received for 5\nI0416 00:23:59.622831 2075 log.go:172] (0xc0006e7360) (5) Data frame handling\nI0416 00:23:59.622845 2075 log.go:172] (0xc0006e7360) (5) Data frame sent\nI0416 00:23:59.622854 2075 log.go:172] (0xc0006a09a0) Data frame received for 5\nI0416 00:23:59.622863 2075 log.go:172] (0xc0006e7360) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31515\nConnection to 172.17.0.13 31515 port [tcp/31515] succeeded!\nI0416 00:23:59.624203 2075 log.go:172] (0xc0006a09a0) Data frame received for 1\nI0416 00:23:59.624224 2075 log.go:172] (0xc00067c140) (1) Data frame handling\nI0416 00:23:59.624244 2075 log.go:172] (0xc00067c140) (1) Data frame sent\nI0416 00:23:59.624263 2075 log.go:172] (0xc0006a09a0) (0xc00067c140) Stream removed, broadcasting: 1\nI0416 00:23:59.624285 2075 log.go:172] (0xc0006a09a0) Go away received\nI0416 00:23:59.624591 2075 log.go:172] (0xc0006a09a0) (0xc00067c140) Stream removed, broadcasting: 1\nI0416 00:23:59.624614 2075 log.go:172] (0xc0006a09a0) (0xc00067c1e0) Stream removed, broadcasting: 3\nI0416 00:23:59.624625 2075 log.go:172] (0xc0006a09a0) (0xc0006e7360) Stream removed, broadcasting: 5\n" Apr 16 00:23:59.629: INFO: stdout: "" Apr 16 00:23:59.629: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-1774 execpod882hk -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31515' Apr 16 00:23:59.846: INFO: stderr: "I0416 00:23:59.757748 2097 log.go:172] (0xc0007400b0) (0xc000954000) Create stream\nI0416 00:23:59.757815 2097 log.go:172] (0xc0007400b0) (0xc000954000) Stream added, broadcasting: 1\nI0416 
00:23:59.760811 2097 log.go:172] (0xc0007400b0) Reply frame received for 1\nI0416 00:23:59.760847 2097 log.go:172] (0xc0007400b0) (0xc0009b8000) Create stream\nI0416 00:23:59.760858 2097 log.go:172] (0xc0007400b0) (0xc0009b8000) Stream added, broadcasting: 3\nI0416 00:23:59.762143 2097 log.go:172] (0xc0007400b0) Reply frame received for 3\nI0416 00:23:59.762195 2097 log.go:172] (0xc0007400b0) (0xc000697360) Create stream\nI0416 00:23:59.762212 2097 log.go:172] (0xc0007400b0) (0xc000697360) Stream added, broadcasting: 5\nI0416 00:23:59.763316 2097 log.go:172] (0xc0007400b0) Reply frame received for 5\nI0416 00:23:59.837657 2097 log.go:172] (0xc0007400b0) Data frame received for 5\nI0416 00:23:59.837696 2097 log.go:172] (0xc000697360) (5) Data frame handling\nI0416 00:23:59.837721 2097 log.go:172] (0xc000697360) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 31515\nConnection to 172.17.0.12 31515 port [tcp/31515] succeeded!\nI0416 00:23:59.837897 2097 log.go:172] (0xc0007400b0) Data frame received for 5\nI0416 00:23:59.837949 2097 log.go:172] (0xc000697360) (5) Data frame handling\nI0416 00:23:59.837986 2097 log.go:172] (0xc0007400b0) Data frame received for 3\nI0416 00:23:59.838008 2097 log.go:172] (0xc0009b8000) (3) Data frame handling\nI0416 00:23:59.839786 2097 log.go:172] (0xc0007400b0) Data frame received for 1\nI0416 00:23:59.839816 2097 log.go:172] (0xc000954000) (1) Data frame handling\nI0416 00:23:59.839835 2097 log.go:172] (0xc000954000) (1) Data frame sent\nI0416 00:23:59.839856 2097 log.go:172] (0xc0007400b0) (0xc000954000) Stream removed, broadcasting: 1\nI0416 00:23:59.839891 2097 log.go:172] (0xc0007400b0) Go away received\nI0416 00:23:59.840365 2097 log.go:172] (0xc0007400b0) (0xc000954000) Stream removed, broadcasting: 1\nI0416 00:23:59.840384 2097 log.go:172] (0xc0007400b0) (0xc0009b8000) Stream removed, broadcasting: 3\nI0416 00:23:59.840395 2097 log.go:172] (0xc0007400b0) (0xc000697360) Stream removed, broadcasting: 5\n" Apr 16 00:23:59.846: 
INFO: stdout: "" Apr 16 00:23:59.846: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:23:59.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1774" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:12.153 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":167,"skipped":2894,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:23:59.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing 
the watch once it receives two notifications Apr 16 00:23:59.989: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1162 /api/v1/namespaces/watch-1162/configmaps/e2e-watch-test-watch-closed 4d97f524-6bf1-4e94-934e-07b8120a16d7 8407490 0 2020-04-16 00:23:59 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 16 00:23:59.989: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1162 /api/v1/namespaces/watch-1162/configmaps/e2e-watch-test-watch-closed 4d97f524-6bf1-4e94-934e-07b8120a16d7 8407491 0 2020-04-16 00:23:59 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 16 00:24:00.001: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1162 /api/v1/namespaces/watch-1162/configmaps/e2e-watch-test-watch-closed 4d97f524-6bf1-4e94-934e-07b8120a16d7 8407492 0 2020-04-16 00:23:59 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 16 00:24:00.001: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1162 /api/v1/namespaces/watch-1162/configmaps/e2e-watch-test-watch-closed 4d97f524-6bf1-4e94-934e-07b8120a16d7 8407493 0 2020-04-16 00:23:59 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:24:00.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1162" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":168,"skipped":2901,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:24:00.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 16 00:24:00.096: INFO: Pod name rollover-pod: Found 0 pods out of 1 Apr 16 00:24:05.117: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 16 00:24:05.117: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Apr 16 00:24:07.120: INFO: Creating deployment "test-rollover-deployment" Apr 16 00:24:07.149: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Apr 16 00:24:09.156: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Apr 16 00:24:09.162: INFO: Ensure that both replica sets have 1 created replica Apr 16 00:24:09.167: INFO: Rollover old 
replica sets for deployment "test-rollover-deployment" with new image update Apr 16 00:24:09.174: INFO: Updating deployment test-rollover-deployment Apr 16 00:24:09.174: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 16 00:24:11.186: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 16 00:24:11.193: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 16 00:24:11.199: INFO: all replica sets need to contain the pod-template-hash label Apr 16 00:24:11.199: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593447, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593447, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593449, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593447, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 16 00:24:13.207: INFO: all replica sets need to contain the pod-template-hash label Apr 16 00:24:13.207: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593447, loc:(*time.Location)(0x7b1e080)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593447, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593451, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593447, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 16 00:24:15.205: INFO: all replica sets need to contain the pod-template-hash label Apr 16 00:24:15.205: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593447, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593447, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593451, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593447, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 16 00:24:17.207: INFO: all replica sets need to contain the pod-template-hash label Apr 16 00:24:17.207: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593447, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593447, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593451, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593447, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 16 00:24:19.207: INFO: all replica sets need to contain the pod-template-hash label Apr 16 00:24:19.207: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593447, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593447, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593451, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593447, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 16 00:24:21.207: INFO: all replica sets need to contain the pod-template-hash label Apr 16 00:24:21.207: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593447, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593447, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593451, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593447, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 16 00:24:23.297: INFO: Apr 16 00:24:23.297: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 16 00:24:23.305: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-1845 /apis/apps/v1/namespaces/deployment-1845/deployments/test-rollover-deployment 295380d6-9186-4fff-b72d-edbf83a188a2 8407661 2 2020-04-16 00:24:07 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0027bbd78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-16 00:24:07 +0000 UTC,LastTransitionTime:2020-04-16 00:24:07 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-78df7bc796" has successfully progressed.,LastUpdateTime:2020-04-16 00:24:21 +0000 UTC,LastTransitionTime:2020-04-16 00:24:07 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 16 00:24:23.308: INFO: New ReplicaSet "test-rollover-deployment-78df7bc796" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-78df7bc796 deployment-1845 /apis/apps/v1/namespaces/deployment-1845/replicasets/test-rollover-deployment-78df7bc796 9e7eb79c-ec68-4cb3-bfec-b78d5e7ae8aa 8407650 2 2020-04-16 00:24:09 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 
295380d6-9186-4fff-b72d-edbf83a188a2 0xc002d7c4e7 0xc002d7c4e8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78df7bc796,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002d7c558 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 16 00:24:23.308: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 16 00:24:23.308: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-1845 /apis/apps/v1/namespaces/deployment-1845/replicasets/test-rollover-controller 61c448de-8881-4ddc-9a08-18e47586b024 8407659 2 2020-04-16 00:24:00 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 295380d6-9186-4fff-b72d-edbf83a188a2 0xc002d7c417 0xc002d7c418}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 
00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002d7c478 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 16 00:24:23.308: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-1845 /apis/apps/v1/namespaces/deployment-1845/replicasets/test-rollover-deployment-f6c94f66c adcdbf48-9d36-4022-aa17-d9c681659a04 8407605 2 2020-04-16 00:24:07 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 295380d6-9186-4fff-b72d-edbf83a188a2 0xc002d7c5c0 0xc002d7c5c1}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002d7c638 ClusterFirst map[] false false 
false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 16 00:24:23.311: INFO: Pod "test-rollover-deployment-78df7bc796-7cbwg" is available: &Pod{ObjectMeta:{test-rollover-deployment-78df7bc796-7cbwg test-rollover-deployment-78df7bc796- deployment-1845 /api/v1/namespaces/deployment-1845/pods/test-rollover-deployment-78df7bc796-7cbwg eb1c533d-7652-44a4-bdaa-76aa78717ee2 8407618 0 2020-04-16 00:24:09 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [{apps/v1 ReplicaSet test-rollover-deployment-78df7bc796 9e7eb79c-ec68-4cb3-bfec-b78d5e7ae8aa 0xc002d7cbd7 0xc002d7cbd8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2s9k,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2s9k,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c2s9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessP
robe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-16 00:24:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-04-16 00:24:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-16 00:24:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-16 00:24:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.58,StartTime:2020-04-16 00:24:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-16 00:24:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://665f1e7383cb7d8f43b43db70f02cb00679b9d5dcbaf8648c3183e90a17bc0d9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.58,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:24:23.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1845" for this suite. 
• [SLOW TEST:23.310 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":169,"skipped":2922,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:24:23.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-34d08698-5e78-4a55-84f7-785f1085cb31 STEP: Creating a pod to test consume configMaps Apr 16 00:24:23.690: INFO: Waiting up to 5m0s for pod "pod-configmaps-a004f914-6e9c-488e-9b3b-051a519a33c9" in namespace "configmap-3866" to be "Succeeded or Failed" Apr 16 00:24:23.699: INFO: Pod "pod-configmaps-a004f914-6e9c-488e-9b3b-051a519a33c9": Phase="Pending", Reason="", readiness=false. Elapsed: 9.5891ms Apr 16 00:24:25.703: INFO: Pod "pod-configmaps-a004f914-6e9c-488e-9b3b-051a519a33c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013671405s Apr 16 00:24:27.708: INFO: Pod "pod-configmaps-a004f914-6e9c-488e-9b3b-051a519a33c9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018264304s STEP: Saw pod success Apr 16 00:24:27.708: INFO: Pod "pod-configmaps-a004f914-6e9c-488e-9b3b-051a519a33c9" satisfied condition "Succeeded or Failed" Apr 16 00:24:27.711: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-a004f914-6e9c-488e-9b3b-051a519a33c9 container configmap-volume-test: STEP: delete the pod Apr 16 00:24:27.728: INFO: Waiting for pod pod-configmaps-a004f914-6e9c-488e-9b3b-051a519a33c9 to disappear Apr 16 00:24:27.733: INFO: Pod pod-configmaps-a004f914-6e9c-488e-9b3b-051a519a33c9 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:24:27.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3866" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":170,"skipped":2937,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:24:27.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready 
Apr 16 00:24:29.392: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 16 00:24:31.404: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593469, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593469, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593469, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593469, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 16 00:24:33.408: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593469, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593469, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593469, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593469, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 16 00:24:36.425: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:24:48.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5417" for this suite. STEP: Destroying namespace "webhook-5417-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:20.969 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":171,"skipped":2945,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:24:48.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 16 00:24:49.658: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 16 00:24:51.674: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593489, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593489, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593489, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722593489, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 16 00:24:54.699: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:24:54.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7174" for this suite. STEP: Destroying namespace "webhook-7174-markers" for this suite. 
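The mutate-configmap case above expects the webhook to rewrite the incoming object. A mutating webhook does this by returning a base64-encoded JSONPatch in its AdmissionReview response; the sketch below shows that response shape with an illustrative patch (the actual key this e2e webhook adds is not shown in the log):

```python
import base64
import json

# Illustrative JSONPatch adding a key to the ConfigMap's data section.
patch = [{"op": "add", "path": "/data/mutated-by-webhook", "value": "yes"}]

# Shape of the AdmissionReview response a v1 mutating webhook returns:
# the patch is JSON, base64-encoded, with patchType "JSONPatch".
admission_response = {
    "apiVersion": "admission.k8s.io/v1",
    "kind": "AdmissionReview",
    "response": {
        "uid": "<uid copied from the incoming AdmissionReview request>",
        "allowed": True,
        "patchType": "JSONPatch",
        "patch": base64.b64encode(json.dumps(patch).encode()).decode(),
    },
}
```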
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.116 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":172,"skipped":2946,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:24:54.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-1247c0a2-bab7-4f9c-9214-8ff8bcaf2e3c STEP: Creating a pod to test consume configMaps Apr 16 00:24:54.936: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d97f8a90-24b2-4c0a-ad10-cff64cac8140" in namespace "projected-8309" to be "Succeeded or Failed" Apr 16 00:24:54.940: INFO: Pod "pod-projected-configmaps-d97f8a90-24b2-4c0a-ad10-cff64cac8140": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.972765ms Apr 16 00:24:56.945: INFO: Pod "pod-projected-configmaps-d97f8a90-24b2-4c0a-ad10-cff64cac8140": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008170979s Apr 16 00:24:58.949: INFO: Pod "pod-projected-configmaps-d97f8a90-24b2-4c0a-ad10-cff64cac8140": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012529208s STEP: Saw pod success Apr 16 00:24:58.949: INFO: Pod "pod-projected-configmaps-d97f8a90-24b2-4c0a-ad10-cff64cac8140" satisfied condition "Succeeded or Failed" Apr 16 00:24:58.952: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-d97f8a90-24b2-4c0a-ad10-cff64cac8140 container projected-configmap-volume-test: STEP: delete the pod Apr 16 00:24:59.039: INFO: Waiting for pod pod-projected-configmaps-d97f8a90-24b2-4c0a-ad10-cff64cac8140 to disappear Apr 16 00:24:59.048: INFO: Pod pod-projected-configmaps-d97f8a90-24b2-4c0a-ad10-cff64cac8140 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:24:59.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8309" for this suite. 
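The projected-configmap test above mounts one ConfigMap through two separate volumes in the same pod. A minimal sketch of such a pod spec as a dict — container image, volume names, and mount paths are hypothetical:

```python
CONFIGMAP = "projected-configmap-test-volume"  # hypothetical ConfigMap name

# One ConfigMap consumed via two projected volumes in the same pod.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-projected-configmaps"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "projected-configmap-volume-test",
            "image": "busybox",  # illustrative image
            "command": ["cat", "/etc/cm-volume-1/key", "/etc/cm-volume-2/key"],
            "volumeMounts": [
                {"name": "cm-volume-1", "mountPath": "/etc/cm-volume-1"},
                {"name": "cm-volume-2", "mountPath": "/etc/cm-volume-2"},
            ],
        }],
        "volumes": [
            {"name": "cm-volume-1",
             "projected": {"sources": [{"configMap": {"name": CONFIGMAP}}]}},
            {"name": "cm-volume-2",
             "projected": {"sources": [{"configMap": {"name": CONFIGMAP}}]}},
        ],
    },
}
```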
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":173,"skipped":2952,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:24:59.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 16 00:24:59.153: INFO: Waiting up to 5m0s for pod "downward-api-60a6f145-7ea1-4f17-9e1b-a824670692ab" in namespace "downward-api-7077" to be "Succeeded or Failed" Apr 16 00:24:59.155: INFO: Pod "downward-api-60a6f145-7ea1-4f17-9e1b-a824670692ab": Phase="Pending", Reason="", readiness=false. Elapsed: 1.993271ms Apr 16 00:25:01.222: INFO: Pod "downward-api-60a6f145-7ea1-4f17-9e1b-a824670692ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068959519s Apr 16 00:25:03.226: INFO: Pod "downward-api-60a6f145-7ea1-4f17-9e1b-a824670692ab": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.073152291s STEP: Saw pod success Apr 16 00:25:03.226: INFO: Pod "downward-api-60a6f145-7ea1-4f17-9e1b-a824670692ab" satisfied condition "Succeeded or Failed" Apr 16 00:25:03.229: INFO: Trying to get logs from node latest-worker pod downward-api-60a6f145-7ea1-4f17-9e1b-a824670692ab container dapi-container: STEP: delete the pod Apr 16 00:25:03.271: INFO: Waiting for pod downward-api-60a6f145-7ea1-4f17-9e1b-a824670692ab to disappear Apr 16 00:25:03.294: INFO: Pod downward-api-60a6f145-7ea1-4f17-9e1b-a824670692ab no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:25:03.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7077" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":174,"skipped":2969,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:25:03.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace 
statefulset-9223 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating statefulset ss in namespace statefulset-9223 Apr 16 00:25:03.383: INFO: Found 0 stateful pods, waiting for 1 Apr 16 00:25:13.387: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 16 00:25:13.411: INFO: Deleting all statefulset in ns statefulset-9223 Apr 16 00:25:13.428: INFO: Scaling statefulset ss to 0 Apr 16 00:25:33.533: INFO: Waiting for statefulset status.replicas updated to 0 Apr 16 00:25:33.536: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:25:33.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9223" for this suite. 
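The scale-subresource steps above (get the scale, update it, verify `Spec.Replicas` changed) go through the StatefulSet's dedicated `/scale` endpoint rather than the StatefulSet object itself. A sketch of that request path and the `autoscaling/v1` Scale payload, with a hypothetical replica count:

```python
namespace, name = "statefulset-9223", "ss"

# The scale subresource lives at its own path under the StatefulSet.
scale_path = (f"/apis/apps/v1/namespaces/{namespace}"
              f"/statefulsets/{name}/scale")

# Updating it means writing an autoscaling/v1 Scale object whose
# spec.replicas carries the desired count; the test then reads the
# StatefulSet back to confirm spec.replicas was modified.
scale = {
    "apiVersion": "autoscaling/v1",
    "kind": "Scale",
    "metadata": {"name": name, "namespace": namespace},
    "spec": {"replicas": 2},  # hypothetical new count
}
```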
• [SLOW TEST:30.232 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":175,"skipped":2971,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:25:33.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:25:33.625: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "pods-3688" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":176,"skipped":2985,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:25:33.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-2x4vw in namespace proxy-9578 I0416 00:25:33.840066 7 runners.go:190] Created replication controller with name: proxy-service-2x4vw, namespace: proxy-9578, replica count: 1 I0416 00:25:34.890505 7 runners.go:190] proxy-service-2x4vw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0416 00:25:35.890745 7 runners.go:190] proxy-service-2x4vw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0416 00:25:36.890959 7 runners.go:190] proxy-service-2x4vw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0416 00:25:37.891180 7 runners.go:190] proxy-service-2x4vw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 
terminating, 0 unknown, 1 runningButNotReady I0416 00:25:38.891399 7 runners.go:190] proxy-service-2x4vw Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 16 00:25:38.922: INFO: setup took 5.1230348s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Apr 16 00:25:38.933: INFO: (0) /api/v1/namespaces/proxy-9578/services/proxy-service-2x4vw:portname2/proxy/: bar (200; 10.406604ms) Apr 16 00:25:38.933: INFO: (0) /api/v1/namespaces/proxy-9578/services/proxy-service-2x4vw:portname1/proxy/: foo (200; 10.364215ms) Apr 16 00:25:38.933: INFO: (0) /api/v1/namespaces/proxy-9578/services/http:proxy-service-2x4vw:portname1/proxy/: foo (200; 10.425714ms) Apr 16 00:25:38.933: INFO: (0) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:162/proxy/: bar (200; 10.542002ms) Apr 16 00:25:38.934: INFO: (0) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:160/proxy/: foo (200; 11.191066ms) Apr 16 00:25:38.934: INFO: (0) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:1080/proxy/: ... (200; 11.366349ms) Apr 16 00:25:38.935: INFO: (0) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:160/proxy/: foo (200; 12.491147ms) Apr 16 00:25:38.935: INFO: (0) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:1080/proxy/: test<... 
(200; 12.56104ms) Apr 16 00:25:38.936: INFO: (0) /api/v1/namespaces/proxy-9578/services/http:proxy-service-2x4vw:portname2/proxy/: bar (200; 13.473036ms) Apr 16 00:25:38.936: INFO: (0) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:162/proxy/: bar (200; 13.382451ms) Apr 16 00:25:38.936: INFO: (0) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v/proxy/: test (200; 13.692511ms) Apr 16 00:25:38.943: INFO: (0) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:462/proxy/: tls qux (200; 20.351732ms) Apr 16 00:25:38.943: INFO: (0) /api/v1/namespaces/proxy-9578/services/https:proxy-service-2x4vw:tlsportname2/proxy/: tls qux (200; 20.595552ms) Apr 16 00:25:38.943: INFO: (0) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:443/proxy/: ... (200; 7.669569ms) Apr 16 00:25:38.952: INFO: (1) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v/proxy/: test (200; 8.11403ms) Apr 16 00:25:38.952: INFO: (1) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:160/proxy/: foo (200; 8.084629ms) Apr 16 00:25:38.952: INFO: (1) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:162/proxy/: bar (200; 8.320818ms) Apr 16 00:25:38.952: INFO: (1) /api/v1/namespaces/proxy-9578/services/https:proxy-service-2x4vw:tlsportname1/proxy/: tls baz (200; 8.475095ms) Apr 16 00:25:38.952: INFO: (1) /api/v1/namespaces/proxy-9578/services/proxy-service-2x4vw:portname2/proxy/: bar (200; 8.731173ms) Apr 16 00:25:38.952: INFO: (1) /api/v1/namespaces/proxy-9578/services/proxy-service-2x4vw:portname1/proxy/: foo (200; 8.718864ms) Apr 16 00:25:38.952: INFO: (1) /api/v1/namespaces/proxy-9578/services/http:proxy-service-2x4vw:portname1/proxy/: foo (200; 8.759696ms) Apr 16 00:25:38.953: INFO: (1) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:1080/proxy/: test<... 
(200; 9.066421ms) Apr 16 00:25:38.953: INFO: (1) /api/v1/namespaces/proxy-9578/services/https:proxy-service-2x4vw:tlsportname2/proxy/: tls qux (200; 9.50071ms) Apr 16 00:25:38.953: INFO: (1) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:162/proxy/: bar (200; 9.535796ms) Apr 16 00:25:38.953: INFO: (1) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:462/proxy/: tls qux (200; 9.671483ms) Apr 16 00:25:38.953: INFO: (1) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:160/proxy/: foo (200; 9.788646ms) Apr 16 00:25:38.953: INFO: (1) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:460/proxy/: tls baz (200; 9.688233ms) Apr 16 00:25:38.953: INFO: (1) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:443/proxy/: ... (200; 11.833882ms) Apr 16 00:25:38.965: INFO: (2) /api/v1/namespaces/proxy-9578/services/proxy-service-2x4vw:portname1/proxy/: foo (200; 11.88516ms) Apr 16 00:25:38.965: INFO: (2) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:162/proxy/: bar (200; 12.036166ms) Apr 16 00:25:38.965: INFO: (2) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:160/proxy/: foo (200; 11.916368ms) Apr 16 00:25:38.965: INFO: (2) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v/proxy/: test (200; 11.954266ms) Apr 16 00:25:38.965: INFO: (2) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:1080/proxy/: test<... 
(200; 12.049299ms) Apr 16 00:25:38.965: INFO: (2) /api/v1/namespaces/proxy-9578/services/http:proxy-service-2x4vw:portname2/proxy/: bar (200; 12.108663ms) Apr 16 00:25:38.966: INFO: (2) /api/v1/namespaces/proxy-9578/services/http:proxy-service-2x4vw:portname1/proxy/: foo (200; 12.181122ms) Apr 16 00:25:38.966: INFO: (2) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:460/proxy/: tls baz (200; 12.23137ms) Apr 16 00:25:38.966: INFO: (2) /api/v1/namespaces/proxy-9578/services/https:proxy-service-2x4vw:tlsportname1/proxy/: tls baz (200; 12.236734ms) Apr 16 00:25:38.966: INFO: (2) /api/v1/namespaces/proxy-9578/services/https:proxy-service-2x4vw:tlsportname2/proxy/: tls qux (200; 12.241299ms) Apr 16 00:25:38.966: INFO: (2) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:160/proxy/: foo (200; 12.312443ms) Apr 16 00:25:38.966: INFO: (2) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:462/proxy/: tls qux (200; 12.357583ms) Apr 16 00:25:38.966: INFO: (2) /api/v1/namespaces/proxy-9578/services/proxy-service-2x4vw:portname2/proxy/: bar (200; 12.284832ms) Apr 16 00:25:38.969: INFO: (3) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:1080/proxy/: ... (200; 3.093236ms) Apr 16 00:25:38.970: INFO: (3) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:162/proxy/: bar (200; 3.914994ms) Apr 16 00:25:38.970: INFO: (3) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:160/proxy/: foo (200; 3.319713ms) Apr 16 00:25:38.970: INFO: (3) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:443/proxy/: test (200; 3.613692ms) Apr 16 00:25:38.970: INFO: (3) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:1080/proxy/: test<... 
(200; 3.236743ms) Apr 16 00:25:38.971: INFO: (3) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:162/proxy/: bar (200; 3.915989ms) Apr 16 00:25:38.971: INFO: (3) /api/v1/namespaces/proxy-9578/services/https:proxy-service-2x4vw:tlsportname1/proxy/: tls baz (200; 4.545253ms) Apr 16 00:25:38.971: INFO: (3) /api/v1/namespaces/proxy-9578/services/proxy-service-2x4vw:portname1/proxy/: foo (200; 4.695233ms) Apr 16 00:25:38.971: INFO: (3) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:462/proxy/: tls qux (200; 4.629142ms) Apr 16 00:25:38.971: INFO: (3) /api/v1/namespaces/proxy-9578/services/https:proxy-service-2x4vw:tlsportname2/proxy/: tls qux (200; 4.225965ms) Apr 16 00:25:38.971: INFO: (3) /api/v1/namespaces/proxy-9578/services/http:proxy-service-2x4vw:portname1/proxy/: foo (200; 5.150755ms) Apr 16 00:25:38.971: INFO: (3) /api/v1/namespaces/proxy-9578/services/proxy-service-2x4vw:portname2/proxy/: bar (200; 5.099574ms) Apr 16 00:25:38.971: INFO: (3) /api/v1/namespaces/proxy-9578/services/http:proxy-service-2x4vw:portname2/proxy/: bar (200; 5.138473ms) Apr 16 00:25:38.975: INFO: (4) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:162/proxy/: bar (200; 3.605461ms) Apr 16 00:25:38.975: INFO: (4) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:460/proxy/: tls baz (200; 3.88795ms) Apr 16 00:25:38.975: INFO: (4) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:1080/proxy/: test<... (200; 4.097843ms) Apr 16 00:25:38.975: INFO: (4) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:462/proxy/: tls qux (200; 4.041057ms) Apr 16 00:25:38.975: INFO: (4) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:160/proxy/: foo (200; 4.014298ms) Apr 16 00:25:38.975: INFO: (4) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v/proxy/: test (200; 4.225139ms) Apr 16 00:25:38.975: INFO: (4) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:443/proxy/: ... 
(200; 5.270414ms) Apr 16 00:25:38.977: INFO: (4) /api/v1/namespaces/proxy-9578/services/https:proxy-service-2x4vw:tlsportname2/proxy/: tls qux (200; 5.33155ms) Apr 16 00:25:38.977: INFO: (4) /api/v1/namespaces/proxy-9578/services/http:proxy-service-2x4vw:portname1/proxy/: foo (200; 5.343545ms) Apr 16 00:25:38.977: INFO: (4) /api/v1/namespaces/proxy-9578/services/proxy-service-2x4vw:portname2/proxy/: bar (200; 5.316428ms) Apr 16 00:25:38.977: INFO: (4) /api/v1/namespaces/proxy-9578/services/proxy-service-2x4vw:portname1/proxy/: foo (200; 5.609862ms) Apr 16 00:25:38.981: INFO: (5) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:460/proxy/: tls baz (200; 3.877232ms) Apr 16 00:25:38.982: INFO: (5) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v/proxy/: test (200; 5.008616ms) Apr 16 00:25:38.982: INFO: (5) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:1080/proxy/: ... (200; 4.997543ms) Apr 16 00:25:38.982: INFO: (5) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:443/proxy/: test<... 
(200; 5.412255ms) Apr 16 00:25:38.982: INFO: (5) /api/v1/namespaces/proxy-9578/services/proxy-service-2x4vw:portname2/proxy/: bar (200; 5.407971ms) Apr 16 00:25:38.982: INFO: (5) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:160/proxy/: foo (200; 5.341962ms) Apr 16 00:25:38.982: INFO: (5) /api/v1/namespaces/proxy-9578/services/https:proxy-service-2x4vw:tlsportname1/proxy/: tls baz (200; 5.397275ms) Apr 16 00:25:38.983: INFO: (5) /api/v1/namespaces/proxy-9578/services/http:proxy-service-2x4vw:portname1/proxy/: foo (200; 6.35777ms) Apr 16 00:25:38.983: INFO: (5) /api/v1/namespaces/proxy-9578/services/proxy-service-2x4vw:portname1/proxy/: foo (200; 6.400004ms) Apr 16 00:25:38.987: INFO: (6) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v/proxy/: test (200; 3.871595ms) Apr 16 00:25:38.989: INFO: (6) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:160/proxy/: foo (200; 5.367758ms) Apr 16 00:25:38.989: INFO: (6) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:443/proxy/: test<... (200; 5.768173ms) Apr 16 00:25:38.989: INFO: (6) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:162/proxy/: bar (200; 5.515422ms) Apr 16 00:25:38.989: INFO: (6) /api/v1/namespaces/proxy-9578/services/http:proxy-service-2x4vw:portname2/proxy/: bar (200; 5.862245ms) Apr 16 00:25:38.989: INFO: (6) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:460/proxy/: tls baz (200; 5.784963ms) Apr 16 00:25:38.989: INFO: (6) /api/v1/namespaces/proxy-9578/services/https:proxy-service-2x4vw:tlsportname2/proxy/: tls qux (200; 5.611505ms) Apr 16 00:25:38.989: INFO: (6) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:1080/proxy/: ... 
(200; 5.86535ms) Apr 16 00:25:38.989: INFO: (6) /api/v1/namespaces/proxy-9578/services/proxy-service-2x4vw:portname1/proxy/: foo (200; 5.750292ms) Apr 16 00:25:38.990: INFO: (6) /api/v1/namespaces/proxy-9578/services/http:proxy-service-2x4vw:portname1/proxy/: foo (200; 5.880078ms) Apr 16 00:25:38.990: INFO: (6) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:462/proxy/: tls qux (200; 5.957442ms) Apr 16 00:25:38.990: INFO: (6) /api/v1/namespaces/proxy-9578/services/https:proxy-service-2x4vw:tlsportname1/proxy/: tls baz (200; 6.261873ms) Apr 16 00:25:38.990: INFO: (6) /api/v1/namespaces/proxy-9578/services/proxy-service-2x4vw:portname2/proxy/: bar (200; 6.637434ms) Apr 16 00:25:38.991: INFO: (6) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:162/proxy/: bar (200; 6.846922ms) Apr 16 00:25:38.993: INFO: (7) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:462/proxy/: tls qux (200; 2.566698ms) Apr 16 00:25:38.996: INFO: (7) /api/v1/namespaces/proxy-9578/services/http:proxy-service-2x4vw:portname2/proxy/: bar (200; 4.794003ms) Apr 16 00:25:38.996: INFO: (7) /api/v1/namespaces/proxy-9578/services/proxy-service-2x4vw:portname2/proxy/: bar (200; 5.06289ms) Apr 16 00:25:38.996: INFO: (7) /api/v1/namespaces/proxy-9578/services/proxy-service-2x4vw:portname1/proxy/: foo (200; 5.033915ms) Apr 16 00:25:38.996: INFO: (7) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:160/proxy/: foo (200; 4.994164ms) Apr 16 00:25:38.996: INFO: (7) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:1080/proxy/: test<... 
(200; 5.009931ms) Apr 16 00:25:38.996: INFO: (7) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:162/proxy/: bar (200; 5.328532ms) Apr 16 00:25:38.996: INFO: (7) /api/v1/namespaces/proxy-9578/services/https:proxy-service-2x4vw:tlsportname1/proxy/: tls baz (200; 5.422824ms) Apr 16 00:25:38.996: INFO: (7) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:162/proxy/: bar (200; 5.402266ms) Apr 16 00:25:38.996: INFO: (7) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:460/proxy/: tls baz (200; 5.470589ms) Apr 16 00:25:38.996: INFO: (7) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v/proxy/: test (200; 5.439752ms) Apr 16 00:25:38.996: INFO: (7) /api/v1/namespaces/proxy-9578/services/https:proxy-service-2x4vw:tlsportname2/proxy/: tls qux (200; 5.562647ms) Apr 16 00:25:38.996: INFO: (7) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:160/proxy/: foo (200; 5.680047ms) Apr 16 00:25:38.997: INFO: (7) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:1080/proxy/: ... (200; 5.94035ms) Apr 16 00:25:38.997: INFO: (7) /api/v1/namespaces/proxy-9578/services/http:proxy-service-2x4vw:portname1/proxy/: foo (200; 6.185569ms) Apr 16 00:25:38.997: INFO: (7) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:443/proxy/: test<... (200; 4.107375ms) Apr 16 00:25:39.001: INFO: (8) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:160/proxy/: foo (200; 3.612521ms) Apr 16 00:25:39.001: INFO: (8) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:162/proxy/: bar (200; 4.027385ms) Apr 16 00:25:39.002: INFO: (8) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:1080/proxy/: ... 
(200; 3.7907ms) Apr 16 00:25:39.002: INFO: (8) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:162/proxy/: bar (200; 4.19237ms) Apr 16 00:25:39.002: INFO: (8) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:160/proxy/: foo (200; 3.206148ms) Apr 16 00:25:39.002: INFO: (8) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:460/proxy/: tls baz (200; 3.72278ms) Apr 16 00:25:39.002: INFO: (8) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v/proxy/: test (200; 3.362037ms) Apr 16 00:25:39.002: INFO: (8) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:443/proxy/: ... (200; 3.940849ms) Apr 16 00:25:39.006: INFO: (9) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:162/proxy/: bar (200; 3.965526ms) Apr 16 00:25:39.006: INFO: (9) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:162/proxy/: bar (200; 3.920481ms) Apr 16 00:25:39.006: INFO: (9) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v/proxy/: test (200; 4.118314ms) Apr 16 00:25:39.006: INFO: (9) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:1080/proxy/: test<... 
(200; 4.032644ms) Apr 16 00:25:39.007: INFO: (9) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:160/proxy/: foo (200; 4.221306ms) Apr 16 00:25:39.008: INFO: (9) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:462/proxy/: tls qux (200; 5.738965ms) Apr 16 00:25:39.008: INFO: (9) /api/v1/namespaces/proxy-9578/services/proxy-service-2x4vw:portname1/proxy/: foo (200; 5.6025ms) Apr 16 00:25:39.008: INFO: (9) /api/v1/namespaces/proxy-9578/services/https:proxy-service-2x4vw:tlsportname2/proxy/: tls qux (200; 5.690715ms) Apr 16 00:25:39.008: INFO: (9) /api/v1/namespaces/proxy-9578/services/http:proxy-service-2x4vw:portname1/proxy/: foo (200; 5.750827ms) Apr 16 00:25:39.008: INFO: (9) /api/v1/namespaces/proxy-9578/services/proxy-service-2x4vw:portname2/proxy/: bar (200; 5.718073ms) Apr 16 00:25:39.008: INFO: (9) /api/v1/namespaces/proxy-9578/services/https:proxy-service-2x4vw:tlsportname1/proxy/: tls baz (200; 5.814285ms) Apr 16 00:25:39.008: INFO: (9) /api/v1/namespaces/proxy-9578/services/http:proxy-service-2x4vw:portname2/proxy/: bar (200; 5.779454ms) Apr 16 00:25:39.010: INFO: (10) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:162/proxy/: bar (200; 2.138502ms) Apr 16 00:25:39.011: INFO: (10) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:162/proxy/: bar (200; 2.125152ms) Apr 16 00:25:39.011: INFO: (10) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:1080/proxy/: test<... (200; 2.330446ms) Apr 16 00:25:39.012: INFO: (10) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:160/proxy/: foo (200; 2.710314ms) Apr 16 00:25:39.012: INFO: (10) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:160/proxy/: foo (200; 2.803617ms) Apr 16 00:25:39.012: INFO: (10) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:443/proxy/: ... 
(200; 3.211284ms) Apr 16 00:25:39.012: INFO: (10) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:460/proxy/: tls baz (200; 3.513684ms) Apr 16 00:25:39.013: INFO: (10) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v/proxy/: test (200; 4.079489ms) Apr 16 00:25:39.013: INFO: (10) /api/v1/namespaces/proxy-9578/services/http:proxy-service-2x4vw:portname2/proxy/: bar (200; 4.466027ms) Apr 16 00:25:39.013: INFO: (10) /api/v1/namespaces/proxy-9578/services/http:proxy-service-2x4vw:portname1/proxy/: foo (200; 4.148407ms) Apr 16 00:25:39.013: INFO: (10) /api/v1/namespaces/proxy-9578/services/proxy-service-2x4vw:portname2/proxy/: bar (200; 4.708903ms) Apr 16 00:25:39.013: INFO: (10) /api/v1/namespaces/proxy-9578/services/https:proxy-service-2x4vw:tlsportname1/proxy/: tls baz (200; 5.035578ms) Apr 16 00:25:39.013: INFO: (10) /api/v1/namespaces/proxy-9578/services/proxy-service-2x4vw:portname1/proxy/: foo (200; 4.345126ms) Apr 16 00:25:39.013: INFO: (10) /api/v1/namespaces/proxy-9578/services/https:proxy-service-2x4vw:tlsportname2/proxy/: tls qux (200; 4.85563ms) Apr 16 00:25:39.013: INFO: (10) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:462/proxy/: tls qux (200; 4.584241ms) Apr 16 00:25:39.016: INFO: (11) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:162/proxy/: bar (200; 2.479418ms) Apr 16 00:25:39.016: INFO: (11) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:1080/proxy/: ... (200; 2.618185ms) Apr 16 00:25:39.016: INFO: (11) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:443/proxy/: test<... 
(200; 3.769195ms) Apr 16 00:25:39.017: INFO: (11) /api/v1/namespaces/proxy-9578/services/https:proxy-service-2x4vw:tlsportname2/proxy/: tls qux (200; 3.827408ms) Apr 16 00:25:39.017: INFO: (11) /api/v1/namespaces/proxy-9578/services/http:proxy-service-2x4vw:portname2/proxy/: bar (200; 3.801596ms) Apr 16 00:25:39.017: INFO: (11) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:162/proxy/: bar (200; 4.0105ms) Apr 16 00:25:39.017: INFO: (11) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v/proxy/: test (200; 4.0239ms) Apr 16 00:25:39.017: INFO: (11) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:460/proxy/: tls baz (200; 4.100663ms) Apr 16 00:25:39.017: INFO: (11) /api/v1/namespaces/proxy-9578/services/proxy-service-2x4vw:portname2/proxy/: bar (200; 4.022653ms) Apr 16 00:25:39.017: INFO: (11) /api/v1/namespaces/proxy-9578/services/http:proxy-service-2x4vw:portname1/proxy/: foo (200; 4.10493ms) Apr 16 00:25:39.018: INFO: (11) /api/v1/namespaces/proxy-9578/services/https:proxy-service-2x4vw:tlsportname1/proxy/: tls baz (200; 4.227352ms) Apr 16 00:25:39.023: INFO: (12) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v/proxy/: test (200; 5.063682ms) Apr 16 00:25:39.023: INFO: (12) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:443/proxy/: test<... 
(200; 5.297285ms) Apr 16 00:25:39.024: INFO: (12) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:460/proxy/: tls baz (200; 6.221588ms) Apr 16 00:25:39.049: INFO: (12) /api/v1/namespaces/proxy-9578/services/proxy-service-2x4vw:portname2/proxy/: bar (200; 31.282088ms) Apr 16 00:25:39.049: INFO: (12) /api/v1/namespaces/proxy-9578/services/https:proxy-service-2x4vw:tlsportname1/proxy/: tls baz (200; 31.229457ms) Apr 16 00:25:39.049: INFO: (12) /api/v1/namespaces/proxy-9578/services/https:proxy-service-2x4vw:tlsportname2/proxy/: tls qux (200; 31.711077ms) Apr 16 00:25:39.050: INFO: (12) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:162/proxy/: bar (200; 32.058087ms) Apr 16 00:25:39.050: INFO: (12) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:160/proxy/: foo (200; 32.312088ms) Apr 16 00:25:39.050: INFO: (12) /api/v1/namespaces/proxy-9578/services/http:proxy-service-2x4vw:portname2/proxy/: bar (200; 32.47028ms) Apr 16 00:25:39.050: INFO: (12) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:1080/proxy/: ... 
(200; 32.501022ms) Apr 16 00:25:39.050: INFO: (12) /api/v1/namespaces/proxy-9578/services/proxy-service-2x4vw:portname1/proxy/: foo (200; 32.506548ms) Apr 16 00:25:39.050: INFO: (12) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:162/proxy/: bar (200; 32.575026ms) Apr 16 00:25:39.050: INFO: (12) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:462/proxy/: tls qux (200; 32.593921ms) Apr 16 00:25:39.050: INFO: (12) /api/v1/namespaces/proxy-9578/services/http:proxy-service-2x4vw:portname1/proxy/: foo (200; 32.649312ms) Apr 16 00:25:39.051: INFO: (12) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:160/proxy/: foo (200; 32.697267ms) Apr 16 00:25:39.055: INFO: (13) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:160/proxy/: foo (200; 4.16025ms) Apr 16 00:25:39.055: INFO: (13) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:443/proxy/: test (200; 4.763394ms) Apr 16 00:25:39.055: INFO: (13) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:460/proxy/: tls baz (200; 4.762124ms) Apr 16 00:25:39.055: INFO: (13) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:162/proxy/: bar (200; 4.767468ms) Apr 16 00:25:39.055: INFO: (13) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:160/proxy/: foo (200; 4.784313ms) Apr 16 00:25:39.056: INFO: (13) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:162/proxy/: bar (200; 4.779778ms) Apr 16 00:25:39.056: INFO: (13) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:462/proxy/: tls qux (200; 4.827621ms) Apr 16 00:25:39.056: INFO: (13) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:1080/proxy/: test<... (200; 5.568565ms) Apr 16 00:25:39.057: INFO: (13) /api/v1/namespaces/proxy-9578/services/proxy-service-2x4vw:portname2/proxy/: bar (200; 6.484743ms) Apr 16 00:25:39.057: INFO: (13) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:1080/proxy/: ... 
(200; 6.547967ms) Apr 16 00:25:39.057: INFO: (13) /api/v1/namespaces/proxy-9578/services/http:proxy-service-2x4vw:portname2/proxy/: bar (200; 6.686522ms) Apr 16 00:25:39.057: INFO: (13) /api/v1/namespaces/proxy-9578/services/http:proxy-service-2x4vw:portname1/proxy/: foo (200; 6.64746ms) Apr 16 00:25:39.058: INFO: (13) /api/v1/namespaces/proxy-9578/services/https:proxy-service-2x4vw:tlsportname2/proxy/: tls qux (200; 6.849418ms) Apr 16 00:25:39.058: INFO: (13) /api/v1/namespaces/proxy-9578/services/https:proxy-service-2x4vw:tlsportname1/proxy/: tls baz (200; 6.948145ms) Apr 16 00:25:39.058: INFO: (13) /api/v1/namespaces/proxy-9578/services/proxy-service-2x4vw:portname1/proxy/: foo (200; 6.971038ms) Apr 16 00:25:39.061: INFO: (14) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:162/proxy/: bar (200; 3.430939ms) Apr 16 00:25:39.062: INFO: (14) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:162/proxy/: bar (200; 4.226844ms) Apr 16 00:25:39.062: INFO: (14) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:462/proxy/: tls qux (200; 4.454106ms) Apr 16 00:25:39.062: INFO: (14) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:1080/proxy/: ... (200; 4.556307ms) Apr 16 00:25:39.062: INFO: (14) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:160/proxy/: foo (200; 4.489875ms) Apr 16 00:25:39.062: INFO: (14) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v/proxy/: test (200; 4.469444ms) Apr 16 00:25:39.062: INFO: (14) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:160/proxy/: foo (200; 4.633508ms) Apr 16 00:25:39.062: INFO: (14) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:443/proxy/: test<... 
(200; 4.587664ms) Apr 16 00:25:39.063: INFO: (14) /api/v1/namespaces/proxy-9578/services/http:proxy-service-2x4vw:portname2/proxy/: bar (200; 5.219997ms) Apr 16 00:25:39.063: INFO: (14) /api/v1/namespaces/proxy-9578/services/https:proxy-service-2x4vw:tlsportname1/proxy/: tls baz (200; 5.184873ms) Apr 16 00:25:39.063: INFO: (14) /api/v1/namespaces/proxy-9578/services/proxy-service-2x4vw:portname1/proxy/: foo (200; 5.253659ms) Apr 16 00:25:39.063: INFO: (14) /api/v1/namespaces/proxy-9578/services/proxy-service-2x4vw:portname2/proxy/: bar (200; 5.640962ms) Apr 16 00:25:39.063: INFO: (14) /api/v1/namespaces/proxy-9578/services/http:proxy-service-2x4vw:portname1/proxy/: foo (200; 5.56345ms) Apr 16 00:25:39.063: INFO: (14) /api/v1/namespaces/proxy-9578/services/https:proxy-service-2x4vw:tlsportname2/proxy/: tls qux (200; 5.626528ms) Apr 16 00:25:39.066: INFO: (15) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v/proxy/: test (200; 2.97849ms) Apr 16 00:25:39.067: INFO: (15) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:462/proxy/: tls qux (200; 3.200348ms) Apr 16 00:25:39.067: INFO: (15) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:1080/proxy/: test<... (200; 3.559692ms) Apr 16 00:25:39.068: INFO: (15) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:162/proxy/: bar (200; 4.970408ms) Apr 16 00:25:39.068: INFO: (15) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:443/proxy/: ... (200; 5.590379ms) Apr 16 00:25:39.069: INFO: (15) /api/v1/namespaces/proxy-9578/services/https:proxy-service-2x4vw:tlsportname1/proxy/: tls baz (200; 5.777749ms) Apr 16 00:25:39.071: INFO: (16) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:1080/proxy/: ... (200; 1.893446ms) Apr 16 00:25:39.071: INFO: (16) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:443/proxy/: test<... 
(200; 3.47282ms) Apr 16 00:25:39.073: INFO: (16) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:160/proxy/: foo (200; 3.444431ms) Apr 16 00:25:39.073: INFO: (16) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:162/proxy/: bar (200; 3.49896ms) Apr 16 00:25:39.073: INFO: (16) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v/proxy/: test (200; 3.437601ms) Apr 16 00:25:39.073: INFO: (16) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:460/proxy/: tls baz (200; 3.486892ms) Apr 16 00:25:39.073: INFO: (16) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:160/proxy/: foo (200; 3.496274ms) Apr 16 00:25:39.073: INFO: (16) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:462/proxy/: tls qux (200; 3.524999ms) Apr 16 00:25:39.073: INFO: (16) /api/v1/namespaces/proxy-9578/services/https:proxy-service-2x4vw:tlsportname1/proxy/: tls baz (200; 4.024409ms) Apr 16 00:25:39.074: INFO: (16) /api/v1/namespaces/proxy-9578/services/proxy-service-2x4vw:portname1/proxy/: foo (200; 4.206528ms) Apr 16 00:25:39.074: INFO: (16) /api/v1/namespaces/proxy-9578/services/https:proxy-service-2x4vw:tlsportname2/proxy/: tls qux (200; 4.253602ms) Apr 16 00:25:39.074: INFO: (16) /api/v1/namespaces/proxy-9578/services/http:proxy-service-2x4vw:portname1/proxy/: foo (200; 4.262029ms) Apr 16 00:25:39.074: INFO: (16) /api/v1/namespaces/proxy-9578/services/proxy-service-2x4vw:portname2/proxy/: bar (200; 4.375381ms) Apr 16 00:25:39.074: INFO: (16) /api/v1/namespaces/proxy-9578/services/http:proxy-service-2x4vw:portname2/proxy/: bar (200; 4.767053ms) Apr 16 00:25:39.078: INFO: (17) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:162/proxy/: bar (200; 3.522021ms) Apr 16 00:25:39.078: INFO: (17) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:160/proxy/: foo (200; 3.532559ms) Apr 16 00:25:39.078: INFO: (17) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:1080/proxy/: test<... 
(200; 3.818156ms) Apr 16 00:25:39.078: INFO: (17) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:443/proxy/: ... (200; 3.877026ms) Apr 16 00:25:39.078: INFO: (17) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:460/proxy/: tls baz (200; 3.978594ms) Apr 16 00:25:39.078: INFO: (17) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v/proxy/: test (200; 4.084196ms) Apr 16 00:25:39.078: INFO: (17) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:462/proxy/: tls qux (200; 4.184654ms) Apr 16 00:25:39.079: INFO: (17) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:160/proxy/: foo (200; 4.399116ms) Apr 16 00:25:39.079: INFO: (17) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:162/proxy/: bar (200; 4.395473ms) Apr 16 00:25:39.079: INFO: (17) /api/v1/namespaces/proxy-9578/services/http:proxy-service-2x4vw:portname2/proxy/: bar (200; 4.735691ms) Apr 16 00:25:39.079: INFO: (17) /api/v1/namespaces/proxy-9578/services/proxy-service-2x4vw:portname2/proxy/: bar (200; 4.681273ms) Apr 16 00:25:39.079: INFO: (17) /api/v1/namespaces/proxy-9578/services/proxy-service-2x4vw:portname1/proxy/: foo (200; 4.814857ms) Apr 16 00:25:39.079: INFO: (17) /api/v1/namespaces/proxy-9578/services/https:proxy-service-2x4vw:tlsportname2/proxy/: tls qux (200; 4.887008ms) Apr 16 00:25:39.079: INFO: (17) /api/v1/namespaces/proxy-9578/services/http:proxy-service-2x4vw:portname1/proxy/: foo (200; 4.959853ms) Apr 16 00:25:39.079: INFO: (17) /api/v1/namespaces/proxy-9578/services/https:proxy-service-2x4vw:tlsportname1/proxy/: tls baz (200; 5.101472ms) Apr 16 00:25:39.081: INFO: (18) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v/proxy/: test (200; 1.938265ms) Apr 16 00:25:39.084: INFO: (18) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:462/proxy/: tls qux (200; 4.525357ms) Apr 16 00:25:39.084: INFO: (18) 
/api/v1/namespaces/proxy-9578/services/https:proxy-service-2x4vw:tlsportname1/proxy/: tls baz (200; 4.584243ms) Apr 16 00:25:39.084: INFO: (18) /api/v1/namespaces/proxy-9578/services/http:proxy-service-2x4vw:portname2/proxy/: bar (200; 4.713062ms) Apr 16 00:25:39.085: INFO: (18) /api/v1/namespaces/proxy-9578/services/proxy-service-2x4vw:portname2/proxy/: bar (200; 5.112841ms) Apr 16 00:25:39.085: INFO: (18) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:1080/proxy/: ... (200; 5.150052ms) Apr 16 00:25:39.085: INFO: (18) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:162/proxy/: bar (200; 5.268105ms) Apr 16 00:25:39.085: INFO: (18) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:460/proxy/: tls baz (200; 5.348986ms) Apr 16 00:25:39.085: INFO: (18) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:443/proxy/: test<... (200; 5.246642ms) Apr 16 00:25:39.085: INFO: (18) /api/v1/namespaces/proxy-9578/services/proxy-service-2x4vw:portname1/proxy/: foo (200; 5.375232ms) Apr 16 00:25:39.085: INFO: (18) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:160/proxy/: foo (200; 5.353293ms) Apr 16 00:25:39.085: INFO: (18) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:160/proxy/: foo (200; 5.383475ms) Apr 16 00:25:39.085: INFO: (18) /api/v1/namespaces/proxy-9578/services/http:proxy-service-2x4vw:portname1/proxy/: foo (200; 5.398837ms) Apr 16 00:25:39.085: INFO: (18) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:162/proxy/: bar (200; 5.439096ms) Apr 16 00:25:39.090: INFO: (19) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:1080/proxy/: test<... 
(200; 4.491112ms) Apr 16 00:25:39.090: INFO: (19) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:160/proxy/: foo (200; 4.666341ms) Apr 16 00:25:39.090: INFO: (19) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:160/proxy/: foo (200; 4.628902ms) Apr 16 00:25:39.090: INFO: (19) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:462/proxy/: tls qux (200; 4.678923ms) Apr 16 00:25:39.090: INFO: (19) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:460/proxy/: tls baz (200; 4.661132ms) Apr 16 00:25:39.090: INFO: (19) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v/proxy/: test (200; 4.750936ms) Apr 16 00:25:39.090: INFO: (19) /api/v1/namespaces/proxy-9578/pods/proxy-service-2x4vw-sw72v:162/proxy/: bar (200; 4.691435ms) Apr 16 00:25:39.090: INFO: (19) /api/v1/namespaces/proxy-9578/pods/https:proxy-service-2x4vw-sw72v:443/proxy/: ... (200; 4.72159ms) Apr 16 00:25:39.090: INFO: (19) /api/v1/namespaces/proxy-9578/pods/http:proxy-service-2x4vw-sw72v:162/proxy/: bar (200; 4.761497ms) Apr 16 00:25:39.091: INFO: (19) /api/v1/namespaces/proxy-9578/services/http:proxy-service-2x4vw:portname2/proxy/: bar (200; 5.595506ms) Apr 16 00:25:39.091: INFO: (19) /api/v1/namespaces/proxy-9578/services/proxy-service-2x4vw:portname2/proxy/: bar (200; 5.533315ms) Apr 16 00:25:39.091: INFO: (19) /api/v1/namespaces/proxy-9578/services/http:proxy-service-2x4vw:portname1/proxy/: foo (200; 5.711742ms) Apr 16 00:25:39.091: INFO: (19) /api/v1/namespaces/proxy-9578/services/https:proxy-service-2x4vw:tlsportname1/proxy/: tls baz (200; 5.769647ms) Apr 16 00:25:39.091: INFO: (19) /api/v1/namespaces/proxy-9578/services/proxy-service-2x4vw:portname1/proxy/: foo (200; 5.772851ms) Apr 16 00:25:39.091: INFO: (19) /api/v1/namespaces/proxy-9578/services/https:proxy-service-2x4vw:tlsportname2/proxy/: tls qux (200; 5.948254ms) STEP: deleting ReplicationController proxy-service-2x4vw in namespace proxy-9578, will wait for the garbage 
collector to delete the pods Apr 16 00:25:39.150: INFO: Deleting ReplicationController proxy-service-2x4vw took: 7.422909ms Apr 16 00:25:39.250: INFO: Terminating ReplicationController proxy-service-2x4vw pods took: 100.258362ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:25:53.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-9578" for this suite. • [SLOW TEST:19.393 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":275,"completed":177,"skipped":2995,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:25:53.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-8a395e38-675d-4a64-8f6b-a0be7f10f614 in namespace container-probe-4921 Apr 16 00:25:57.149: INFO: Started pod liveness-8a395e38-675d-4a64-8f6b-a0be7f10f614 in namespace container-probe-4921 STEP: checking the pod's current state and verifying that restartCount is present Apr 16 00:25:57.153: INFO: Initial restart count of pod liveness-8a395e38-675d-4a64-8f6b-a0be7f10f614 is 0 Apr 16 00:26:13.190: INFO: Restart count of pod container-probe-4921/liveness-8a395e38-675d-4a64-8f6b-a0be7f10f614 is now 1 (16.037647525s elapsed) Apr 16 00:26:33.233: INFO: Restart count of pod container-probe-4921/liveness-8a395e38-675d-4a64-8f6b-a0be7f10f614 is now 2 (36.080535236s elapsed) Apr 16 00:26:53.272: INFO: Restart count of pod container-probe-4921/liveness-8a395e38-675d-4a64-8f6b-a0be7f10f614 is now 3 (56.119414995s elapsed) Apr 16 00:27:13.347: INFO: Restart count of pod container-probe-4921/liveness-8a395e38-675d-4a64-8f6b-a0be7f10f614 is now 4 (1m16.194570351s elapsed) Apr 16 00:28:13.794: INFO: Restart count of pod container-probe-4921/liveness-8a395e38-675d-4a64-8f6b-a0be7f10f614 is now 5 (2m16.641936182s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:28:13.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4921" for this suite. 
• [SLOW TEST:140.775 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":178,"skipped":3012,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:28:13.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-jzb7 STEP: Creating a pod to test atomic-volume-subpath Apr 16 00:28:13.941: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-jzb7" in namespace "subpath-5393" to be "Succeeded or Failed" Apr 16 00:28:13.944: INFO: Pod "pod-subpath-test-configmap-jzb7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.206925ms Apr 16 00:28:15.948: INFO: Pod "pod-subpath-test-configmap-jzb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006761582s Apr 16 00:28:17.952: INFO: Pod "pod-subpath-test-configmap-jzb7": Phase="Running", Reason="", readiness=true. Elapsed: 4.010728657s Apr 16 00:28:19.956: INFO: Pod "pod-subpath-test-configmap-jzb7": Phase="Running", Reason="", readiness=true. Elapsed: 6.01530793s Apr 16 00:28:21.960: INFO: Pod "pod-subpath-test-configmap-jzb7": Phase="Running", Reason="", readiness=true. Elapsed: 8.019282881s Apr 16 00:28:23.964: INFO: Pod "pod-subpath-test-configmap-jzb7": Phase="Running", Reason="", readiness=true. Elapsed: 10.023542797s Apr 16 00:28:25.969: INFO: Pod "pod-subpath-test-configmap-jzb7": Phase="Running", Reason="", readiness=true. Elapsed: 12.028147286s Apr 16 00:28:27.973: INFO: Pod "pod-subpath-test-configmap-jzb7": Phase="Running", Reason="", readiness=true. Elapsed: 14.032024131s Apr 16 00:28:29.977: INFO: Pod "pod-subpath-test-configmap-jzb7": Phase="Running", Reason="", readiness=true. Elapsed: 16.036515958s Apr 16 00:28:31.981: INFO: Pod "pod-subpath-test-configmap-jzb7": Phase="Running", Reason="", readiness=true. Elapsed: 18.040494736s Apr 16 00:28:33.985: INFO: Pod "pod-subpath-test-configmap-jzb7": Phase="Running", Reason="", readiness=true. Elapsed: 20.044250415s Apr 16 00:28:36.327: INFO: Pod "pod-subpath-test-configmap-jzb7": Phase="Running", Reason="", readiness=true. Elapsed: 22.38626359s Apr 16 00:28:38.331: INFO: Pod "pod-subpath-test-configmap-jzb7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.39056552s STEP: Saw pod success Apr 16 00:28:38.331: INFO: Pod "pod-subpath-test-configmap-jzb7" satisfied condition "Succeeded or Failed" Apr 16 00:28:38.334: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-jzb7 container test-container-subpath-configmap-jzb7: STEP: delete the pod Apr 16 00:28:38.368: INFO: Waiting for pod pod-subpath-test-configmap-jzb7 to disappear Apr 16 00:28:38.378: INFO: Pod pod-subpath-test-configmap-jzb7 no longer exists STEP: Deleting pod pod-subpath-test-configmap-jzb7 Apr 16 00:28:38.378: INFO: Deleting pod "pod-subpath-test-configmap-jzb7" in namespace "subpath-5393" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:28:38.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5393" for this suite. • [SLOW TEST:24.556 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":179,"skipped":3038,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:28:38.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:28:55.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9446" for this suite. • [SLOW TEST:17.148 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":275,"completed":180,"skipped":3060,"failed":0} SSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:28:55.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 16 00:29:00.208: INFO: Successfully updated pod "pod-update-activedeadlineseconds-e6049c8a-f769-43bc-b68b-910cee646679" Apr 16 00:29:00.208: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-e6049c8a-f769-43bc-b68b-910cee646679" in namespace "pods-3321" to be "terminated due to deadline exceeded" Apr 16 00:29:00.228: INFO: Pod "pod-update-activedeadlineseconds-e6049c8a-f769-43bc-b68b-910cee646679": Phase="Running", Reason="", readiness=true. Elapsed: 19.677008ms Apr 16 00:29:02.232: INFO: Pod "pod-update-activedeadlineseconds-e6049c8a-f769-43bc-b68b-910cee646679": Phase="Failed", Reason="DeadlineExceeded", readiness=false. 
Elapsed: 2.023891859s Apr 16 00:29:02.232: INFO: Pod "pod-update-activedeadlineseconds-e6049c8a-f769-43bc-b68b-910cee646679" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:29:02.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3321" for this suite. • [SLOW TEST:6.682 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":181,"skipped":3064,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:29:02.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image 
docker.io/library/httpd:2.4.38-alpine Apr 16 00:29:02.279: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-8416' Apr 16 00:29:02.527: INFO: stderr: "" Apr 16 00:29:02.527: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Apr 16 00:29:07.578: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-8416 -o json' Apr 16 00:29:07.672: INFO: stderr: "" Apr 16 00:29:07.672: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-16T00:29:02Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-8416\",\n \"resourceVersion\": \"8408990\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-8416/pods/e2e-test-httpd-pod\",\n \"uid\": \"2c0b1325-31d9-450d-8e2e-33d3d4d445d9\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-rb2bk\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n 
\"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-rb2bk\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-rb2bk\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-16T00:29:02Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-16T00:29:05Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-16T00:29:05Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-16T00:29:02Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://f93129a7a369554f88c5321114d4c3bd72059c3d108a592c4344355c219de1de\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-16T00:29:04Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.12\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.34\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.34\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-16T00:29:02Z\"\n }\n}\n" STEP: replace the image in the pod Apr 16 00:29:07.673: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config replace -f - 
--namespace=kubectl-8416' Apr 16 00:29:07.950: INFO: stderr: "" Apr 16 00:29:07.950: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Apr 16 00:29:07.954: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-8416' Apr 16 00:29:11.337: INFO: stderr: "" Apr 16 00:29:11.337: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:29:11.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8416" for this suite. • [SLOW TEST:9.113 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":275,"completed":182,"skipped":3068,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a 
kubernetes client Apr 16 00:29:11.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-a58e38d1-abac-427a-b01a-262a4236aa53 STEP: Creating a pod to test consume secrets Apr 16 00:29:11.417: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5d68f6a9-4599-4d9b-b848-e88d7f9b5c0b" in namespace "projected-1097" to be "Succeeded or Failed" Apr 16 00:29:11.433: INFO: Pod "pod-projected-secrets-5d68f6a9-4599-4d9b-b848-e88d7f9b5c0b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.216439ms Apr 16 00:29:13.437: INFO: Pod "pod-projected-secrets-5d68f6a9-4599-4d9b-b848-e88d7f9b5c0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020179817s Apr 16 00:29:15.442: INFO: Pod "pod-projected-secrets-5d68f6a9-4599-4d9b-b848-e88d7f9b5c0b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024634773s STEP: Saw pod success Apr 16 00:29:15.442: INFO: Pod "pod-projected-secrets-5d68f6a9-4599-4d9b-b848-e88d7f9b5c0b" satisfied condition "Succeeded or Failed" Apr 16 00:29:15.445: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-5d68f6a9-4599-4d9b-b848-e88d7f9b5c0b container projected-secret-volume-test: STEP: delete the pod Apr 16 00:29:15.568: INFO: Waiting for pod pod-projected-secrets-5d68f6a9-4599-4d9b-b848-e88d7f9b5c0b to disappear Apr 16 00:29:15.574: INFO: Pod pod-projected-secrets-5d68f6a9-4599-4d9b-b848-e88d7f9b5c0b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:29:15.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1097" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":183,"skipped":3069,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:29:15.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 16 00:29:15.831: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Apr 16 00:29:15.838: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 00:29:15.843: INFO: Number of nodes with available pods: 0 Apr 16 00:29:15.843: INFO: Node latest-worker is running more than one daemon pod Apr 16 00:29:16.848: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 00:29:16.850: INFO: Number of nodes with available pods: 0 Apr 16 00:29:16.850: INFO: Node latest-worker is running more than one daemon pod Apr 16 00:29:17.848: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 00:29:17.851: INFO: Number of nodes with available pods: 0 Apr 16 00:29:17.851: INFO: Node latest-worker is running more than one daemon pod Apr 16 00:29:18.847: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 00:29:18.851: INFO: Number of nodes with available pods: 0 Apr 16 00:29:18.851: INFO: Node latest-worker is running more than one daemon pod Apr 16 00:29:19.849: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 00:29:19.852: INFO: Number of nodes with available pods: 1 Apr 16 00:29:19.852: INFO: Node latest-worker is running more than one daemon pod Apr 16 00:29:20.886: INFO: DaemonSet pods 
can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 00:29:20.898: INFO: Number of nodes with available pods: 2 Apr 16 00:29:20.898: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Apr 16 00:29:20.935: INFO: Wrong image for pod: daemon-set-95s6m. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 16 00:29:20.935: INFO: Wrong image for pod: daemon-set-cz7wf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 16 00:29:20.947: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 00:29:21.951: INFO: Wrong image for pod: daemon-set-95s6m. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 16 00:29:21.951: INFO: Wrong image for pod: daemon-set-cz7wf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 16 00:29:21.956: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 00:29:22.951: INFO: Wrong image for pod: daemon-set-95s6m. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 16 00:29:22.951: INFO: Pod daemon-set-95s6m is not available Apr 16 00:29:22.951: INFO: Wrong image for pod: daemon-set-cz7wf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 16 00:29:22.963: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 00:29:23.951: INFO: Pod daemon-set-4gms8 is not available Apr 16 00:29:23.951: INFO: Wrong image for pod: daemon-set-cz7wf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 16 00:29:23.955: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 00:29:25.190: INFO: Pod daemon-set-4gms8 is not available Apr 16 00:29:25.190: INFO: Wrong image for pod: daemon-set-cz7wf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 16 00:29:25.478: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 00:29:25.951: INFO: Pod daemon-set-4gms8 is not available Apr 16 00:29:25.952: INFO: Wrong image for pod: daemon-set-cz7wf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 16 00:29:25.956: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 00:29:26.951: INFO: Wrong image for pod: daemon-set-cz7wf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 16 00:29:26.956: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 00:29:27.951: INFO: Wrong image for pod: daemon-set-cz7wf. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 16 00:29:27.954: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 00:29:28.951: INFO: Wrong image for pod: daemon-set-cz7wf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 16 00:29:28.951: INFO: Pod daemon-set-cz7wf is not available Apr 16 00:29:28.956: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 00:29:29.951: INFO: Wrong image for pod: daemon-set-cz7wf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 16 00:29:29.951: INFO: Pod daemon-set-cz7wf is not available Apr 16 00:29:29.956: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 00:29:30.951: INFO: Wrong image for pod: daemon-set-cz7wf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 16 00:29:30.951: INFO: Pod daemon-set-cz7wf is not available Apr 16 00:29:30.956: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 00:29:31.951: INFO: Wrong image for pod: daemon-set-cz7wf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 16 00:29:31.951: INFO: Pod daemon-set-cz7wf is not available Apr 16 00:29:31.956: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 00:29:32.963: INFO: Pod daemon-set-4h6nk is not available Apr 16 00:29:32.966: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Apr 16 00:29:32.973: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 00:29:32.995: INFO: Number of nodes with available pods: 1 Apr 16 00:29:32.995: INFO: Node latest-worker is running more than one daemon pod Apr 16 00:29:34.000: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 00:29:34.003: INFO: Number of nodes with available pods: 1 Apr 16 00:29:34.003: INFO: Node latest-worker is running more than one daemon pod Apr 16 00:29:35.003: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 00:29:35.006: INFO: Number of nodes with available pods: 1 Apr 16 00:29:35.006: INFO: Node latest-worker is running more than one daemon pod Apr 16 00:29:36.000: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 00:29:36.003: INFO: Number of nodes with available pods: 2 Apr 16 00:29:36.003: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon 
set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2410, will wait for the garbage collector to delete the pods Apr 16 00:29:36.078: INFO: Deleting DaemonSet.extensions daemon-set took: 6.024618ms Apr 16 00:29:36.378: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.246618ms Apr 16 00:29:43.081: INFO: Number of nodes with available pods: 0 Apr 16 00:29:43.081: INFO: Number of running nodes: 0, number of available pods: 0 Apr 16 00:29:43.084: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2410/daemonsets","resourceVersion":"8409238"},"items":null} Apr 16 00:29:43.087: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2410/pods","resourceVersion":"8409238"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:29:43.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2410" for this suite. 
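The repeated "Wrong image for pod" lines above come down to comparing each daemon pod's image against the updated DaemonSet spec until every pod has been rolled. A minimal, self-contained sketch of that comparison (the pod records and helper name are illustrative, not e2e framework APIs):

```python
# Sketch of the image check behind the "Wrong image for pod" log lines.
# Pod data and the helper name are illustrative, not the e2e framework's API.

EXPECTED = "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12"

def pods_with_wrong_image(pods, expected=EXPECTED):
    """Return names of daemon pods still running an image other than `expected`."""
    return [name for name, image in pods.items() if image != expected]

# State mirroring the start of the rollout in the log: both pods on the old image.
pods = {
    "daemon-set-95s6m": "docker.io/library/httpd:2.4.38-alpine",
    "daemon-set-cz7wf": "docker.io/library/httpd:2.4.38-alpine",
}
print(pods_with_wrong_image(pods))  # both pods reported until replaced one at a time
```

The rollout is complete once this list is empty, which is when the test switches to re-checking that a daemon pod is available on every schedulable node.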
• [SLOW TEST:27.518 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":184,"skipped":3116,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:29:43.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-20531e9d-5de9-4e37-9cc8-565b40cafdab in namespace container-probe-6836 Apr 16 00:29:47.256: INFO: Started pod liveness-20531e9d-5de9-4e37-9cc8-565b40cafdab in namespace container-probe-6836 STEP: checking the pod's current state and verifying that restartCount is present Apr 16 00:29:47.258: INFO: Initial restart count of pod liveness-20531e9d-5de9-4e37-9cc8-565b40cafdab is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing 
container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:33:47.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6836" for this suite. • [SLOW TEST:244.774 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":185,"skipped":3132,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:33:47.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-92d9835f-8ce6-40a6-8726-cbff16064c9b STEP: Creating configMap with name cm-test-opt-upd-a4ea0bf2-d2ae-40d4-bfe3-96c9260e4130 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-92d9835f-8ce6-40a6-8726-cbff16064c9b STEP: Updating configmap cm-test-opt-upd-a4ea0bf2-d2ae-40d4-bfe3-96c9260e4130 STEP: Creating configMap with name 
cm-test-opt-create-fb0bf8dd-3866-4790-9754-1ecaf362f27b STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:35:00.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3418" for this suite. • [SLOW TEST:72.807 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":186,"skipped":3139,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:35:00.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-secret-bkt7 STEP: Creating a pod to test atomic-volume-subpath Apr 16 00:35:00.796: INFO: Waiting up to 5m0s for pod 
"pod-subpath-test-secret-bkt7" in namespace "subpath-7265" to be "Succeeded or Failed" Apr 16 00:35:00.845: INFO: Pod "pod-subpath-test-secret-bkt7": Phase="Pending", Reason="", readiness=false. Elapsed: 48.705089ms Apr 16 00:35:02.848: INFO: Pod "pod-subpath-test-secret-bkt7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052322569s Apr 16 00:35:04.852: INFO: Pod "pod-subpath-test-secret-bkt7": Phase="Running", Reason="", readiness=true. Elapsed: 4.056282817s Apr 16 00:35:06.856: INFO: Pod "pod-subpath-test-secret-bkt7": Phase="Running", Reason="", readiness=true. Elapsed: 6.059880574s Apr 16 00:35:08.860: INFO: Pod "pod-subpath-test-secret-bkt7": Phase="Running", Reason="", readiness=true. Elapsed: 8.06424912s Apr 16 00:35:10.864: INFO: Pod "pod-subpath-test-secret-bkt7": Phase="Running", Reason="", readiness=true. Elapsed: 10.068379483s Apr 16 00:35:12.869: INFO: Pod "pod-subpath-test-secret-bkt7": Phase="Running", Reason="", readiness=true. Elapsed: 12.072419157s Apr 16 00:35:14.873: INFO: Pod "pod-subpath-test-secret-bkt7": Phase="Running", Reason="", readiness=true. Elapsed: 14.076430013s Apr 16 00:35:16.876: INFO: Pod "pod-subpath-test-secret-bkt7": Phase="Running", Reason="", readiness=true. Elapsed: 16.07960849s Apr 16 00:35:18.880: INFO: Pod "pod-subpath-test-secret-bkt7": Phase="Running", Reason="", readiness=true. Elapsed: 18.083404324s Apr 16 00:35:20.883: INFO: Pod "pod-subpath-test-secret-bkt7": Phase="Running", Reason="", readiness=true. Elapsed: 20.087363724s Apr 16 00:35:22.888: INFO: Pod "pod-subpath-test-secret-bkt7": Phase="Running", Reason="", readiness=true. Elapsed: 22.091575929s Apr 16 00:35:24.892: INFO: Pod "pod-subpath-test-secret-bkt7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.095726173s STEP: Saw pod success Apr 16 00:35:24.892: INFO: Pod "pod-subpath-test-secret-bkt7" satisfied condition "Succeeded or Failed" Apr 16 00:35:24.895: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-secret-bkt7 container test-container-subpath-secret-bkt7: STEP: delete the pod Apr 16 00:35:24.930: INFO: Waiting for pod pod-subpath-test-secret-bkt7 to disappear Apr 16 00:35:24.935: INFO: Pod pod-subpath-test-secret-bkt7 no longer exists STEP: Deleting pod pod-subpath-test-secret-bkt7 Apr 16 00:35:24.935: INFO: Deleting pod "pod-subpath-test-secret-bkt7" in namespace "subpath-7265" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:35:24.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7265" for this suite. • [SLOW TEST:24.255 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":187,"skipped":3152,"failed":0} SSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:35:24.943: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Apr 16 00:35:30.852: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:35:30.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-8445" for this suite. • [SLOW TEST:6.011 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":188,"skipped":3157,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:35:30.955: INFO: >>> kubeConfig: /root/.kube/config 
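The adopt/release behaviour verified in the ReplicaSet test above is driven by label-selector matching: the controller adopts an orphan pod whose labels satisfy its selector, and releases the pod once a matched label changes. A minimal sketch of equality-based selector matching (names are illustrative):

```python
def selector_matches(selector, labels):
    """Equality-based selector: every key/value pair in `selector` must appear in `labels`."""
    return all(labels.get(k) == v for k, v in selector.items())

# Mirrors the test: the pod is created with a 'name' label the ReplicaSet selects on.
selector = {"name": "pod-adoption-release"}

print(selector_matches(selector, {"name": "pod-adoption-release"}))  # True  -> adopted
print(selector_matches(selector, {"name": "something-else"}))        # False -> released
```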
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:35:31.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7886" for this suite.
•
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":189,"skipped":3165,"failed":0}
SSS
------------------------------
[sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:35:31.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 16 00:35:31.255: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5f099b63-d46a-4e17-96c3-109bc38a6c6a" in namespace "downward-api-1984" to be "Succeeded or Failed"
Apr 16 00:35:31.258: INFO: Pod "downwardapi-volume-5f099b63-d46a-4e17-96c3-109bc38a6c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.898693ms
Apr 16 00:35:33.338: INFO: Pod "downwardapi-volume-5f099b63-d46a-4e17-96c3-109bc38a6c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082855778s
Apr 16 00:35:35.342: INFO: Pod "downwardapi-volume-5f099b63-d46a-4e17-96c3-109bc38a6c6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.086883729s
STEP: Saw pod success
Apr 16 00:35:35.342: INFO: Pod "downwardapi-volume-5f099b63-d46a-4e17-96c3-109bc38a6c6a" satisfied condition "Succeeded or Failed"
Apr 16 00:35:35.367: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-5f099b63-d46a-4e17-96c3-109bc38a6c6a container client-container:
STEP: delete the pod
Apr 16 00:35:35.398: INFO: Waiting for pod downwardapi-volume-5f099b63-d46a-4e17-96c3-109bc38a6c6a to disappear
Apr 16 00:35:35.414: INFO: Pod downwardapi-volume-5f099b63-d46a-4e17-96c3-109bc38a6c6a no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:35:35.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1984" for this suite.
•
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":190,"skipped":3168,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:35:35.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service endpoint-test2 in namespace services-4218
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4218 to expose endpoints map[]
Apr 16 00:35:35.570: INFO: Get endpoints failed (15.870335ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Apr 16 00:35:36.618: INFO: successfully validated that service endpoint-test2 in namespace services-4218 exposes endpoints map[] (1.064040999s elapsed)
STEP: Creating pod pod1 in namespace services-4218
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4218 to expose endpoints map[pod1:[80]]
Apr 16 00:35:39.898: INFO: successfully validated that service endpoint-test2 in namespace services-4218 exposes endpoints map[pod1:[80]] (3.230349795s elapsed)
STEP: Creating pod pod2 in namespace services-4218
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4218 to expose endpoints map[pod1:[80] pod2:[80]]
Apr 16 00:35:43.032: INFO: successfully validated that service endpoint-test2 in namespace services-4218 exposes endpoints map[pod1:[80] pod2:[80]] (3.131458592s elapsed)
STEP: Deleting pod pod1 in namespace services-4218
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4218 to expose endpoints map[pod2:[80]]
Apr 16 00:35:44.083: INFO: successfully validated that service endpoint-test2 in namespace services-4218 exposes endpoints map[pod2:[80]] (1.046414563s elapsed)
STEP: Deleting pod pod2 in namespace services-4218
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4218 to expose endpoints map[]
Apr 16 00:35:45.212: INFO: successfully validated that service endpoint-test2 in namespace services-4218 exposes endpoints map[] (1.124904475s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:35:45.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4218" for this suite.
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
• [SLOW TEST:9.937 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":275,"completed":191,"skipped":3183,"failed":0}
SSS
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:35:45.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:35:50.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4984" for this suite.
• [SLOW TEST:5.138 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":192,"skipped":3186,"failed":0}
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:35:50.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Apr 16 00:35:50.577: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 16 00:35:50.601: INFO: Number of nodes with available pods: 0
Apr 16 00:35:50.601: INFO: Node latest-worker is running more than one daemon pod
Apr 16 00:35:51.606: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 16 00:35:51.609: INFO: Number of nodes with available pods: 0
Apr 16 00:35:51.609: INFO: Node latest-worker is running more than one daemon pod
Apr 16 00:35:52.606: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 16 00:35:52.609: INFO: Number of nodes with available pods: 0
Apr 16 00:35:52.609: INFO: Node latest-worker is running more than one daemon pod
Apr 16 00:35:53.632: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 16 00:35:53.636: INFO: Number of nodes with available pods: 0
Apr 16 00:35:53.636: INFO: Node latest-worker is running more than one daemon pod
Apr 16 00:35:54.605: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 16 00:35:54.609: INFO: Number of nodes with available pods: 1
Apr 16 00:35:54.609: INFO: Node latest-worker is running more than one daemon pod
Apr 16 00:35:55.605: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 16 00:35:55.608: INFO: Number of nodes with available pods: 2
Apr 16 00:35:55.608: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Apr 16 00:35:55.644: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 16 00:35:55.649: INFO: Number of nodes with available pods: 1
Apr 16 00:35:55.649: INFO: Node latest-worker is running more than one daemon pod
Apr 16 00:35:56.655: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 16 00:35:56.658: INFO: Number of nodes with available pods: 1
Apr 16 00:35:56.658: INFO: Node latest-worker is running more than one daemon pod
Apr 16 00:35:57.653: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 16 00:35:57.656: INFO: Number of nodes with available pods: 1
Apr 16 00:35:57.656: INFO: Node latest-worker is running more than one daemon pod
Apr 16 00:35:58.655: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 16 00:35:58.659: INFO: Number of nodes with available pods: 1
Apr 16 00:35:58.659: INFO: Node latest-worker is running more than one daemon pod
Apr 16 00:35:59.653: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 16 00:35:59.657: INFO: Number of nodes with available pods: 1
Apr 16 00:35:59.657: INFO: Node latest-worker is running more than one daemon pod
Apr 16 00:36:00.662: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 16 00:36:00.666: INFO: Number of nodes with available pods: 1
Apr 16 00:36:00.666: INFO: Node latest-worker is running more than one daemon pod
Apr 16 00:36:01.654: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 16 00:36:01.657: INFO: Number of nodes with available pods: 1
Apr 16 00:36:01.657: INFO: Node latest-worker is running more than one daemon pod
Apr 16 00:36:02.654: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 16 00:36:02.658: INFO: Number of nodes with available pods: 1
Apr 16 00:36:02.658: INFO: Node latest-worker is running more than one daemon pod
Apr 16 00:36:03.662: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 16 00:36:03.666: INFO: Number of nodes with available pods: 1
Apr 16 00:36:03.666: INFO: Node latest-worker is running more than one daemon pod
Apr 16 00:36:04.655: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 16 00:36:04.659: INFO: Number of nodes with available pods: 1
Apr 16 00:36:04.659: INFO: Node latest-worker is running more than one daemon pod
Apr 16 00:36:05.680: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 16 00:36:05.684: INFO: Number of nodes with available pods: 1
Apr 16 00:36:05.684: INFO: Node latest-worker is running more than one daemon pod
Apr 16 00:36:06.655: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 16 00:36:06.658: INFO: Number of nodes with available pods: 2
Apr 16 00:36:06.658: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6758, will wait for the garbage collector to delete the pods
Apr 16 00:36:06.720: INFO: Deleting DaemonSet.extensions daemon-set took: 5.720726ms
Apr 16 00:36:07.020: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.327774ms
Apr 16 00:36:13.023: INFO: Number of nodes with available pods: 0
Apr 16 00:36:13.023: INFO: Number of running nodes: 0, number of available pods: 0
Apr 16 00:36:13.026: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6758/daemonsets","resourceVersion":"8410720"},"items":null}
Apr 16 00:36:13.028: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6758/pods","resourceVersion":"8410720"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:36:13.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6758" for this suite.
• [SLOW TEST:22.547 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":193,"skipped":3192,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:36:13.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288
STEP: creating an pod
Apr 16 00:36:13.128: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-237 -- logs-generator --log-lines-total 100 --run-duration 20s'
Apr 16 00:36:15.429: INFO: stderr: ""
Apr 16 00:36:15.429: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Waiting for log generator to start.
Apr 16 00:36:15.429: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Apr 16 00:36:15.429: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-237" to be "running and ready, or succeeded"
Apr 16 00:36:15.442: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 13.153865ms
Apr 16 00:36:17.476: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046873976s
Apr 16 00:36:19.482: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.053116088s
Apr 16 00:36:19.482: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Apr 16 00:36:19.482: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for a matching strings
Apr 16 00:36:19.482: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-237'
Apr 16 00:36:19.605: INFO: stderr: ""
Apr 16 00:36:19.605: INFO: stdout: "I0416 00:36:17.626355 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/mrt 592\nI0416 00:36:17.826565 1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/gj95 337\nI0416 00:36:18.026603 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/lwp9 220\nI0416 00:36:18.226570 1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/nxbd 214\nI0416 00:36:18.426509 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/h7f8 228\nI0416 00:36:18.626545 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/s97v 324\nI0416 00:36:18.826562 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/kube-system/pods/xlw6 482\nI0416 00:36:19.026519 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/twmf 526\nI0416 00:36:19.226556 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/w9mw 450\nI0416 00:36:19.426543 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/8ng 238\n"
STEP: limiting log lines
Apr 16 00:36:19.605: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-237 --tail=1'
Apr 16 00:36:19.749: INFO: stderr: ""
Apr 16 00:36:19.749: INFO: stdout: "I0416 00:36:19.626581 1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/rwj 597\n"
Apr 16 00:36:19.749: INFO: got output "I0416 00:36:19.626581 1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/rwj 597\n"
STEP: limiting log bytes
Apr 16 00:36:19.749: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-237 --limit-bytes=1'
Apr 16 00:36:19.862: INFO: stderr: ""
Apr 16 00:36:19.862: INFO: stdout: "I"
Apr 16 00:36:19.862: INFO: got output "I"
STEP: exposing timestamps
Apr 16 00:36:19.862: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-237 --tail=1 --timestamps'
Apr 16 00:36:19.986: INFO: stderr: ""
Apr 16 00:36:19.986: INFO: stdout: "2020-04-16T00:36:19.826628856Z I0416 00:36:19.826480 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/mchs 586\n"
Apr 16 00:36:19.986: INFO: got output "2020-04-16T00:36:19.826628856Z I0416 00:36:19.826480 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/mchs 586\n"
STEP: restricting to a time range
Apr 16 00:36:22.487: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-237 --since=1s'
Apr 16 00:36:22.590: INFO: stderr: ""
Apr 16 00:36:22.590: INFO: stdout: "I0416 00:36:21.626588 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/bp2 317\nI0416 00:36:21.826572 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/rk5 263\nI0416 00:36:22.026508 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/24lm 332\nI0416 00:36:22.226571 1 logs_generator.go:76] 23 GET /api/v1/namespaces/ns/pods/xw4 325\nI0416 00:36:22.426560 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/ns/pods/qz2k 331\n"
Apr 16 00:36:22.590: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-237 --since=24h'
Apr 16 00:36:22.694: INFO: stderr: ""
Apr 16 00:36:22.694: INFO: stdout: "I0416 00:36:17.626355 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/mrt 592\nI0416 00:36:17.826565 1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/gj95 337\nI0416 00:36:18.026603 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/lwp9 220\nI0416 00:36:18.226570 1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/nxbd 214\nI0416 00:36:18.426509 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/h7f8 228\nI0416 00:36:18.626545 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/s97v 324\nI0416 00:36:18.826562 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/kube-system/pods/xlw6 482\nI0416 00:36:19.026519 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/twmf 526\nI0416 00:36:19.226556 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/w9mw 450\nI0416 00:36:19.426543 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/8ng 238\nI0416 00:36:19.626581 1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/rwj 597\nI0416 00:36:19.826480 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/mchs 586\nI0416 00:36:20.026573 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/trrx 460\nI0416 00:36:20.226558 1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/m55 334\nI0416 00:36:20.426564 1 logs_generator.go:76] 14 POST /api/v1/namespaces/ns/pods/r84d 508\nI0416 00:36:20.626517 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/x2s9 409\nI0416 00:36:20.826556 1 logs_generator.go:76] 16 GET /api/v1/namespaces/default/pods/ftp6 597\nI0416 00:36:21.026596 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/default/pods/9b2 584\nI0416 00:36:21.226564 1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/fdt 450\nI0416 00:36:21.426534 1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/df62 511\nI0416 00:36:21.626588 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/bp2 317\nI0416 00:36:21.826572 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/rk5 263\nI0416 00:36:22.026508 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/24lm 332\nI0416 00:36:22.226571 1 logs_generator.go:76] 23 GET /api/v1/namespaces/ns/pods/xw4 325\nI0416 00:36:22.426560 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/ns/pods/qz2k 331\nI0416 00:36:22.626502 1 logs_generator.go:76] 25 POST /api/v1/namespaces/kube-system/pods/rj2 326\n"
[AfterEach] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294
Apr 16 00:36:22.695: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-237'
Apr 16 00:36:32.744: INFO: stderr: ""
Apr 16 00:36:32.744: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:36:32.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-237" for this suite.
• [SLOW TEST:19.706 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284
    should be able to retrieve and filter logs [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":275,"completed":194,"skipped":3202,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:36:32.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 16 00:36:32.811: INFO: Creating ReplicaSet my-hostname-basic-f54fa630-a82f-4062-9f31-6c7139f37ca4
Apr 16 00:36:32.833: INFO: Pod name my-hostname-basic-f54fa630-a82f-4062-9f31-6c7139f37ca4: Found 0 pods out of 1
Apr 16 00:36:37.846: INFO: Pod name my-hostname-basic-f54fa630-a82f-4062-9f31-6c7139f37ca4: Found 1 pods out of 1
Apr 16 00:36:37.846: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-f54fa630-a82f-4062-9f31-6c7139f37ca4" is running
Apr 16 00:36:37.849: INFO: Pod "my-hostname-basic-f54fa630-a82f-4062-9f31-6c7139f37ca4-g8hdg" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-16 00:36:32 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-16 00:36:36 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-16 00:36:36 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-16 00:36:32 +0000 UTC Reason: Message:}])
Apr 16 00:36:37.849: INFO: Trying to dial the pod
Apr 16 00:36:42.879: INFO: Controller my-hostname-basic-f54fa630-a82f-4062-9f31-6c7139f37ca4: Got expected result from replica 1 [my-hostname-basic-f54fa630-a82f-4062-9f31-6c7139f37ca4-g8hdg]: "my-hostname-basic-f54fa630-a82f-4062-9f31-6c7139f37ca4-g8hdg", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:36:42.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6521" for this suite.
• [SLOW TEST:10.133 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":195,"skipped":3233,"failed":0}
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:36:42.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name projected-secret-test-b03f04b6-3055-4c27-b363-f6a0e6102340
STEP: Creating a pod to test consume secrets
Apr 16 00:36:42.944: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3502ae39-3dc8-48ed-9bf9-59acd55cb99d" in namespace "projected-4408" to be "Succeeded or Failed"
Apr 16 00:36:42.949: INFO: Pod "pod-projected-secrets-3502ae39-3dc8-48ed-9bf9-59acd55cb99d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.862298ms
Apr 16 00:36:44.953: INFO: Pod "pod-projected-secrets-3502ae39-3dc8-48ed-9bf9-59acd55cb99d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008064363s
Apr 16 00:36:46.957: INFO: Pod "pod-projected-secrets-3502ae39-3dc8-48ed-9bf9-59acd55cb99d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012461667s
STEP: Saw pod success
Apr 16 00:36:46.957: INFO: Pod "pod-projected-secrets-3502ae39-3dc8-48ed-9bf9-59acd55cb99d" satisfied condition "Succeeded or Failed"
Apr 16 00:36:46.961: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-3502ae39-3dc8-48ed-9bf9-59acd55cb99d container secret-volume-test:
STEP: delete the pod
Apr 16 00:36:46.991: INFO: Waiting for pod pod-projected-secrets-3502ae39-3dc8-48ed-9bf9-59acd55cb99d to disappear
Apr 16 00:36:47.003: INFO: Pod pod-projected-secrets-3502ae39-3dc8-48ed-9bf9-59acd55cb99d no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:36:47.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4408" for this suite.
•
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":196,"skipped":3233,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:36:47.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 16 00:36:47.077: INFO: Waiting up to 5m0s for pod "pod-681627b8-0fd4-417a-b729-b383197432d3" in namespace "emptydir-5530" to be "Succeeded or Failed"
Apr 16 00:36:47.099: INFO: Pod "pod-681627b8-0fd4-417a-b729-b383197432d3": Phase="Pending", Reason="", readiness=false. Elapsed: 22.095208ms
Apr 16 00:36:49.103: INFO: Pod "pod-681627b8-0fd4-417a-b729-b383197432d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025914862s
Apr 16 00:36:51.107: INFO: Pod "pod-681627b8-0fd4-417a-b729-b383197432d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030091516s
STEP: Saw pod success
Apr 16 00:36:51.107: INFO: Pod "pod-681627b8-0fd4-417a-b729-b383197432d3" satisfied condition "Succeeded or Failed"
Apr 16 00:36:51.110: INFO: Trying to get logs from node latest-worker pod pod-681627b8-0fd4-417a-b729-b383197432d3 container test-container:
STEP: delete the pod
Apr 16 00:36:51.155: INFO: Waiting for pod pod-681627b8-0fd4-417a-b729-b383197432d3 to disappear
Apr 16 00:36:51.159: INFO: Pod pod-681627b8-0fd4-417a-b729-b383197432d3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:36:51.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5530" for this suite.
•
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":197,"skipped":3286,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:36:51.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Apr 16 00:36:55.747: INFO: Successfully updated pod "labelsupdate388cbc57-c896-4bf1-a7a8-7f005d1d96a9"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:36:57.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4050" for this suite.
• [SLOW TEST:6.603 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":198,"skipped":3315,"failed":0}
S
------------------------------
[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:36:57.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-9725
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication
controller externalname-service in namespace services-9725 I0416 00:36:57.949979 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-9725, replica count: 2 I0416 00:37:01.000465 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0416 00:37:04.000690 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 16 00:37:04.000: INFO: Creating new exec pod Apr 16 00:37:09.016: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-9725 execpodd4vkm -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 16 00:37:09.265: INFO: stderr: "I0416 00:37:09.159082 2377 log.go:172] (0xc000a96000) (0xc0005ae000) Create stream\nI0416 00:37:09.159148 2377 log.go:172] (0xc000a96000) (0xc0005ae000) Stream added, broadcasting: 1\nI0416 00:37:09.161619 2377 log.go:172] (0xc000a96000) Reply frame received for 1\nI0416 00:37:09.161656 2377 log.go:172] (0xc000a96000) (0xc0005ae140) Create stream\nI0416 00:37:09.161667 2377 log.go:172] (0xc000a96000) (0xc0005ae140) Stream added, broadcasting: 3\nI0416 00:37:09.162730 2377 log.go:172] (0xc000a96000) Reply frame received for 3\nI0416 00:37:09.162770 2377 log.go:172] (0xc000a96000) (0xc00080b360) Create stream\nI0416 00:37:09.162785 2377 log.go:172] (0xc000a96000) (0xc00080b360) Stream added, broadcasting: 5\nI0416 00:37:09.163793 2377 log.go:172] (0xc000a96000) Reply frame received for 5\nI0416 00:37:09.258655 2377 log.go:172] (0xc000a96000) Data frame received for 5\nI0416 00:37:09.258703 2377 log.go:172] (0xc00080b360) (5) Data frame handling\nI0416 00:37:09.258724 2377 log.go:172] (0xc00080b360) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port 
[tcp/http] succeeded!\nI0416 00:37:09.258748 2377 log.go:172] (0xc000a96000) Data frame received for 5\nI0416 00:37:09.258766 2377 log.go:172] (0xc000a96000) Data frame received for 3\nI0416 00:37:09.258792 2377 log.go:172] (0xc0005ae140) (3) Data frame handling\nI0416 00:37:09.258846 2377 log.go:172] (0xc00080b360) (5) Data frame handling\nI0416 00:37:09.260116 2377 log.go:172] (0xc000a96000) Data frame received for 1\nI0416 00:37:09.260140 2377 log.go:172] (0xc0005ae000) (1) Data frame handling\nI0416 00:37:09.260159 2377 log.go:172] (0xc0005ae000) (1) Data frame sent\nI0416 00:37:09.260181 2377 log.go:172] (0xc000a96000) (0xc0005ae000) Stream removed, broadcasting: 1\nI0416 00:37:09.260206 2377 log.go:172] (0xc000a96000) Go away received\nI0416 00:37:09.260580 2377 log.go:172] (0xc000a96000) (0xc0005ae000) Stream removed, broadcasting: 1\nI0416 00:37:09.260597 2377 log.go:172] (0xc000a96000) (0xc0005ae140) Stream removed, broadcasting: 3\nI0416 00:37:09.260604 2377 log.go:172] (0xc000a96000) (0xc00080b360) Stream removed, broadcasting: 5\n" Apr 16 00:37:09.265: INFO: stdout: "" Apr 16 00:37:09.266: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-9725 execpodd4vkm -- /bin/sh -x -c nc -zv -t -w 2 10.96.48.106 80' Apr 16 00:37:09.453: INFO: stderr: "I0416 00:37:09.378704 2398 log.go:172] (0xc00003a6e0) (0xc0004f0b40) Create stream\nI0416 00:37:09.378759 2398 log.go:172] (0xc00003a6e0) (0xc0004f0b40) Stream added, broadcasting: 1\nI0416 00:37:09.381398 2398 log.go:172] (0xc00003a6e0) Reply frame received for 1\nI0416 00:37:09.381438 2398 log.go:172] (0xc00003a6e0) (0xc0007c12c0) Create stream\nI0416 00:37:09.381448 2398 log.go:172] (0xc00003a6e0) (0xc0007c12c0) Stream added, broadcasting: 3\nI0416 00:37:09.382410 2398 log.go:172] (0xc00003a6e0) Reply frame received for 3\nI0416 00:37:09.382441 2398 log.go:172] (0xc00003a6e0) (0xc0007c14a0) Create stream\nI0416 00:37:09.382450 2398 
log.go:172] (0xc00003a6e0) (0xc0007c14a0) Stream added, broadcasting: 5\nI0416 00:37:09.383393 2398 log.go:172] (0xc00003a6e0) Reply frame received for 5\nI0416 00:37:09.446815 2398 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0416 00:37:09.446864 2398 log.go:172] (0xc0007c12c0) (3) Data frame handling\nI0416 00:37:09.446911 2398 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0416 00:37:09.446942 2398 log.go:172] (0xc0007c14a0) (5) Data frame handling\nI0416 00:37:09.446973 2398 log.go:172] (0xc0007c14a0) (5) Data frame sent\nI0416 00:37:09.446997 2398 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0416 00:37:09.447015 2398 log.go:172] (0xc0007c14a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.48.106 80\nConnection to 10.96.48.106 80 port [tcp/http] succeeded!\nI0416 00:37:09.448246 2398 log.go:172] (0xc00003a6e0) Data frame received for 1\nI0416 00:37:09.448282 2398 log.go:172] (0xc0004f0b40) (1) Data frame handling\nI0416 00:37:09.448315 2398 log.go:172] (0xc0004f0b40) (1) Data frame sent\nI0416 00:37:09.448347 2398 log.go:172] (0xc00003a6e0) (0xc0004f0b40) Stream removed, broadcasting: 1\nI0416 00:37:09.448375 2398 log.go:172] (0xc00003a6e0) Go away received\nI0416 00:37:09.448908 2398 log.go:172] (0xc00003a6e0) (0xc0004f0b40) Stream removed, broadcasting: 1\nI0416 00:37:09.448930 2398 log.go:172] (0xc00003a6e0) (0xc0007c12c0) Stream removed, broadcasting: 3\nI0416 00:37:09.448940 2398 log.go:172] (0xc00003a6e0) (0xc0007c14a0) Stream removed, broadcasting: 5\n" Apr 16 00:37:09.453: INFO: stdout: "" Apr 16 00:37:09.453: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:37:09.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9725" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:11.736 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":199,"skipped":3316,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:37:09.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 16 00:37:09.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Apr 16 00:37:10.252: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-16T00:37:10Z generation:1 name:name1 resourceVersion:8411104 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:a186d96a-e0f6-438b-b5e7-28611fdcf94a] num:map[num1:9223372036854775807 
num2:1000000]]} STEP: Creating second CR Apr 16 00:37:20.258: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-16T00:37:20Z generation:1 name:name2 resourceVersion:8411159 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:5c85109e-f4aa-4db1-8a32-c04514fe9abd] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Apr 16 00:37:30.265: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-16T00:37:10Z generation:2 name:name1 resourceVersion:8411188 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:a186d96a-e0f6-438b-b5e7-28611fdcf94a] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Apr 16 00:37:40.271: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-16T00:37:20Z generation:2 name:name2 resourceVersion:8411218 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:5c85109e-f4aa-4db1-8a32-c04514fe9abd] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Apr 16 00:37:50.278: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-16T00:37:10Z generation:2 name:name1 resourceVersion:8411249 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:a186d96a-e0f6-438b-b5e7-28611fdcf94a] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Apr 16 00:38:00.286: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-16T00:37:20Z generation:2 name:name2 resourceVersion:8411277 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 
uid:5c85109e-f4aa-4db1-8a32-c04514fe9abd] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:38:10.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-6107" for this suite. • [SLOW TEST:61.299 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":200,"skipped":3325,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:38:10.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name 
secret-test-32df5b71-2b03-4652-99d2-4ffc0d2dc7ea STEP: Creating a pod to test consume secrets Apr 16 00:38:10.862: INFO: Waiting up to 5m0s for pod "pod-secrets-3c746e17-aca8-4bd0-ad66-a66af6cdf9a6" in namespace "secrets-1042" to be "Succeeded or Failed" Apr 16 00:38:10.893: INFO: Pod "pod-secrets-3c746e17-aca8-4bd0-ad66-a66af6cdf9a6": Phase="Pending", Reason="", readiness=false. Elapsed: 31.242894ms Apr 16 00:38:12.902: INFO: Pod "pod-secrets-3c746e17-aca8-4bd0-ad66-a66af6cdf9a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040335222s Apr 16 00:38:14.906: INFO: Pod "pod-secrets-3c746e17-aca8-4bd0-ad66-a66af6cdf9a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044589164s STEP: Saw pod success Apr 16 00:38:14.907: INFO: Pod "pod-secrets-3c746e17-aca8-4bd0-ad66-a66af6cdf9a6" satisfied condition "Succeeded or Failed" Apr 16 00:38:14.911: INFO: Trying to get logs from node latest-worker pod pod-secrets-3c746e17-aca8-4bd0-ad66-a66af6cdf9a6 container secret-volume-test: STEP: delete the pod Apr 16 00:38:14.940: INFO: Waiting for pod pod-secrets-3c746e17-aca8-4bd0-ad66-a66af6cdf9a6 to disappear Apr 16 00:38:14.945: INFO: Pod pod-secrets-3c746e17-aca8-4bd0-ad66-a66af6cdf9a6 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:38:14.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1042" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":201,"skipped":3330,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:38:14.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller Apr 16 00:38:15.002: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8111' Apr 16 00:38:15.298: INFO: stderr: "" Apr 16 00:38:15.298: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 16 00:38:15.298: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8111' Apr 16 00:38:15.427: INFO: stderr: "" Apr 16 00:38:15.427: INFO: stdout: "update-demo-nautilus-lwbv2 update-demo-nautilus-p8vp9 " Apr 16 00:38:15.427: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lwbv2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8111' Apr 16 00:38:15.521: INFO: stderr: "" Apr 16 00:38:15.521: INFO: stdout: "" Apr 16 00:38:15.521: INFO: update-demo-nautilus-lwbv2 is created but not running Apr 16 00:38:20.521: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8111' Apr 16 00:38:20.619: INFO: stderr: "" Apr 16 00:38:20.619: INFO: stdout: "update-demo-nautilus-lwbv2 update-demo-nautilus-p8vp9 " Apr 16 00:38:20.619: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lwbv2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8111' Apr 16 00:38:20.711: INFO: stderr: "" Apr 16 00:38:20.711: INFO: stdout: "true" Apr 16 00:38:20.711: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lwbv2 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8111' Apr 16 00:38:20.797: INFO: stderr: "" Apr 16 00:38:20.797: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 16 00:38:20.797: INFO: validating pod update-demo-nautilus-lwbv2 Apr 16 00:38:20.802: INFO: got data: { "image": "nautilus.jpg" } Apr 16 00:38:20.802: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 16 00:38:20.802: INFO: update-demo-nautilus-lwbv2 is verified up and running Apr 16 00:38:20.802: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p8vp9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8111' Apr 16 00:38:20.892: INFO: stderr: "" Apr 16 00:38:20.892: INFO: stdout: "true" Apr 16 00:38:20.893: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p8vp9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8111' Apr 16 00:38:20.982: INFO: stderr: "" Apr 16 00:38:20.982: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 16 00:38:20.982: INFO: validating pod update-demo-nautilus-p8vp9 Apr 16 00:38:20.985: INFO: got data: { "image": "nautilus.jpg" } Apr 16 00:38:20.985: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 16 00:38:20.985: INFO: update-demo-nautilus-p8vp9 is verified up and running STEP: scaling down the replication controller Apr 16 00:38:20.987: INFO: scanned /root for discovery docs: Apr 16 00:38:20.987: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8111' Apr 16 00:38:22.101: INFO: stderr: "" Apr 16 00:38:22.101: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 16 00:38:22.101: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8111' Apr 16 00:38:22.206: INFO: stderr: "" Apr 16 00:38:22.206: INFO: stdout: "update-demo-nautilus-lwbv2 update-demo-nautilus-p8vp9 " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 16 00:38:27.206: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8111' Apr 16 00:38:27.303: INFO: stderr: "" Apr 16 00:38:27.303: INFO: stdout: "update-demo-nautilus-lwbv2 update-demo-nautilus-p8vp9 " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 16 00:38:32.303: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8111' Apr 16 00:38:32.397: INFO: stderr: "" Apr 16 00:38:32.397: INFO: stdout: "update-demo-nautilus-lwbv2 update-demo-nautilus-p8vp9 " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 16 00:38:37.397: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8111' Apr 16 00:38:37.500: INFO: stderr: "" Apr 16 00:38:37.500: INFO: stdout: "update-demo-nautilus-lwbv2 " Apr 16 00:38:37.500: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lwbv2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8111' Apr 16 00:38:37.587: INFO: stderr: "" Apr 16 00:38:37.587: INFO: stdout: "true" Apr 16 00:38:37.587: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lwbv2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8111' Apr 16 00:38:37.673: INFO: stderr: "" Apr 16 00:38:37.673: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 16 00:38:37.673: INFO: validating pod update-demo-nautilus-lwbv2 Apr 16 00:38:37.676: INFO: got data: { "image": "nautilus.jpg" } Apr 16 00:38:37.676: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 16 00:38:37.676: INFO: update-demo-nautilus-lwbv2 is verified up and running STEP: scaling up the replication controller Apr 16 00:38:37.678: INFO: scanned /root for discovery docs: Apr 16 00:38:37.678: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-8111' Apr 16 00:38:38.815: INFO: stderr: "" Apr 16 00:38:38.815: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 16 00:38:38.815: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8111' Apr 16 00:38:38.917: INFO: stderr: "" Apr 16 00:38:38.917: INFO: stdout: "update-demo-nautilus-gpc5l update-demo-nautilus-lwbv2 " Apr 16 00:38:38.917: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gpc5l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8111' Apr 16 00:38:39.012: INFO: stderr: "" Apr 16 00:38:39.012: INFO: stdout: "" Apr 16 00:38:39.012: INFO: update-demo-nautilus-gpc5l is created but not running Apr 16 00:38:44.013: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8111' Apr 16 00:38:44.117: INFO: stderr: "" Apr 16 00:38:44.117: INFO: stdout: "update-demo-nautilus-gpc5l update-demo-nautilus-lwbv2 " Apr 16 00:38:44.118: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gpc5l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8111' Apr 16 00:38:44.217: INFO: stderr: "" Apr 16 00:38:44.217: INFO: stdout: "true" Apr 16 00:38:44.217: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gpc5l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8111' Apr 16 00:38:44.306: INFO: stderr: "" Apr 16 00:38:44.306: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 16 00:38:44.306: INFO: validating pod update-demo-nautilus-gpc5l Apr 16 00:38:44.310: INFO: got data: { "image": "nautilus.jpg" } Apr 16 00:38:44.310: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 16 00:38:44.310: INFO: update-demo-nautilus-gpc5l is verified up and running Apr 16 00:38:44.310: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lwbv2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8111' Apr 16 00:38:44.397: INFO: stderr: "" Apr 16 00:38:44.398: INFO: stdout: "true" Apr 16 00:38:44.398: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lwbv2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8111' Apr 16 00:38:44.483: INFO: stderr: "" Apr 16 00:38:44.483: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 16 00:38:44.483: INFO: validating pod update-demo-nautilus-lwbv2 Apr 16 00:38:44.486: INFO: got data: { "image": "nautilus.jpg" } Apr 16 00:38:44.486: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 16 00:38:44.486: INFO: update-demo-nautilus-lwbv2 is verified up and running STEP: using delete to clean up resources Apr 16 00:38:44.486: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8111' Apr 16 00:38:44.590: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 16 00:38:44.590: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 16 00:38:44.590: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8111' Apr 16 00:38:44.694: INFO: stderr: "No resources found in kubectl-8111 namespace.\n" Apr 16 00:38:44.694: INFO: stdout: "" Apr 16 00:38:44.694: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8111 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 16 00:38:44.794: INFO: stderr: "" Apr 16 00:38:44.794: INFO: stdout: "update-demo-nautilus-gpc5l\nupdate-demo-nautilus-lwbv2\n" Apr 16 00:38:45.294: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8111' Apr 16 00:38:45.397: INFO: stderr: "No resources found in kubectl-8111 namespace.\n" Apr 16 00:38:45.397: INFO: stdout: "" Apr 16 00:38:45.397: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8111 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 16 00:38:45.505: INFO: stderr: "" Apr 16 00:38:45.505: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:38:45.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8111" for this suite. 
• [SLOW TEST:30.562 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":275,"completed":202,"skipped":3344,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:38:45.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Apr 16 00:38:45.608: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:38:53.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1018" for this suite. 
• [SLOW TEST:7.857 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":203,"skipped":3366,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:38:53.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 16 00:38:53.447: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Apr 16 00:38:58.451: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 16 00:38:58.452: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 16 00:39:02.525: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment 
deployment-3042 /apis/apps/v1/namespaces/deployment-3042/deployments/test-cleanup-deployment a81d391a-d15c-4e0a-9452-b285b11452db 8411658 1 2020-04-16 00:38:58 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0039027e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-16 00:38:58 +0000 UTC,LastTransitionTime:2020-04-16 00:38:58 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-577c77b589" has successfully progressed.,LastUpdateTime:2020-04-16 00:39:01 +0000 UTC,LastTransitionTime:2020-04-16 
00:38:58 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 16 00:39:02.556: INFO: New ReplicaSet "test-cleanup-deployment-577c77b589" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-577c77b589 deployment-3042 /apis/apps/v1/namespaces/deployment-3042/replicasets/test-cleanup-deployment-577c77b589 3382f7d9-0f23-46df-a155-2dc73a7d0d1f 8411645 1 2020-04-16 00:38:58 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment a81d391a-d15c-4e0a-9452-b285b11452db 0xc0030668e7 0xc0030668e8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 577c77b589,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003066958 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 16 00:39:02.559: INFO: Pod "test-cleanup-deployment-577c77b589-hdw62" is available: 
&Pod{ObjectMeta:{test-cleanup-deployment-577c77b589-hdw62 test-cleanup-deployment-577c77b589- deployment-3042 /api/v1/namespaces/deployment-3042/pods/test-cleanup-deployment-577c77b589-hdw62 6e8540f6-c9f8-4f5b-a999-2259359308ac 8411644 0 2020-04-16 00:38:58 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-577c77b589 3382f7d9-0f23-46df-a155-2dc73a7d0d1f 0xc003066d67 0xc003066d68}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lclx4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lclx4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lclx4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,Term
inationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-16 00:38:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-16 00:39:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-16 00:39:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-16 00:38:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.83,StartTime:2020-04-16 00:38:58 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-16 00:39:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://e09c39031eeb978e8702696e709126adceffc35b5f3c3b7f61bf2aced6ca5d89,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.83,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:39:02.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3042" for this suite. 
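The deployment dump above shows a `RollingUpdate` strategy whose `MaxUnavailable` and `MaxSurge` are both 25% (the `25%!,(MISSING)` fragments are Go `fmt` verb artifacts in the dump, not real values). A sketch of how those percentages resolve to absolute pod counts: Kubernetes rounds `maxSurge` up and `maxUnavailable` down, so even this 1-replica deployment can surge by one pod while never dropping below one available pod. The helper name is mine, for illustration:

```python
import math

def resolve_rolling_update(replicas, max_surge="25%", max_unavailable="25%"):
    """Resolve percentage-valued rollingUpdate fields to absolute pod counts.

    maxSurge rounds up, maxUnavailable rounds down -- the rule that lets a
    1-replica deployment with the 25%/25% defaults roll without downtime.
    """
    def as_count(value, round_up):
        if isinstance(value, str) and value.endswith("%"):
            fraction = int(value[:-1]) / 100.0
            scaled = replicas * fraction
            return math.ceil(scaled) if round_up else math.floor(scaled)
        return int(value)

    return as_count(max_surge, True), as_count(max_unavailable, False)

print(resolve_rolling_update(1))   # surge 1, unavailable 0
print(resolve_rolling_update(10))  # surge 3, unavailable 2
```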
• [SLOW TEST:9.196 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":204,"skipped":3397,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:39:02.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test hostPath mode Apr 16 00:39:02.643: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1204" to be "Succeeded or Failed" Apr 16 00:39:02.647: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.459302ms Apr 16 00:39:04.650: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006517315s Apr 16 00:39:06.660: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016822269s STEP: Saw pod success Apr 16 00:39:06.660: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Apr 16 00:39:06.663: INFO: Trying to get logs from node latest-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Apr 16 00:39:06.732: INFO: Waiting for pod pod-host-path-test to disappear Apr 16 00:39:06.749: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:39:06.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-1204" for this suite. •{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":205,"skipped":3439,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:39:06.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the 
container should be terminated STEP: the termination message should be set Apr 16 00:39:10.930: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:39:10.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4862" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":206,"skipped":3448,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:39:10.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap that has name configmap-test-emptyKey-5b9d1fec-162f-48b2-b041-76f1300da03b [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:39:11.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7736" for this suite. 
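The "should fail to create ConfigMap with empty key" spec above relies on the apiserver rejecting malformed data keys. A client-side approximation of that rule (keys limited to alphanumerics, `-`, `_`, and `.`, at most 253 characters); the authoritative check lives in apiserver validation, so treat this as a sketch only:

```python
import re

# Approximation of the apiserver's ConfigMap data-key validation:
# keys must be non-empty, at most 253 characters, and contain only
# alphanumerics, '-', '_' and '.'.
KEY_RE = re.compile(r'^[-._a-zA-Z0-9]+$')
MAX_KEY_LEN = 253

def validate_configmap_keys(data):
    """Return a list of error strings; empty means every key is acceptable."""
    errors = []
    for key in data:
        if not key:
            errors.append("data key must not be empty")
        elif len(key) > MAX_KEY_LEN:
            errors.append(f"data key {key[:20]!r}... exceeds {MAX_KEY_LEN} characters")
        elif not KEY_RE.match(key):
            errors.append(f"data key {key!r} contains invalid characters")
    return errors

print(validate_configmap_keys({"": "value"}))            # rejected, empty key
print(validate_configmap_keys({"game.properties": "x"})) # accepted: []
```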
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":207,"skipped":3463,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:39:11.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 16 00:39:11.109: INFO: Waiting up to 5m0s for pod "downwardapi-volume-13f4eb86-46a9-4e57-8817-4678d4d6d22e" in namespace "downward-api-5206" to be "Succeeded or Failed" Apr 16 00:39:11.137: INFO: Pod "downwardapi-volume-13f4eb86-46a9-4e57-8817-4678d4d6d22e": Phase="Pending", Reason="", readiness=false. Elapsed: 27.747058ms Apr 16 00:39:13.140: INFO: Pod "downwardapi-volume-13f4eb86-46a9-4e57-8817-4678d4d6d22e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030969501s Apr 16 00:39:15.144: INFO: Pod "downwardapi-volume-13f4eb86-46a9-4e57-8817-4678d4d6d22e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.035030333s STEP: Saw pod success Apr 16 00:39:15.144: INFO: Pod "downwardapi-volume-13f4eb86-46a9-4e57-8817-4678d4d6d22e" satisfied condition "Succeeded or Failed" Apr 16 00:39:15.147: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-13f4eb86-46a9-4e57-8817-4678d4d6d22e container client-container: STEP: delete the pod Apr 16 00:39:15.170: INFO: Waiting for pod downwardapi-volume-13f4eb86-46a9-4e57-8817-4678d4d6d22e to disappear Apr 16 00:39:15.185: INFO: Pod downwardapi-volume-13f4eb86-46a9-4e57-8817-4678d4d6d22e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:39:15.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5206" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":208,"skipped":3466,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:39:15.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read 
extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 16 00:39:15.760: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 16 00:39:17.770: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722594355, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722594355, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722594355, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722594355, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 16 00:39:20.801: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant 
one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:39:30.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8493" for this suite. STEP: Destroying namespace "webhook-8493-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.804 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":209,"skipped":3521,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:39:30.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 16 00:39:31.736: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 16 00:39:33.747: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722594371, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722594371, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722594371, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722594371, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 16 00:39:36.778: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 16 00:39:36.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be 
denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:39:37.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2508" for this suite. STEP: Destroying namespace "webhook-2508-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.018 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":210,"skipped":3525,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:39:38.015: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 16 00:39:38.122: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d3920cb4-b206-477e-aa33-77b8588159b0" in namespace "downward-api-5718" to be "Succeeded or Failed" Apr 16 00:39:38.126: INFO: Pod "downwardapi-volume-d3920cb4-b206-477e-aa33-77b8588159b0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.520222ms Apr 16 00:39:40.131: INFO: Pod "downwardapi-volume-d3920cb4-b206-477e-aa33-77b8588159b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008171585s Apr 16 00:39:42.135: INFO: Pod "downwardapi-volume-d3920cb4-b206-477e-aa33-77b8588159b0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01257701s STEP: Saw pod success Apr 16 00:39:42.135: INFO: Pod "downwardapi-volume-d3920cb4-b206-477e-aa33-77b8588159b0" satisfied condition "Succeeded or Failed" Apr 16 00:39:42.138: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-d3920cb4-b206-477e-aa33-77b8588159b0 container client-container: STEP: delete the pod Apr 16 00:39:42.158: INFO: Waiting for pod downwardapi-volume-d3920cb4-b206-477e-aa33-77b8588159b0 to disappear Apr 16 00:39:42.162: INFO: Pod downwardapi-volume-d3920cb4-b206-477e-aa33-77b8588159b0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:39:42.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5718" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":211,"skipped":3537,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:39:42.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-4236/configmap-test-4f919b99-4922-4023-84ba-ee425f112c01 STEP: Creating a pod to test consume configMaps Apr 16 00:39:42.241: INFO: Waiting up to 
5m0s for pod "pod-configmaps-25e4ef6c-c3dc-4c20-848c-10d48caed5da" in namespace "configmap-4236" to be "Succeeded or Failed" Apr 16 00:39:42.323: INFO: Pod "pod-configmaps-25e4ef6c-c3dc-4c20-848c-10d48caed5da": Phase="Pending", Reason="", readiness=false. Elapsed: 81.484942ms Apr 16 00:39:44.327: INFO: Pod "pod-configmaps-25e4ef6c-c3dc-4c20-848c-10d48caed5da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085709291s Apr 16 00:39:46.331: INFO: Pod "pod-configmaps-25e4ef6c-c3dc-4c20-848c-10d48caed5da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.089706443s STEP: Saw pod success Apr 16 00:39:46.331: INFO: Pod "pod-configmaps-25e4ef6c-c3dc-4c20-848c-10d48caed5da" satisfied condition "Succeeded or Failed" Apr 16 00:39:46.334: INFO: Trying to get logs from node latest-worker pod pod-configmaps-25e4ef6c-c3dc-4c20-848c-10d48caed5da container env-test: STEP: delete the pod Apr 16 00:39:46.369: INFO: Waiting for pod pod-configmaps-25e4ef6c-c3dc-4c20-848c-10d48caed5da to disappear Apr 16 00:39:46.401: INFO: Pod pod-configmaps-25e4ef6c-c3dc-4c20-848c-10d48caed5da no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:39:46.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4236" for this suite. 
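The passing check above creates a ConfigMap and a pod whose `env-test` container consumes a key as an environment variable. A minimal sketch of the two objects involved, using illustrative names (`configmap-test`, `data-1`, `CONFIG_DATA_1`); the actual run generates random-suffixed names like `configmap-test-4f919b99-...`:

```yaml
# Hypothetical reconstruction of the objects this e2e case creates.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never          # the test waits for "Succeeded or Failed"
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA_1       # exposed to the container as an env var
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
```

The test then reads the container's logs (as seen in "Trying to get logs ... container env-test") and asserts the expected key/value appears in the printed environment.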
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":212,"skipped":3562,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:39:46.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-e9e7c173-54b3-45f3-a764-d7e5e5dc3338 STEP: Creating a pod to test consume secrets Apr 16 00:39:46.464: INFO: Waiting up to 5m0s for pod "pod-secrets-ea8ac5dd-cb79-4894-8b10-7c6fa8fbe3e6" in namespace "secrets-3242" to be "Succeeded or Failed" Apr 16 00:39:46.474: INFO: Pod "pod-secrets-ea8ac5dd-cb79-4894-8b10-7c6fa8fbe3e6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.042182ms Apr 16 00:39:48.478: INFO: Pod "pod-secrets-ea8ac5dd-cb79-4894-8b10-7c6fa8fbe3e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013988425s Apr 16 00:39:50.481: INFO: Pod "pod-secrets-ea8ac5dd-cb79-4894-8b10-7c6fa8fbe3e6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017665714s STEP: Saw pod success Apr 16 00:39:50.481: INFO: Pod "pod-secrets-ea8ac5dd-cb79-4894-8b10-7c6fa8fbe3e6" satisfied condition "Succeeded or Failed" Apr 16 00:39:50.484: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-ea8ac5dd-cb79-4894-8b10-7c6fa8fbe3e6 container secret-volume-test: STEP: delete the pod Apr 16 00:39:50.500: INFO: Waiting for pod pod-secrets-ea8ac5dd-cb79-4894-8b10-7c6fa8fbe3e6 to disappear Apr 16 00:39:50.504: INFO: Pod pod-secrets-ea8ac5dd-cb79-4894-8b10-7c6fa8fbe3e6 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:39:50.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3242" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":213,"skipped":3565,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:39:50.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 16 00:39:50.584: INFO: Waiting up to 5m0s for pod "pod-cb04b58d-f424-425f-a1bc-55bf723f8f87" in 
namespace "emptydir-3277" to be "Succeeded or Failed" Apr 16 00:39:50.629: INFO: Pod "pod-cb04b58d-f424-425f-a1bc-55bf723f8f87": Phase="Pending", Reason="", readiness=false. Elapsed: 44.709516ms Apr 16 00:39:52.652: INFO: Pod "pod-cb04b58d-f424-425f-a1bc-55bf723f8f87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068067797s Apr 16 00:39:54.656: INFO: Pod "pod-cb04b58d-f424-425f-a1bc-55bf723f8f87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.072194717s STEP: Saw pod success Apr 16 00:39:54.657: INFO: Pod "pod-cb04b58d-f424-425f-a1bc-55bf723f8f87" satisfied condition "Succeeded or Failed" Apr 16 00:39:54.660: INFO: Trying to get logs from node latest-worker2 pod pod-cb04b58d-f424-425f-a1bc-55bf723f8f87 container test-container: STEP: delete the pod Apr 16 00:39:54.709: INFO: Waiting for pod pod-cb04b58d-f424-425f-a1bc-55bf723f8f87 to disappear Apr 16 00:39:54.766: INFO: Pod pod-cb04b58d-f424-425f-a1bc-55bf723f8f87 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:39:54.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3277" for this suite. 
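The "(non-root,0777,default)" variant above runs a non-root pod against an `emptyDir` volume on the node's default storage medium. A rough sketch of such a pod, with assumed UID, image, and mount path (the real e2e helper uses its own test image and paths):

```yaml
# Approximation of the emptyDir 0777/default-medium test pod; names and
# the runAsUser value are illustrative, not taken from this run.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0777
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001             # "non-root" variant of the test
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume && touch /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                # default medium; the 0644/tmpfs variant
                                #   that follows uses `medium: Memory`
```

The sibling "(non-root,0644,tmpfs)" case in the next test differs only in the requested mode and in setting `emptyDir.medium: Memory`.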
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":214,"skipped":3566,"failed":0} ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:39:54.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 16 00:39:54.836: INFO: Waiting up to 5m0s for pod "pod-00f27058-1d88-4d22-8bda-45156c0c0afb" in namespace "emptydir-5392" to be "Succeeded or Failed" Apr 16 00:39:54.851: INFO: Pod "pod-00f27058-1d88-4d22-8bda-45156c0c0afb": Phase="Pending", Reason="", readiness=false. Elapsed: 15.050434ms Apr 16 00:39:56.855: INFO: Pod "pod-00f27058-1d88-4d22-8bda-45156c0c0afb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01921372s Apr 16 00:39:58.858: INFO: Pod "pod-00f27058-1d88-4d22-8bda-45156c0c0afb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.022690936s STEP: Saw pod success Apr 16 00:39:58.858: INFO: Pod "pod-00f27058-1d88-4d22-8bda-45156c0c0afb" satisfied condition "Succeeded or Failed" Apr 16 00:39:58.861: INFO: Trying to get logs from node latest-worker pod pod-00f27058-1d88-4d22-8bda-45156c0c0afb container test-container: STEP: delete the pod Apr 16 00:39:58.903: INFO: Waiting for pod pod-00f27058-1d88-4d22-8bda-45156c0c0afb to disappear Apr 16 00:39:58.922: INFO: Pod pod-00f27058-1d88-4d22-8bda-45156c0c0afb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:39:58.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5392" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":215,"skipped":3566,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:39:58.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-projected-fjkm STEP: 
Creating a pod to test atomic-volume-subpath Apr 16 00:39:59.038: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-fjkm" in namespace "subpath-7010" to be "Succeeded or Failed" Apr 16 00:39:59.042: INFO: Pod "pod-subpath-test-projected-fjkm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04043ms Apr 16 00:40:01.047: INFO: Pod "pod-subpath-test-projected-fjkm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008618615s Apr 16 00:40:03.051: INFO: Pod "pod-subpath-test-projected-fjkm": Phase="Running", Reason="", readiness=true. Elapsed: 4.012978678s Apr 16 00:40:05.055: INFO: Pod "pod-subpath-test-projected-fjkm": Phase="Running", Reason="", readiness=true. Elapsed: 6.01715222s Apr 16 00:40:07.059: INFO: Pod "pod-subpath-test-projected-fjkm": Phase="Running", Reason="", readiness=true. Elapsed: 8.021049083s Apr 16 00:40:09.063: INFO: Pod "pod-subpath-test-projected-fjkm": Phase="Running", Reason="", readiness=true. Elapsed: 10.024556589s Apr 16 00:40:11.067: INFO: Pod "pod-subpath-test-projected-fjkm": Phase="Running", Reason="", readiness=true. Elapsed: 12.028653551s Apr 16 00:40:13.071: INFO: Pod "pod-subpath-test-projected-fjkm": Phase="Running", Reason="", readiness=true. Elapsed: 14.03271459s Apr 16 00:40:15.075: INFO: Pod "pod-subpath-test-projected-fjkm": Phase="Running", Reason="", readiness=true. Elapsed: 16.036615488s Apr 16 00:40:17.078: INFO: Pod "pod-subpath-test-projected-fjkm": Phase="Running", Reason="", readiness=true. Elapsed: 18.040309121s Apr 16 00:40:19.138: INFO: Pod "pod-subpath-test-projected-fjkm": Phase="Running", Reason="", readiness=true. Elapsed: 20.100043505s Apr 16 00:40:21.142: INFO: Pod "pod-subpath-test-projected-fjkm": Phase="Running", Reason="", readiness=true. Elapsed: 22.103593845s Apr 16 00:40:23.145: INFO: Pod "pod-subpath-test-projected-fjkm": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.10728299s STEP: Saw pod success Apr 16 00:40:23.145: INFO: Pod "pod-subpath-test-projected-fjkm" satisfied condition "Succeeded or Failed" Apr 16 00:40:23.162: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-projected-fjkm container test-container-subpath-projected-fjkm: STEP: delete the pod Apr 16 00:40:23.195: INFO: Waiting for pod pod-subpath-test-projected-fjkm to disappear Apr 16 00:40:23.219: INFO: Pod pod-subpath-test-projected-fjkm no longer exists STEP: Deleting pod pod-subpath-test-projected-fjkm Apr 16 00:40:23.219: INFO: Deleting pod "pod-subpath-test-projected-fjkm" in namespace "subpath-7010" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:40:23.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7010" for this suite. • [SLOW TEST:24.296 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":216,"skipped":3604,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a 
kubernetes client Apr 16 00:40:23.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-4172 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-4172 STEP: Creating statefulset with conflicting port in namespace statefulset-4172 STEP: Waiting until pod test-pod will start running in namespace statefulset-4172 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-4172 Apr 16 00:40:27.367: INFO: Observed stateful pod in namespace: statefulset-4172, name: ss-0, uid: 0ee8102b-01ea-4fda-b977-1946028b8a59, status phase: Pending. Waiting for statefulset controller to delete. Apr 16 00:40:32.966: INFO: Observed stateful pod in namespace: statefulset-4172, name: ss-0, uid: 0ee8102b-01ea-4fda-b977-1946028b8a59, status phase: Failed. Waiting for statefulset controller to delete. Apr 16 00:40:33.044: INFO: Observed stateful pod in namespace: statefulset-4172, name: ss-0, uid: 0ee8102b-01ea-4fda-b977-1946028b8a59, status phase: Failed. Waiting for statefulset controller to delete. 
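The Pending → Failed → delete cycle logged above is driven by a port conflict: a bare pod claims a `hostPort` on the chosen node, so the StatefulSet's `ss-0` pod cannot start there, and the controller keeps deleting and recreating it until the conflicting pod is removed. A sketch of such a conflicting pod, with an illustrative port and image (the real test picks its own port and pins both pods to the node it selected):

```yaml
# Hypothetical "conflicting port" pod; the StatefulSet pod template in the
# test declares the same hostPort, which is what forces ss-0 to Failed.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  nodeName: latest-worker       # same node the StatefulSet is scheduled to
  containers:
  - name: conflict
    image: busybox
    command: ["sleep", "3600"]
    ports:
    - containerPort: 21017
      hostPort: 21017           # illustrative; only one pod per node may bind it
```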
Apr 16 00:40:33.052: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4172 STEP: Removing pod with conflicting port in namespace statefulset-4172 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-4172 and running [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 16 00:40:37.142: INFO: Deleting all statefulsets in ns statefulset-4172 Apr 16 00:40:37.145: INFO: Scaling statefulset ss to 0 Apr 16 00:40:47.161: INFO: Waiting for statefulset status.replicas updated to 0 Apr 16 00:40:47.164: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:40:47.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4172" for this suite. • [SLOW TEST:23.970 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":217,"skipped":3615,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:40:47.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:40:47.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7704" for this suite. STEP: Destroying namespace "nspatchtest-011991db-4f16-468c-931f-1399f90f34e5-9602" for this suite. •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":218,"skipped":3622,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:40:47.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-110328ed-8372-498c-a1ec-04598f262478 STEP: Creating a pod to test consume secrets Apr 16 00:40:47.394: INFO: Waiting up to 5m0s for pod "pod-secrets-9c8eb6fb-9e69-4ef8-8951-79f494745707" in namespace "secrets-6820" to be "Succeeded or Failed" Apr 16 00:40:47.404: INFO: Pod "pod-secrets-9c8eb6fb-9e69-4ef8-8951-79f494745707": Phase="Pending", Reason="", readiness=false. Elapsed: 10.150041ms Apr 16 00:40:49.408: INFO: Pod "pod-secrets-9c8eb6fb-9e69-4ef8-8951-79f494745707": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013768344s Apr 16 00:40:51.412: INFO: Pod "pod-secrets-9c8eb6fb-9e69-4ef8-8951-79f494745707": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017944374s STEP: Saw pod success Apr 16 00:40:51.412: INFO: Pod "pod-secrets-9c8eb6fb-9e69-4ef8-8951-79f494745707" satisfied condition "Succeeded or Failed" Apr 16 00:40:51.415: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-9c8eb6fb-9e69-4ef8-8951-79f494745707 container secret-volume-test: STEP: delete the pod Apr 16 00:40:51.475: INFO: Waiting for pod pod-secrets-9c8eb6fb-9e69-4ef8-8951-79f494745707 to disappear Apr 16 00:40:51.482: INFO: Pod pod-secrets-9c8eb6fb-9e69-4ef8-8951-79f494745707 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:40:51.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6820" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":219,"skipped":3666,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:40:51.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:41:07.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6039" for this suite. • [SLOW TEST:16.227 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":275,"completed":220,"skipped":3668,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:41:07.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:41:18.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3771" for this suite. • [SLOW TEST:11.142 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":275,"completed":221,"skipped":3691,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:41:18.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 16 00:41:18.931: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 16 00:41:20.837: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6344 create -f -' Apr 16 00:41:21.360: INFO: stderr: "" Apr 16 00:41:21.360: INFO: stdout: "e2e-test-crd-publish-openapi-1590-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 16 00:41:21.360: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6344 delete e2e-test-crd-publish-openapi-1590-crds test-cr' Apr 16 00:41:21.475: INFO: stderr: "" Apr 16 
00:41:21.475: INFO: stdout: "e2e-test-crd-publish-openapi-1590-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Apr 16 00:41:21.475: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6344 apply -f -' Apr 16 00:41:21.737: INFO: stderr: "" Apr 16 00:41:21.737: INFO: stdout: "e2e-test-crd-publish-openapi-1590-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 16 00:41:21.737: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6344 delete e2e-test-crd-publish-openapi-1590-crds test-cr' Apr 16 00:41:21.848: INFO: stderr: "" Apr 16 00:41:21.848: INFO: stdout: "e2e-test-crd-publish-openapi-1590-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 16 00:41:21.848: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1590-crds' Apr 16 00:41:22.124: INFO: stderr: "" Apr 16 00:41:22.124: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1590-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:41:25.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6344" for this suite. 
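The behavior exercised above (kubectl accepting a custom resource with arbitrary properties, and `kubectl explain` printing a kind/version header with an empty description) follows from publishing a CRD whose root schema preserves unknown fields. A minimal sketch, with an illustrative group and kind in place of the generated `e2e-test-crd-publish-openapi-1590-crd` names:

```yaml
# Illustrative CRD preserving unknown fields at the schema root.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        # Root-level opt-out of pruning: any unknown property on a
        # Widget passes client- and server-side validation.
        x-kubernetes-preserve-unknown-fields: true
```

With this schema, the published OpenAPI document carries no per-field descriptions, which is why the `kubectl explain` stdout above shows only KIND, VERSION, and a blank DESCRIPTION.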
• [SLOW TEST:6.156 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":222,"skipped":3694,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:41:25.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1117.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1117.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1117.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1117.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1117.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1117.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1117.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1117.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1117.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1117.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1117.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 79.154.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.154.79_udp@PTR;check="$$(dig +tcp +noall +answer +search 79.154.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.154.79_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1117.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1117.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1117.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1117.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1117.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1117.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1117.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1117.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1117.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1117.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1117.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 79.154.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.154.79_udp@PTR;check="$$(dig +tcp +noall +answer +search 79.154.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.154.79_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 16 00:41:31.201: INFO: Unable to read wheezy_udp@dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:31.208: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:31.212: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:31.215: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:31.245: INFO: Unable to read jessie_udp@dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:31.248: INFO: Unable to read jessie_tcp@dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:31.250: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local from pod 
dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:31.252: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:31.268: INFO: Lookups using dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2 failed for: [wheezy_udp@dns-test-service.dns-1117.svc.cluster.local wheezy_tcp@dns-test-service.dns-1117.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local jessie_udp@dns-test-service.dns-1117.svc.cluster.local jessie_tcp@dns-test-service.dns-1117.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local] Apr 16 00:41:36.273: INFO: Unable to read wheezy_udp@dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:36.277: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:36.280: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:36.283: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local from pod 
dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:36.304: INFO: Unable to read jessie_udp@dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:36.307: INFO: Unable to read jessie_tcp@dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:36.310: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:36.313: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:36.331: INFO: Lookups using dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2 failed for: [wheezy_udp@dns-test-service.dns-1117.svc.cluster.local wheezy_tcp@dns-test-service.dns-1117.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local jessie_udp@dns-test-service.dns-1117.svc.cluster.local jessie_tcp@dns-test-service.dns-1117.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local] Apr 16 00:41:41.272: INFO: Unable to read wheezy_udp@dns-test-service.dns-1117.svc.cluster.local from pod 
dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:41.274: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:41.277: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:41.280: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:41.297: INFO: Unable to read jessie_udp@dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:41.299: INFO: Unable to read jessie_tcp@dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:41.302: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:41.305: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not 
find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:41.322: INFO: Lookups using dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2 failed for: [wheezy_udp@dns-test-service.dns-1117.svc.cluster.local wheezy_tcp@dns-test-service.dns-1117.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local jessie_udp@dns-test-service.dns-1117.svc.cluster.local jessie_tcp@dns-test-service.dns-1117.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local] Apr 16 00:41:46.272: INFO: Unable to read wheezy_udp@dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:46.276: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:46.280: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:46.283: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:46.303: INFO: Unable to read jessie_udp@dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods 
dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:46.306: INFO: Unable to read jessie_tcp@dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:46.309: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:46.312: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:46.330: INFO: Lookups using dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2 failed for: [wheezy_udp@dns-test-service.dns-1117.svc.cluster.local wheezy_tcp@dns-test-service.dns-1117.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local jessie_udp@dns-test-service.dns-1117.svc.cluster.local jessie_tcp@dns-test-service.dns-1117.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local] Apr 16 00:41:51.275: INFO: Unable to read wheezy_udp@dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:51.279: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods 
dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:51.281: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:51.284: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:51.303: INFO: Unable to read jessie_udp@dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:51.306: INFO: Unable to read jessie_tcp@dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:51.308: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:51.310: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:51.323: INFO: Lookups using dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2 failed for: [wheezy_udp@dns-test-service.dns-1117.svc.cluster.local wheezy_tcp@dns-test-service.dns-1117.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local 
wheezy_tcp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local jessie_udp@dns-test-service.dns-1117.svc.cluster.local jessie_tcp@dns-test-service.dns-1117.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local] Apr 16 00:41:56.272: INFO: Unable to read wheezy_udp@dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:56.275: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:56.279: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:56.282: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:56.297: INFO: Unable to read jessie_udp@dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:56.300: INFO: Unable to read jessie_tcp@dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:56.303: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:56.306: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local from pod dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2: the server could not find the requested resource (get pods dns-test-de3a582f-9048-43b7-a252-ef0059addad2) Apr 16 00:41:56.325: INFO: Lookups using dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2 failed for: [wheezy_udp@dns-test-service.dns-1117.svc.cluster.local wheezy_tcp@dns-test-service.dns-1117.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local jessie_udp@dns-test-service.dns-1117.svc.cluster.local jessie_tcp@dns-test-service.dns-1117.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1117.svc.cluster.local] Apr 16 00:42:01.336: INFO: DNS probes using dns-1117/dns-test-de3a582f-9048-43b7-a252-ef0059addad2 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:42:01.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1117" for this suite. 
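The probe commands recorded above build two DNS names by hand: a PTR query name from the service's cluster IP (10.96.154.79 becomes 79.154.96.10.in-addr.arpa) and a pod A-record name from the pod IP with dots replaced by dashes. A minimal Python sketch of both constructions — the namespace dns-1117 is taken from the log, the helper names and the sample pod IP are illustrative:

```python
def ptr_name(ipv4: str) -> str:
    """Reverse the octets of an IPv4 address into its in-addr.arpa PTR query name."""
    return ".".join(reversed(ipv4.split("."))) + ".in-addr.arpa"

def pod_a_record(ipv4: str, namespace: str, cluster_domain: str = "cluster.local") -> str:
    """Build a pod A-record name: dots in the pod IP become dashes."""
    return f'{ipv4.replace(".", "-")}.{namespace}.pod.{cluster_domain}'

# Mirrors the names queried by the wheezy/jessie probe loops above.
print(ptr_name("10.96.154.79"))                 # 79.154.96.10.in-addr.arpa
print(pod_a_record("10.244.1.5", "dns-1117"))   # 10-244-1-5.dns-1117.pod.cluster.local
```

This matches what the probe's `awk -F. '{print $1"-"$2"-"$3"-"$4"..."}'` pipeline does inline for the pod A record.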
• [SLOW TEST:36.983 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":275,"completed":223,"skipped":3701,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:42:01.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:42:02.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2236" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":275,"completed":224,"skipped":3711,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:42:02.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 16 00:42:02.256: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 16 00:42:02.276: INFO: Waiting for terminating namespaces to be deleted... 
Apr 16 00:42:02.279: INFO: Logging pods the kubelet thinks are on node latest-worker before test Apr 16 00:42:02.301: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 16 00:42:02.301: INFO: Container kindnet-cni ready: true, restart count 0 Apr 16 00:42:02.301: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 16 00:42:02.301: INFO: Container kube-proxy ready: true, restart count 0 Apr 16 00:42:02.301: INFO: Logging pods the kubelet thinks are on node latest-worker2 before test Apr 16 00:42:02.314: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 16 00:42:02.314: INFO: Container kindnet-cni ready: true, restart count 0 Apr 16 00:42:02.314: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 16 00:42:02.314: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
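The predicate this test exercises treats two hostPorts as conflicting only when protocol, port, and effective host IP all collide (0.0.0.0 wildcards every address). A simplified model of that check — not the scheduler's actual code, just the rule the pod1/pod2/pod3 steps demonstrate:

```python
def host_ports_conflict(a, b):
    """a, b: (hostIP, hostPort, protocol) tuples. "0.0.0.0" matches any host IP."""
    ip_a, port_a, proto_a = a
    ip_b, port_b, proto_b = b
    if port_a != port_b or proto_a != proto_b:
        return False
    return ip_a == ip_b or ip_a == "0.0.0.0" or ip_b == "0.0.0.0"

pod1 = ("127.0.0.1", 54321, "TCP")
pod2 = ("127.0.0.2", 54321, "TCP")   # same port, different hostIP: schedulable together
pod3 = ("127.0.0.2", 54321, "UDP")   # same IP and port as pod2, different protocol: also fine
print(host_ports_conflict(pod1, pod2))  # False
print(host_ports_conflict(pod2, pod3))  # False
```

Only an exact (protocol, port) match on the same or a wildcard host IP would keep a pod unscheduled, which is why all three pods above land on the same node.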
STEP: verifying the node has the label kubernetes.io/e2e-affabec1-6e4c-4cde-a90c-4349a08cd35a 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-affabec1-6e4c-4cde-a90c-4349a08cd35a off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-affabec1-6e4c-4cde-a90c-4349a08cd35a [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:42:18.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7622" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:16.530 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":225,"skipped":3724,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:42:18.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-c1e18a62-d64c-4967-9c71-e70642b4a8e6 STEP: Creating a pod to test consume configMaps Apr 16 00:42:18.792: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-92edc8fd-6d24-42c7-b1e6-20b0a0a00781" in namespace "projected-1488" to be "Succeeded or Failed" Apr 16 00:42:18.796: INFO: Pod "pod-projected-configmaps-92edc8fd-6d24-42c7-b1e6-20b0a0a00781": Phase="Pending", Reason="", readiness=false. Elapsed: 3.862866ms Apr 16 00:42:20.800: INFO: Pod "pod-projected-configmaps-92edc8fd-6d24-42c7-b1e6-20b0a0a00781": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007631973s Apr 16 00:42:22.804: INFO: Pod "pod-projected-configmaps-92edc8fd-6d24-42c7-b1e6-20b0a0a00781": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012031241s STEP: Saw pod success Apr 16 00:42:22.804: INFO: Pod "pod-projected-configmaps-92edc8fd-6d24-42c7-b1e6-20b0a0a00781" satisfied condition "Succeeded or Failed" Apr 16 00:42:22.807: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-92edc8fd-6d24-42c7-b1e6-20b0a0a00781 container projected-configmap-volume-test: STEP: delete the pod Apr 16 00:42:22.842: INFO: Waiting for pod pod-projected-configmaps-92edc8fd-6d24-42c7-b1e6-20b0a0a00781 to disappear Apr 16 00:42:22.850: INFO: Pod pod-projected-configmaps-92edc8fd-6d24-42c7-b1e6-20b0a0a00781 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:42:22.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1488" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":226,"skipped":3726,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:42:22.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 16 00:42:23.402: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 16 00:42:25.579: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722594543, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722594543, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722594543, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722594543, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 16 00:42:28.602: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the 
validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:42:28.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1731" for this suite. STEP: Destroying namespace "webhook-1731-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.955 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":227,"skipped":3805,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:42:28.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition 
STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:42:28.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7294" for this suite. 
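The CustomResourceDefinition discovery test above walks the aggregated discovery documents (`/apis`, `/apis/apiextensions.k8s.io`, `/apis/apiextensions.k8s.io/v1`) and asserts that the `apiextensions.k8s.io` group, its `v1` group/version, and the `customresourcedefinitions` resource all appear. A minimal sketch of that traversal over a stubbed discovery payload (the dict below is an abbreviated, illustrative `/apis` response, not the apiserver's full output):

```python
# Stubbed, abbreviated APIGroupList as returned by GET /apis.
apis_doc = {
    "kind": "APIGroupList",
    "groups": [
        {
            "name": "apiextensions.k8s.io",
            "versions": [{"groupVersion": "apiextensions.k8s.io/v1", "version": "v1"}],
            "preferredVersion": {"groupVersion": "apiextensions.k8s.io/v1", "version": "v1"},
        },
    ],
}

def find_group(doc, name):
    """Return the API group entry with the given name, or None."""
    return next((g for g in doc["groups"] if g["name"] == name), None)

def has_group_version(group, gv):
    """Check whether a group advertises the given groupVersion string."""
    return any(v["groupVersion"] == gv for v in group["versions"])

group = find_group(apis_doc, "apiextensions.k8s.io")
assert group is not None
assert has_group_version(group, "apiextensions.k8s.io/v1")
```

Against a live cluster the same payload can be fetched with `kubectl get --raw /apis`.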
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":228,"skipped":3811,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:42:28.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 16 00:42:28.973: INFO: Waiting up to 5m0s for pod "pod-69e688c7-f7ca-4f73-ba44-5b633f60ce85" in namespace "emptydir-904" to be "Succeeded or Failed" Apr 16 00:42:28.982: INFO: Pod "pod-69e688c7-f7ca-4f73-ba44-5b633f60ce85": Phase="Pending", Reason="", readiness=false. Elapsed: 8.95143ms Apr 16 00:42:31.602: INFO: Pod "pod-69e688c7-f7ca-4f73-ba44-5b633f60ce85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.629139586s Apr 16 00:42:33.606: INFO: Pod "pod-69e688c7-f7ca-4f73-ba44-5b633f60ce85": Phase="Pending", Reason="", readiness=false. Elapsed: 4.632877309s Apr 16 00:42:35.610: INFO: Pod "pod-69e688c7-f7ca-4f73-ba44-5b633f60ce85": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.636891973s STEP: Saw pod success Apr 16 00:42:35.610: INFO: Pod "pod-69e688c7-f7ca-4f73-ba44-5b633f60ce85" satisfied condition "Succeeded or Failed" Apr 16 00:42:35.612: INFO: Trying to get logs from node latest-worker pod pod-69e688c7-f7ca-4f73-ba44-5b633f60ce85 container test-container: STEP: delete the pod Apr 16 00:42:35.649: INFO: Waiting for pod pod-69e688c7-f7ca-4f73-ba44-5b633f60ce85 to disappear Apr 16 00:42:35.659: INFO: Pod pod-69e688c7-f7ca-4f73-ba44-5b633f60ce85 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:42:35.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-904" for this suite. • [SLOW TEST:6.799 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":229,"skipped":3831,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:42:35.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be 
provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 16 00:42:35.780: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-3fd7a1a2-288c-4537-910e-23f0f1547cc4" in namespace "security-context-test-3926" to be "Succeeded or Failed" Apr 16 00:42:35.797: INFO: Pod "alpine-nnp-false-3fd7a1a2-288c-4537-910e-23f0f1547cc4": Phase="Pending", Reason="", readiness=false. Elapsed: 16.57447ms Apr 16 00:42:37.941: INFO: Pod "alpine-nnp-false-3fd7a1a2-288c-4537-910e-23f0f1547cc4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160406786s Apr 16 00:42:39.944: INFO: Pod "alpine-nnp-false-3fd7a1a2-288c-4537-910e-23f0f1547cc4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.163406542s Apr 16 00:42:39.944: INFO: Pod "alpine-nnp-false-3fd7a1a2-288c-4537-910e-23f0f1547cc4" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:42:39.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3926" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":230,"skipped":3841,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:42:39.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Apr 16 00:42:40.073: INFO: >>> kubeConfig: /root/.kube/config Apr 16 00:42:42.019: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:42:52.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-710" for this suite. 
• [SLOW TEST:12.620 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":231,"skipped":3901,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:42:52.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Apr 16 00:42:59.183: INFO: Successfully updated pod "adopt-release-nhc8f" STEP: Checking that the Job readopts the Pod Apr 16 00:42:59.183: INFO: Waiting up to 15m0s for pod "adopt-release-nhc8f" in namespace "job-3703" to be "adopted" Apr 16 00:42:59.198: INFO: Pod "adopt-release-nhc8f": Phase="Running", Reason="", readiness=true. Elapsed: 14.949964ms Apr 16 00:43:01.203: INFO: Pod "adopt-release-nhc8f": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.019462904s Apr 16 00:43:01.203: INFO: Pod "adopt-release-nhc8f" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Apr 16 00:43:01.711: INFO: Successfully updated pod "adopt-release-nhc8f" STEP: Checking that the Job releases the Pod Apr 16 00:43:01.711: INFO: Waiting up to 15m0s for pod "adopt-release-nhc8f" in namespace "job-3703" to be "released" Apr 16 00:43:01.720: INFO: Pod "adopt-release-nhc8f": Phase="Running", Reason="", readiness=true. Elapsed: 8.531232ms Apr 16 00:43:03.724: INFO: Pod "adopt-release-nhc8f": Phase="Running", Reason="", readiness=true. Elapsed: 2.012543349s Apr 16 00:43:03.724: INFO: Pod "adopt-release-nhc8f" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:43:03.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3703" for this suite. • [SLOW TEST:11.139 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":232,"skipped":3932,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:43:03.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC Apr 16 00:43:03.923: INFO: namespace kubectl-3702 Apr 16 00:43:03.923: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3702' Apr 16 00:43:04.229: INFO: stderr: "" Apr 16 00:43:04.229: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Apr 16 00:43:05.234: INFO: Selector matched 1 pods for map[app:agnhost] Apr 16 00:43:05.234: INFO: Found 0 / 1 Apr 16 00:43:06.233: INFO: Selector matched 1 pods for map[app:agnhost] Apr 16 00:43:06.233: INFO: Found 0 / 1 Apr 16 00:43:07.234: INFO: Selector matched 1 pods for map[app:agnhost] Apr 16 00:43:07.234: INFO: Found 1 / 1 Apr 16 00:43:07.234: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 16 00:43:07.237: INFO: Selector matched 1 pods for map[app:agnhost] Apr 16 00:43:07.237: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Apr 16 00:43:07.237: INFO: wait on agnhost-master startup in kubectl-3702 Apr 16 00:43:07.237: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs agnhost-master-tw85p agnhost-master --namespace=kubectl-3702' Apr 16 00:43:07.360: INFO: stderr: "" Apr 16 00:43:07.360: INFO: stdout: "Paused\n" STEP: exposing RC Apr 16 00:43:07.360: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-3702' Apr 16 00:43:07.478: INFO: stderr: "" Apr 16 00:43:07.479: INFO: stdout: "service/rm2 exposed\n" Apr 16 00:43:07.483: INFO: Service rm2 in namespace kubectl-3702 found. STEP: exposing service Apr 16 00:43:09.491: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-3702' Apr 16 00:43:09.617: INFO: stderr: "" Apr 16 00:43:09.617: INFO: stdout: "service/rm3 exposed\n" Apr 16 00:43:09.621: INFO: Service rm3 in namespace kubectl-3702 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:43:11.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3702" for this suite. 
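The two `kubectl expose` invocations in the record above derive a Service from an existing object: the selector is copied from the source (the RC's `app=agnhost` label, per the selector match earlier in the log) and `--port`/`--target-port` become the Service's port mapping. A rough sketch of that derivation; the `expose` helper is hypothetical, but the field names follow the Kubernetes Service API:

```python
def expose(selector, name, port, target_port):
    """Derive a minimal Service spec the way `kubectl expose` does:
    copy the source object's selector and map --port to --target-port."""
    return {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": name},
        "spec": {
            "selector": dict(selector),
            "ports": [{"protocol": "TCP", "port": port, "targetPort": target_port}],
        },
    }

# Mirror the log: expose rc agnhost-master as rm2, then service rm2 as rm3.
rc_selector = {"app": "agnhost"}
rm2 = expose(rc_selector, "rm2", 1234, 6379)
rm3 = expose(rm2["spec"]["selector"], "rm3", 2345, 6379)
assert rm3["spec"]["ports"][0]["targetPort"] == 6379
```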
• [SLOW TEST:7.903 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":275,"completed":233,"skipped":3973,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:43:11.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:43:11.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9366" for this suite. 
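The Secrets test above patches a label onto a secret and then deletes it "using a LabelSelector". Kubernetes equality-based label selection is a simple subset match: every key=value pair in the selector must appear on the object's labels. A toy model over stub secret objects (the secret names and label keys here are illustrative, not taken from the test's actual fixtures):

```python
def matches(labels, selector):
    """Equality-based label selector: every selector key=value pair
    must be present on the object's labels."""
    return all(labels.get(k) == v for k, v in selector.items())

secrets = [
    {"metadata": {"name": "s1", "labels": {}}},
    {"metadata": {"name": "s2", "labels": {}}},
]

# Patch one secret with a label, then list/delete by that label.
secrets[0]["metadata"]["labels"]["patched"] = "true"
selected = [s for s in secrets if matches(s["metadata"]["labels"], {"patched": "true"})]
remaining = [s for s in secrets if s not in selected]
assert [s["metadata"]["name"] for s in selected] == ["s1"]
assert [s["metadata"]["name"] for s in remaining] == ["s2"]
```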
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":234,"skipped":3999,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:43:11.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:43:15.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-4481" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":235,"skipped":4006,"failed":0} S ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:43:15.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-7ddbf734-cca4-42c1-adb0-e3d551c7c47d in namespace container-probe-8079 Apr 16 00:43:20.039: INFO: Started pod busybox-7ddbf734-cca4-42c1-adb0-e3d551c7c47d in namespace container-probe-8079 STEP: checking the pod's current state and verifying that restartCount is present Apr 16 00:43:20.042: INFO: Initial restart count of pod busybox-7ddbf734-cca4-42c1-adb0-e3d551c7c47d is 0 Apr 16 00:44:12.152: INFO: Restart count of pod container-probe-8079/busybox-7ddbf734-cca4-42c1-adb0-e3d551c7c47d is now 1 (52.109527265s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:44:12.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8079" for this suite. 
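In the liveness-probe test above, the busybox container creates `/tmp/health`, keeps it around briefly, then removes it, so the `cat /tmp/health` exec probe starts failing and the kubelet restarts the container (restartCount goes 0 → 1 after ~52s in the log). A toy model of the kubelet's failure-threshold accounting, with no cluster involved; the default `failureThreshold` of 3 is from the Kubernetes probe documentation:

```python
def run_probes(results, failure_threshold=3):
    """Count restarts the way kubelet liveness handling does: after
    `failure_threshold` consecutive probe failures, restart the
    container and reset the failure counter."""
    restarts = consecutive_failures = 0
    for ok in results:
        if ok:
            consecutive_failures = 0
        else:
            consecutive_failures += 1
            if consecutive_failures == failure_threshold:
                restarts += 1
                consecutive_failures = 0
    return restarts

# Probe succeeds while /tmp/health exists, then fails after it is removed.
assert run_probes([True, True, True, False, False, False]) == 1
```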
• [SLOW TEST:56.273 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":236,"skipped":4007,"failed":0} SS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:44:12.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:44:16.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6240" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":237,"skipped":4009,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:44:16.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-208802c7-135e-4ac9-ae12-c827956cc461 STEP: Creating a pod to test consume configMaps Apr 16 00:44:16.499: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2eeca578-466d-4c21-95f8-9f1e066590a0" in namespace "projected-3947" to be "Succeeded or Failed" Apr 16 00:44:16.528: INFO: Pod "pod-projected-configmaps-2eeca578-466d-4c21-95f8-9f1e066590a0": Phase="Pending", Reason="", readiness=false. Elapsed: 28.677265ms Apr 16 00:44:18.531: INFO: Pod "pod-projected-configmaps-2eeca578-466d-4c21-95f8-9f1e066590a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032408062s Apr 16 00:44:20.535: INFO: Pod "pod-projected-configmaps-2eeca578-466d-4c21-95f8-9f1e066590a0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.035833104s STEP: Saw pod success Apr 16 00:44:20.535: INFO: Pod "pod-projected-configmaps-2eeca578-466d-4c21-95f8-9f1e066590a0" satisfied condition "Succeeded or Failed" Apr 16 00:44:20.538: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-2eeca578-466d-4c21-95f8-9f1e066590a0 container projected-configmap-volume-test: STEP: delete the pod Apr 16 00:44:20.578: INFO: Waiting for pod pod-projected-configmaps-2eeca578-466d-4c21-95f8-9f1e066590a0 to disappear Apr 16 00:44:20.593: INFO: Pod pod-projected-configmaps-2eeca578-466d-4c21-95f8-9f1e066590a0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:44:20.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3947" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":238,"skipped":4024,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:44:20.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name 
cm-test-opt-del-2dbd9513-892b-4d93-8b94-1ec8df01b128 STEP: Creating configMap with name cm-test-opt-upd-f840b6ac-95b0-436a-a8bb-9c1b17730394 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-2dbd9513-892b-4d93-8b94-1ec8df01b128 STEP: Updating configmap cm-test-opt-upd-f840b6ac-95b0-436a-a8bb-9c1b17730394 STEP: Creating configMap with name cm-test-opt-create-a16d775b-7ec9-4eca-a0db-b68cf1b79ba4 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:44:28.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3777" for this suite. • [SLOW TEST:8.239 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":239,"skipped":4084,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:44:28.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl 
client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating cluster-info Apr 16 00:44:28.909: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config cluster-info' Apr 16 00:44:29.013: INFO: stderr: "" Apr 16 00:44:29.013: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:44:29.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1770" for this suite. 
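The `kubectl cluster-info` stdout captured above is wrapped in ANSI SGR color escapes (`\x1b[0;32m` … `\x1b[0m`), so validating it means checking the uncolored text. A small sketch that strips those escapes with a regex, applied to the exact string from the log:

```python
import re

# Matches ANSI SGR (color/style) escape sequences only.
ANSI_SGR = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(s):
    """Remove ANSI SGR escape sequences, leaving plain text."""
    return ANSI_SGR.sub("", s)

stdout = ("\x1b[0;32mKubernetes master\x1b[0m is running at "
          "\x1b[0;33mhttps://172.30.12.66:32771\x1b[0m\n")
plain = strip_ansi(stdout)
assert plain == "Kubernetes master is running at https://172.30.12.66:32771\n"
```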
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":275,"completed":240,"skipped":4128,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:44:29.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 16 00:44:29.048: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6041' Apr 16 00:44:29.288: INFO: stderr: "" Apr 16 00:44:29.288: INFO: stdout: "replicationcontroller/agnhost-master created\n" Apr 16 00:44:29.288: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6041' Apr 16 00:44:29.582: INFO: stderr: "" Apr 16 00:44:29.582: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Apr 16 00:44:30.585: INFO: Selector matched 1 pods for map[app:agnhost] Apr 16 00:44:30.585: INFO: Found 0 / 1 Apr 16 00:44:31.591: INFO: Selector matched 1 pods for map[app:agnhost] Apr 16 00:44:31.591: INFO: Found 0 / 1 Apr 16 00:44:32.587: INFO: Selector matched 1 pods for map[app:agnhost] Apr 16 00:44:32.587: INFO: Found 1 / 1 Apr 16 00:44:32.587: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 16 00:44:32.590: INFO: Selector matched 1 pods for map[app:agnhost] Apr 16 00:44:32.590: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 16 00:44:32.590: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe pod agnhost-master-ckd84 --namespace=kubectl-6041' Apr 16 00:44:32.714: INFO: stderr: "" Apr 16 00:44:32.714: INFO: stdout: "Name: agnhost-master-ckd84\nNamespace: kubectl-6041\nPriority: 0\nNode: latest-worker/172.17.0.13\nStart Time: Thu, 16 Apr 2020 00:44:29 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.101\nIPs:\n IP: 10.244.2.101\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://13354aea99b526cd1bafb06a92c76e02ac0aee9e9e08af795b607463f2bfa770\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 16 Apr 2020 00:44:31 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-xrh99 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-xrh99:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-xrh99\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: 
\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-6041/agnhost-master-ckd84 to latest-worker\n Normal Pulled 2s kubelet, latest-worker Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n Normal Created 1s kubelet, latest-worker Created container agnhost-master\n Normal Started 1s kubelet, latest-worker Started container agnhost-master\n" Apr 16 00:44:32.715: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-6041' Apr 16 00:44:32.845: INFO: stderr: "" Apr 16 00:44:32.845: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-6041\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-master-ckd84\n" Apr 16 00:44:32.846: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-6041' Apr 16 00:44:32.946: INFO: stderr: "" Apr 16 00:44:32.946: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-6041\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.141.151\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.101:6379\nSession Affinity: 
None\nEvents: \n" Apr 16 00:44:32.950: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe node latest-control-plane' Apr 16 00:44:33.070: INFO: stderr: "" Apr 16 00:44:33.070: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:27:32 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Thu, 16 Apr 2020 00:44:25 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Thu, 16 Apr 2020 00:42:48 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 16 Apr 2020 00:42:48 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 16 Apr 2020 00:42:48 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 16 Apr 2020 00:42:48 +0000 Sun, 15 Mar 2020 18:28:05 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 
96fd1b5d260b433d8f617f455164eb5a\n System UUID: 611bedf3-8581-4e6e-a43b-01a437bb59ad\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.17.0\n Kube-Proxy Version: v1.17.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-f7wtl 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 31d\n kube-system coredns-6955765f44-lq4t7 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 31d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 31d\n kube-system kindnet-sx5s7 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 31d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 31d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 31d\n kube-system kube-proxy-jpqvf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 31d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 31d\n local-path-storage local-path-provisioner-7745554f7f-fmsmz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 31d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Apr 16 00:44:33.070: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe namespace kubectl-6041' Apr 16 00:44:33.190: INFO: stderr: "" Apr 16 00:44:33.190: INFO: stdout: "Name: kubectl-6041\nLabels: e2e-framework=kubectl\n e2e-run=45e19f9f-407c-45a0-8758-e966c88e9b23\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo 
LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:44:33.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6041" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":275,"completed":241,"skipped":4133,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:44:33.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: getting the auto-created API token STEP: reading a file in the container Apr 16 00:44:37.789: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8849 pod-service-account-658adf72-971e-4654-96a5-bbd0a11cbdc5 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Apr 16 00:44:38.013: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8849 pod-service-account-658adf72-971e-4654-96a5-bbd0a11cbdc5 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Apr 16 00:44:38.233: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8849 pod-service-account-658adf72-971e-4654-96a5-bbd0a11cbdc5 -c=test 
-- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:44:38.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8849" for this suite. • [SLOW TEST:5.244 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":275,"completed":242,"skipped":4149,"failed":0} [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:44:38.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 16 00:44:38.492: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 16 00:44:38.520: INFO: Waiting for terminating namespaces to be deleted... 
Apr 16 00:44:38.522: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 16 00:44:39.206: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 16 00:44:39.206: INFO: Container kindnet-cni ready: true, restart count 0 Apr 16 00:44:39.206: INFO: pod-service-account-658adf72-971e-4654-96a5-bbd0a11cbdc5 from svcaccounts-8849 started at 2020-04-16 00:44:33 +0000 UTC (1 container statuses recorded) Apr 16 00:44:39.206: INFO: Container test ready: true, restart count 0 Apr 16 00:44:39.206: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 16 00:44:39.206: INFO: Container kube-proxy ready: true, restart count 0 Apr 16 00:44:39.206: INFO: agnhost-master-ckd84 from kubectl-6041 started at 2020-04-16 00:44:29 +0000 UTC (1 container statuses recorded) Apr 16 00:44:39.206: INFO: Container agnhost-master ready: true, restart count 0 Apr 16 00:44:39.206: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 16 00:44:39.211: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 16 00:44:39.211: INFO: Container kindnet-cni ready: true, restart count 0 Apr 16 00:44:39.211: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 16 00:44:39.211: INFO: Container kube-proxy ready: true, restart count 0 Apr 16 00:44:39.211: INFO: pod-projected-configmaps-7d2a96ab-0bff-4b8e-8abc-7f8a3d3e6be9 from projected-3777 started at 2020-04-16 00:44:20 +0000 UTC (3 container statuses recorded) Apr 16 00:44:39.211: INFO: Container createcm-volume-test ready: false, restart count 0 Apr 16 00:44:39.211: INFO: Container delcm-volume-test ready: false, restart count 0 Apr 16 00:44:39.211: INFO: Container updcm-volume-test ready: false, restart count 0 [It] validates resource limits of pods that are allowed 
to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Apr 16 00:44:39.961: INFO: Pod kindnet-vnjgh requesting resource cpu=100m on Node latest-worker Apr 16 00:44:39.961: INFO: Pod kindnet-zq6gp requesting resource cpu=100m on Node latest-worker2 Apr 16 00:44:39.961: INFO: Pod kube-proxy-c5xlk requesting resource cpu=0m on Node latest-worker2 Apr 16 00:44:39.961: INFO: Pod kube-proxy-s9v6p requesting resource cpu=0m on Node latest-worker Apr 16 00:44:39.961: INFO: Pod agnhost-master-ckd84 requesting resource cpu=0m on Node latest-worker Apr 16 00:44:39.961: INFO: Pod pod-projected-configmaps-7d2a96ab-0bff-4b8e-8abc-7f8a3d3e6be9 requesting resource cpu=0m on Node latest-worker2 Apr 16 00:44:39.961: INFO: Pod pod-service-account-658adf72-971e-4654-96a5-bbd0a11cbdc5 requesting resource cpu=0m on Node latest-worker STEP: Starting Pods to consume most of the cluster CPU. Apr 16 00:44:39.961: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker Apr 16 00:44:39.976: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-7848a778-2d7a-4171-8f0c-207eccfa901d.16062629e03d7967], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1266/filler-pod-7848a778-2d7a-4171-8f0c-207eccfa901d to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-7848a778-2d7a-4171-8f0c-207eccfa901d.1606262a2dd415c4], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-7848a778-2d7a-4171-8f0c-207eccfa901d.1606262a6e720261], Reason = [Created], Message = [Created container filler-pod-7848a778-2d7a-4171-8f0c-207eccfa901d] STEP: Considering event: Type = [Normal], Name = [filler-pod-7848a778-2d7a-4171-8f0c-207eccfa901d.1606262a8df340b5], Reason = [Started], Message = [Started container filler-pod-7848a778-2d7a-4171-8f0c-207eccfa901d] STEP: Considering event: Type = [Normal], Name = [filler-pod-ec00403d-f905-4854-b1f4-b0afc0a4e309.16062629e2966313], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1266/filler-pod-ec00403d-f905-4854-b1f4-b0afc0a4e309 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-ec00403d-f905-4854-b1f4-b0afc0a4e309.1606262a65a1720f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-ec00403d-f905-4854-b1f4-b0afc0a4e309.1606262a9c9b7979], Reason = [Created], Message = [Created container filler-pod-ec00403d-f905-4854-b1f4-b0afc0a4e309] STEP: Considering event: Type = [Normal], Name = [filler-pod-ec00403d-f905-4854-b1f4-b0afc0a4e309.1606262aaba7256d], Reason = [Started], Message = [Started container filler-pod-ec00403d-f905-4854-b1f4-b0afc0a4e309] STEP: Considering event: Type = [Warning], Name = [additional-pod.1606262ad37dc528], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 
Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.1606262ad63218cc], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:44:45.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1266" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:6.703 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":275,"completed":243,"skipped":4149,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:44:45.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account 
to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-3509, will wait for the garbage collector to delete the pods Apr 16 00:44:49.278: INFO: Deleting Job.batch foo took: 6.584283ms Apr 16 00:44:49.579: INFO: Terminating Job.batch foo pods took: 300.27944ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:45:33.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3509" for this suite. • [SLOW TEST:47.949 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":244,"skipped":4172,"failed":0} SSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:45:33.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RecreateDeployment should delete old pods and create new ones 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 16 00:45:33.147: INFO: Creating deployment "test-recreate-deployment" Apr 16 00:45:33.157: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Apr 16 00:45:33.227: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Apr 16 00:45:35.233: INFO: Waiting deployment "test-recreate-deployment" to complete Apr 16 00:45:35.286: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722594733, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722594733, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722594733, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722594733, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-846c7dd955\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 16 00:45:37.290: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Apr 16 00:45:37.297: INFO: Updating deployment test-recreate-deployment Apr 16 00:45:37.297: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 16 00:45:37.595: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-3609 
/apis/apps/v1/namespaces/deployment-3609/deployments/test-recreate-deployment d92a795a-6e59-4f05-a87a-8f628b29d177 8414278 2 2020-04-16 00:45:33 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005a89c88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-16 00:45:37 +0000 UTC,LastTransitionTime:2020-04-16 00:45:37 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-04-16 00:45:37 +0000 UTC,LastTransitionTime:2020-04-16 00:45:33 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Apr 16 00:45:37.599: INFO: New ReplicaSet 
"test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-3609 /apis/apps/v1/namespaces/deployment-3609/replicasets/test-recreate-deployment-5f94c574ff 6a3bdabd-9585-413d-99d3-69bc643bf3d6 8414275 1 2020-04-16 00:45:37 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment d92a795a-6e59-4f05-a87a-8f628b29d177 0xc002ea80e7 0xc002ea80e8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002ea8198 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 16 00:45:37.599: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 16 00:45:37.599: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-846c7dd955 deployment-3609 
/apis/apps/v1/namespaces/deployment-3609/replicasets/test-recreate-deployment-846c7dd955 fa50d23a-918c-4fc5-b24b-2651106bd4d2 8414266 2 2020-04-16 00:45:33 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment d92a795a-6e59-4f05-a87a-8f628b29d177 0xc002ea8377 0xc002ea8378}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 846c7dd955,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002ea83e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 16 00:45:37.766: INFO: Pod "test-recreate-deployment-5f94c574ff-hpn56" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-hpn56 test-recreate-deployment-5f94c574ff- deployment-3609 /api/v1/namespaces/deployment-3609/pods/test-recreate-deployment-5f94c574ff-hpn56 e484a001-03df-4bd6-8846-eedf4956aa07 8414277 0 2020-04-16 00:45:37 +0000 UTC 
map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 6a3bdabd-9585-413d-99d3-69bc643bf3d6 0xc003dfd1a7 0xc003dfd1a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6bcwv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6bcwv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6bcwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOpti
ons:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-16 00:45:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-16 00:45:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-16 00:45:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-16 00:45:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-16 00:45:37 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:45:37.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3609" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":245,"skipped":4175,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:45:37.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-52adcd98-d824-4989-8fd8-af37d3a8c416 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-52adcd98-d824-4989-8fd8-af37d3a8c416 STEP: waiting to observe update in 
volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:45:44.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4489" for this suite. • [SLOW TEST:6.272 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":246,"skipped":4217,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:45:44.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3517.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3517.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 16 00:45:50.268: INFO: DNS probes using dns-3517/dns-test-df62e25c-dfc8-42ec-a26c-d95a3de00bf7 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:45:50.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3517" for this suite. 
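The wheezy and jessie probe scripts above derive each pod's DNS A-record name from its IP address: the `awk -F.` one-liner replaces the dots with dashes and appends `<namespace>.pod.cluster.local`. A minimal Python sketch of that transformation (the function name is mine, not part of the test framework):

```python
def pod_a_record(pod_ip: str, namespace: str) -> str:
    """Mirror the awk one-liner in the probe script:
    10.244.1.5 in namespace dns-3517 -> 10-244-1-5.dns-3517.pod.cluster.local
    """
    return "%s.%s.pod.cluster.local" % (pod_ip.replace(".", "-"), namespace)

print(pod_a_record("10.244.1.5", "dns-3517"))
# -> 10-244-1-5.dns-3517.pod.cluster.local
```

The probe then resolves this name with `dig` over both UDP (`+notcp`) and TCP (`+tcp`) and writes an `OK` marker file that the test reads back.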
• [SLOW TEST:6.265 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":275,"completed":247,"skipped":4253,"failed":0} SSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:45:50.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's args Apr 16 00:45:50.399: INFO: Waiting up to 5m0s for pod "var-expansion-6c2d8f28-41b8-4017-8f78-ef01862033b2" in namespace "var-expansion-8408" to be "Succeeded or Failed" Apr 16 00:45:50.403: INFO: Pod "var-expansion-6c2d8f28-41b8-4017-8f78-ef01862033b2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.962409ms Apr 16 00:45:52.407: INFO: Pod "var-expansion-6c2d8f28-41b8-4017-8f78-ef01862033b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00799466s Apr 16 00:45:54.410: INFO: Pod "var-expansion-6c2d8f28-41b8-4017-8f78-ef01862033b2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010829257s STEP: Saw pod success Apr 16 00:45:54.410: INFO: Pod "var-expansion-6c2d8f28-41b8-4017-8f78-ef01862033b2" satisfied condition "Succeeded or Failed" Apr 16 00:45:54.412: INFO: Trying to get logs from node latest-worker2 pod var-expansion-6c2d8f28-41b8-4017-8f78-ef01862033b2 container dapi-container: STEP: delete the pod Apr 16 00:45:54.494: INFO: Waiting for pod var-expansion-6c2d8f28-41b8-4017-8f78-ef01862033b2 to disappear Apr 16 00:45:54.571: INFO: Pod var-expansion-6c2d8f28-41b8-4017-8f78-ef01862033b2 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:45:54.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8408" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":248,"skipped":4257,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:45:54.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 16 00:45:54.645: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows 
request with known and required properties Apr 16 00:45:57.607: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1565 create -f -' Apr 16 00:46:00.619: INFO: stderr: "" Apr 16 00:46:00.619: INFO: stdout: "e2e-test-crd-publish-openapi-8157-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 16 00:46:00.619: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1565 delete e2e-test-crd-publish-openapi-8157-crds test-foo' Apr 16 00:46:00.716: INFO: stderr: "" Apr 16 00:46:00.716: INFO: stdout: "e2e-test-crd-publish-openapi-8157-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Apr 16 00:46:00.716: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1565 apply -f -' Apr 16 00:46:01.001: INFO: stderr: "" Apr 16 00:46:01.001: INFO: stdout: "e2e-test-crd-publish-openapi-8157-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 16 00:46:01.001: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1565 delete e2e-test-crd-publish-openapi-8157-crds test-foo' Apr 16 00:46:01.094: INFO: stderr: "" Apr 16 00:46:01.094: INFO: stdout: "e2e-test-crd-publish-openapi-8157-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Apr 16 00:46:01.094: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1565 create -f -' Apr 16 00:46:01.326: INFO: rc: 1 Apr 16 00:46:01.326: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-1565 apply -f -' Apr 16 00:46:01.557: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Apr 16 00:46:01.558: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1565 create -f -' Apr 16 00:46:01.840: INFO: rc: 1 Apr 16 00:46:01.840: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1565 apply -f -' Apr 16 00:46:02.069: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Apr 16 00:46:02.069: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8157-crds' Apr 16 00:46:02.360: INFO: stderr: "" Apr 16 00:46:02.360: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8157-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Apr 16 00:46:02.361: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8157-crds.metadata' Apr 16 00:46:02.616: INFO: stderr: "" Apr 16 00:46:02.616: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8157-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. 
If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. 
May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. 
May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Apr 16 00:46:02.617: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8157-crds.spec' Apr 16 00:46:02.869: INFO: stderr: "" Apr 16 00:46:02.869: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8157-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Apr 16 00:46:02.869: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8157-crds.spec.bars' Apr 16 00:46:03.103: INFO: stderr: "" Apr 16 00:46:03.103: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8157-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n 
bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Apr 16 00:46:03.104: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8157-crds.spec.bars2' Apr 16 00:46:03.358: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:46:06.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1565" for this suite. • [SLOW TEST:11.680 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":249,"skipped":4275,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:46:06.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting the proxy server Apr 16 00:46:06.341: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:46:06.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2693" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":275,"completed":250,"skipped":4277,"failed":0} SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:46:06.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 
00:46:10.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3024" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":251,"skipped":4282,"failed":0} SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:46:10.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 16 00:46:10.606: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 16 00:46:10.628: INFO: Waiting for terminating namespaces to be deleted... 
Apr 16 00:46:10.631: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 16 00:46:10.636: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 16 00:46:10.636: INFO: Container kindnet-cni ready: true, restart count 0 Apr 16 00:46:10.636: INFO: busybox-readonly-fs03fcef41-c126-4825-a466-3ada14152b07 from kubelet-test-3024 started at 2020-04-16 00:46:06 +0000 UTC (1 container statuses recorded) Apr 16 00:46:10.636: INFO: Container busybox-readonly-fs03fcef41-c126-4825-a466-3ada14152b07 ready: true, restart count 0 Apr 16 00:46:10.636: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 16 00:46:10.636: INFO: Container kube-proxy ready: true, restart count 0 Apr 16 00:46:10.636: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 16 00:46:10.641: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 16 00:46:10.641: INFO: Container kindnet-cni ready: true, restart count 0 Apr 16 00:46:10.641: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 16 00:46:10.641: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1606263efd812e55], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
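The FailedScheduling event above ("0/3 nodes are available: 3 node(s) didn't match node selector") follows from the nodeSelector rule: a pod is only feasible on a node whose labels contain every key/value pair in the pod's nodeSelector. A rough illustration of that subset check (not the scheduler's actual code; the selector value is a stand-in for the test's nonempty selector):

```python
def node_selector_matches(node_labels: dict, node_selector: dict) -> bool:
    """A pod's nodeSelector must be a subset of the node's labels."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

nodes = {
    "latest-control-plane": {"kubernetes.io/os": "linux"},
    "latest-worker": {"kubernetes.io/os": "linux"},
    "latest-worker2": {"kubernetes.io/os": "linux"},
}
selector = {"label": "nonempty-value"}  # hypothetical selector no node carries
feasible = [n for n, labels in nodes.items()
            if node_selector_matches(labels, selector)]
print("%d/%d nodes are available" % (len(feasible), len(nodes)))
# -> 0/3 nodes are available
```

Since no node carries the label, the pod stays Pending and the scheduler emits the FailedScheduling event the test waits for.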
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:46:11.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9670" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":275,"completed":252,"skipped":4293,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:46:11.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Apr 16 00:46:11.732: INFO: >>> kubeConfig: /root/.kube/config Apr 16 00:46:14.655: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:46:25.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "crd-publish-openapi-1751" for this suite. • [SLOW TEST:13.502 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":253,"skipped":4334,"failed":0} [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:46:25.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 16 00:46:25.268: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Apr 16 00:46:25.276: INFO: Number of nodes with available pods: 0 Apr 16 00:46:25.276: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Apr 16 00:46:25.304: INFO: Number of nodes with available pods: 0 Apr 16 00:46:25.304: INFO: Node latest-worker2 is running more than one daemon pod Apr 16 00:46:26.308: INFO: Number of nodes with available pods: 0 Apr 16 00:46:26.308: INFO: Node latest-worker2 is running more than one daemon pod Apr 16 00:46:27.309: INFO: Number of nodes with available pods: 0 Apr 16 00:46:27.309: INFO: Node latest-worker2 is running more than one daemon pod Apr 16 00:46:28.309: INFO: Number of nodes with available pods: 0 Apr 16 00:46:28.309: INFO: Node latest-worker2 is running more than one daemon pod Apr 16 00:46:29.308: INFO: Number of nodes with available pods: 1 Apr 16 00:46:29.308: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Apr 16 00:46:29.364: INFO: Number of nodes with available pods: 1 Apr 16 00:46:29.364: INFO: Number of running nodes: 0, number of available pods: 1 Apr 16 00:46:30.381: INFO: Number of nodes with available pods: 0 Apr 16 00:46:30.381: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Apr 16 00:46:30.388: INFO: Number of nodes with available pods: 0 Apr 16 00:46:30.388: INFO: Node latest-worker2 is running more than one daemon pod Apr 16 00:46:31.392: INFO: Number of nodes with available pods: 0 Apr 16 00:46:31.392: INFO: Node latest-worker2 is running more than one daemon pod Apr 16 00:46:32.392: INFO: Number of nodes with available pods: 0 Apr 16 00:46:32.392: INFO: Node latest-worker2 is running more than one daemon pod Apr 16 00:46:33.411: INFO: Number of nodes with available pods: 0 Apr 16 00:46:33.411: INFO: Node latest-worker2 is running more than one daemon pod Apr 16 00:46:34.392: INFO: Number of nodes with available pods: 0 Apr 16 00:46:34.392: INFO: Node latest-worker2 is running more than one daemon pod Apr 16 00:46:35.392: INFO: Number of 
nodes with available pods: 0 Apr 16 00:46:35.392: INFO: Node latest-worker2 is running more than one daemon pod Apr 16 00:46:36.392: INFO: Number of nodes with available pods: 1 Apr 16 00:46:36.392: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-768, will wait for the garbage collector to delete the pods Apr 16 00:46:36.454: INFO: Deleting DaemonSet.extensions daemon-set took: 6.315918ms Apr 16 00:46:36.755: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.237499ms Apr 16 00:46:43.058: INFO: Number of nodes with available pods: 0 Apr 16 00:46:43.058: INFO: Number of running nodes: 0, number of available pods: 0 Apr 16 00:46:43.061: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-768/daemonsets","resourceVersion":"8414750"},"items":null} Apr 16 00:46:43.064: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-768/pods","resourceVersion":"8414750"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:46:43.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-768" for this suite. 
• [SLOW TEST:17.906 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":254,"skipped":4334,"failed":0} S ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:46:43.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Starting the proxy Apr 16 00:46:43.168: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix628520543/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:46:43.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8824" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":275,"completed":255,"skipped":4335,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:46:43.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 16 00:46:43.308: INFO: Waiting up to 5m0s for pod "downwardapi-volume-158fc2a5-ab4f-430b-a4c5-78ad988bffa0" in namespace "downward-api-9880" to be "Succeeded or Failed" Apr 16 00:46:43.322: INFO: Pod "downwardapi-volume-158fc2a5-ab4f-430b-a4c5-78ad988bffa0": Phase="Pending", Reason="", readiness=false. Elapsed: 13.558479ms Apr 16 00:46:45.326: INFO: Pod "downwardapi-volume-158fc2a5-ab4f-430b-a4c5-78ad988bffa0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017210464s Apr 16 00:46:47.330: INFO: Pod "downwardapi-volume-158fc2a5-ab4f-430b-a4c5-78ad988bffa0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.021227931s STEP: Saw pod success Apr 16 00:46:47.330: INFO: Pod "downwardapi-volume-158fc2a5-ab4f-430b-a4c5-78ad988bffa0" satisfied condition "Succeeded or Failed" Apr 16 00:46:47.332: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-158fc2a5-ab4f-430b-a4c5-78ad988bffa0 container client-container: STEP: delete the pod Apr 16 00:46:47.391: INFO: Waiting for pod downwardapi-volume-158fc2a5-ab4f-430b-a4c5-78ad988bffa0 to disappear Apr 16 00:46:47.402: INFO: Pod downwardapi-volume-158fc2a5-ab4f-430b-a4c5-78ad988bffa0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:46:47.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9880" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":256,"skipped":4349,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:46:47.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace 
pod-network-test-3506 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 16 00:46:47.444: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 16 00:46:47.516: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 16 00:46:49.539: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 16 00:46:51.520: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 16 00:46:53.520: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 16 00:46:55.519: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 16 00:46:57.520: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 16 00:46:59.520: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 16 00:47:01.519: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 16 00:47:03.524: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 16 00:47:05.532: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 16 00:47:07.520: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 16 00:47:07.526: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 16 00:47:11.546: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.108:8080/dial?request=hostname&protocol=http&host=10.244.2.107&port=8080&tries=1'] Namespace:pod-network-test-3506 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 16 00:47:11.547: INFO: >>> kubeConfig: /root/.kube/config I0416 00:47:11.580157 7 log.go:172] (0xc0048080b0) (0xc00279a8c0) Create stream I0416 00:47:11.580188 7 log.go:172] (0xc0048080b0) (0xc00279a8c0) Stream added, broadcasting: 1 I0416 00:47:11.582043 7 log.go:172] (0xc0048080b0) Reply frame received for 1 I0416 00:47:11.582099 7 log.go:172] (0xc0048080b0) (0xc001d660a0) 
Create stream I0416 00:47:11.582116 7 log.go:172] (0xc0048080b0) (0xc001d660a0) Stream added, broadcasting: 3 I0416 00:47:11.583092 7 log.go:172] (0xc0048080b0) Reply frame received for 3 I0416 00:47:11.583132 7 log.go:172] (0xc0048080b0) (0xc002334f00) Create stream I0416 00:47:11.583147 7 log.go:172] (0xc0048080b0) (0xc002334f00) Stream added, broadcasting: 5 I0416 00:47:11.584007 7 log.go:172] (0xc0048080b0) Reply frame received for 5 I0416 00:47:11.662814 7 log.go:172] (0xc0048080b0) Data frame received for 3 I0416 00:47:11.662849 7 log.go:172] (0xc001d660a0) (3) Data frame handling I0416 00:47:11.662872 7 log.go:172] (0xc001d660a0) (3) Data frame sent I0416 00:47:11.663316 7 log.go:172] (0xc0048080b0) Data frame received for 3 I0416 00:47:11.663344 7 log.go:172] (0xc001d660a0) (3) Data frame handling I0416 00:47:11.663402 7 log.go:172] (0xc0048080b0) Data frame received for 5 I0416 00:47:11.663424 7 log.go:172] (0xc002334f00) (5) Data frame handling I0416 00:47:11.664711 7 log.go:172] (0xc0048080b0) Data frame received for 1 I0416 00:47:11.664727 7 log.go:172] (0xc00279a8c0) (1) Data frame handling I0416 00:47:11.664737 7 log.go:172] (0xc00279a8c0) (1) Data frame sent I0416 00:47:11.664753 7 log.go:172] (0xc0048080b0) (0xc00279a8c0) Stream removed, broadcasting: 1 I0416 00:47:11.664765 7 log.go:172] (0xc0048080b0) Go away received I0416 00:47:11.664864 7 log.go:172] (0xc0048080b0) (0xc00279a8c0) Stream removed, broadcasting: 1 I0416 00:47:11.664886 7 log.go:172] (0xc0048080b0) (0xc001d660a0) Stream removed, broadcasting: 3 I0416 00:47:11.664899 7 log.go:172] (0xc0048080b0) (0xc002334f00) Stream removed, broadcasting: 5 Apr 16 00:47:11.664: INFO: Waiting for responses: map[] Apr 16 00:47:11.668: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.108:8080/dial?request=hostname&protocol=http&host=10.244.1.75&port=8080&tries=1'] Namespace:pod-network-test-3506 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false} Apr 16 00:47:11.668: INFO: >>> kubeConfig: /root/.kube/config I0416 00:47:11.702384 7 log.go:172] (0xc0033e36b0) (0xc00272c500) Create stream I0416 00:47:11.702459 7 log.go:172] (0xc0033e36b0) (0xc00272c500) Stream added, broadcasting: 1 I0416 00:47:11.707220 7 log.go:172] (0xc0033e36b0) Reply frame received for 1 I0416 00:47:11.707291 7 log.go:172] (0xc0033e36b0) (0xc002335040) Create stream I0416 00:47:11.707317 7 log.go:172] (0xc0033e36b0) (0xc002335040) Stream added, broadcasting: 3 I0416 00:47:11.708718 7 log.go:172] (0xc0033e36b0) Reply frame received for 3 I0416 00:47:11.708749 7 log.go:172] (0xc0033e36b0) (0xc001d66140) Create stream I0416 00:47:11.708766 7 log.go:172] (0xc0033e36b0) (0xc001d66140) Stream added, broadcasting: 5 I0416 00:47:11.710101 7 log.go:172] (0xc0033e36b0) Reply frame received for 5 I0416 00:47:11.784218 7 log.go:172] (0xc0033e36b0) Data frame received for 3 I0416 00:47:11.784264 7 log.go:172] (0xc002335040) (3) Data frame handling I0416 00:47:11.784293 7 log.go:172] (0xc002335040) (3) Data frame sent I0416 00:47:11.784386 7 log.go:172] (0xc0033e36b0) Data frame received for 5 I0416 00:47:11.784405 7 log.go:172] (0xc001d66140) (5) Data frame handling I0416 00:47:11.784585 7 log.go:172] (0xc0033e36b0) Data frame received for 3 I0416 00:47:11.784596 7 log.go:172] (0xc002335040) (3) Data frame handling I0416 00:47:11.786144 7 log.go:172] (0xc0033e36b0) Data frame received for 1 I0416 00:47:11.786176 7 log.go:172] (0xc00272c500) (1) Data frame handling I0416 00:47:11.786196 7 log.go:172] (0xc00272c500) (1) Data frame sent I0416 00:47:11.786213 7 log.go:172] (0xc0033e36b0) (0xc00272c500) Stream removed, broadcasting: 1 I0416 00:47:11.786351 7 log.go:172] (0xc0033e36b0) (0xc00272c500) Stream removed, broadcasting: 1 I0416 00:47:11.786414 7 log.go:172] (0xc0033e36b0) (0xc002335040) Stream removed, broadcasting: 3 I0416 00:47:11.786435 7 log.go:172] (0xc0033e36b0) (0xc001d66140) Stream removed, 
broadcasting: 5 Apr 16 00:47:11.786: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:47:11.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0416 00:47:11.786854 7 log.go:172] (0xc0033e36b0) Go away received STEP: Destroying namespace "pod-network-test-3506" for this suite. • [SLOW TEST:24.385 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":257,"skipped":4374,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:47:11.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test 
emptydir volume type on node default medium Apr 16 00:47:11.870: INFO: Waiting up to 5m0s for pod "pod-ecf0f835-2ea6-4cbf-99f2-f7d34bca7244" in namespace "emptydir-3935" to be "Succeeded or Failed" Apr 16 00:47:11.873: INFO: Pod "pod-ecf0f835-2ea6-4cbf-99f2-f7d34bca7244": Phase="Pending", Reason="", readiness=false. Elapsed: 2.557516ms Apr 16 00:47:13.877: INFO: Pod "pod-ecf0f835-2ea6-4cbf-99f2-f7d34bca7244": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006387223s Apr 16 00:47:16.096: INFO: Pod "pod-ecf0f835-2ea6-4cbf-99f2-f7d34bca7244": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.226211012s STEP: Saw pod success Apr 16 00:47:16.096: INFO: Pod "pod-ecf0f835-2ea6-4cbf-99f2-f7d34bca7244" satisfied condition "Succeeded or Failed" Apr 16 00:47:16.099: INFO: Trying to get logs from node latest-worker2 pod pod-ecf0f835-2ea6-4cbf-99f2-f7d34bca7244 container test-container: STEP: delete the pod Apr 16 00:47:16.259: INFO: Waiting for pod pod-ecf0f835-2ea6-4cbf-99f2-f7d34bca7244 to disappear Apr 16 00:47:16.278: INFO: Pod pod-ecf0f835-2ea6-4cbf-99f2-f7d34bca7244 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:47:16.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3935" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":258,"skipped":4415,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:47:16.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0416 00:47:26.366732 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 16 00:47:26.366: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:47:26.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-608" for this suite. 
• [SLOW TEST:10.089 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":259,"skipped":4457,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:47:26.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-downwardapi-2zmb STEP: Creating a pod to test atomic-volume-subpath Apr 16 00:47:26.461: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-2zmb" in namespace "subpath-6389" to be "Succeeded or Failed" Apr 16 00:47:26.465: INFO: Pod "pod-subpath-test-downwardapi-2zmb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.964989ms Apr 16 00:47:28.469: INFO: Pod "pod-subpath-test-downwardapi-2zmb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.00783203s Apr 16 00:47:30.472: INFO: Pod "pod-subpath-test-downwardapi-2zmb": Phase="Running", Reason="", readiness=true. Elapsed: 4.011062621s Apr 16 00:47:32.476: INFO: Pod "pod-subpath-test-downwardapi-2zmb": Phase="Running", Reason="", readiness=true. Elapsed: 6.015310325s Apr 16 00:47:34.481: INFO: Pod "pod-subpath-test-downwardapi-2zmb": Phase="Running", Reason="", readiness=true. Elapsed: 8.019630587s Apr 16 00:47:36.485: INFO: Pod "pod-subpath-test-downwardapi-2zmb": Phase="Running", Reason="", readiness=true. Elapsed: 10.023715434s Apr 16 00:47:38.489: INFO: Pod "pod-subpath-test-downwardapi-2zmb": Phase="Running", Reason="", readiness=true. Elapsed: 12.027864866s Apr 16 00:47:40.494: INFO: Pod "pod-subpath-test-downwardapi-2zmb": Phase="Running", Reason="", readiness=true. Elapsed: 14.032333149s Apr 16 00:47:42.498: INFO: Pod "pod-subpath-test-downwardapi-2zmb": Phase="Running", Reason="", readiness=true. Elapsed: 16.036888719s Apr 16 00:47:44.502: INFO: Pod "pod-subpath-test-downwardapi-2zmb": Phase="Running", Reason="", readiness=true. Elapsed: 18.040348883s Apr 16 00:47:46.506: INFO: Pod "pod-subpath-test-downwardapi-2zmb": Phase="Running", Reason="", readiness=true. Elapsed: 20.044720532s Apr 16 00:47:48.510: INFO: Pod "pod-subpath-test-downwardapi-2zmb": Phase="Running", Reason="", readiness=true. Elapsed: 22.048752252s Apr 16 00:47:50.514: INFO: Pod "pod-subpath-test-downwardapi-2zmb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.053127234s STEP: Saw pod success Apr 16 00:47:50.514: INFO: Pod "pod-subpath-test-downwardapi-2zmb" satisfied condition "Succeeded or Failed" Apr 16 00:47:50.518: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-downwardapi-2zmb container test-container-subpath-downwardapi-2zmb: STEP: delete the pod Apr 16 00:47:50.556: INFO: Waiting for pod pod-subpath-test-downwardapi-2zmb to disappear Apr 16 00:47:50.566: INFO: Pod pod-subpath-test-downwardapi-2zmb no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-2zmb Apr 16 00:47:50.566: INFO: Deleting pod "pod-subpath-test-downwardapi-2zmb" in namespace "subpath-6389" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:47:50.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6389" for this suite. • [SLOW TEST:24.219 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":260,"skipped":4468,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 
00:47:50.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 16 00:47:50.689: INFO: Create a RollingUpdate DaemonSet Apr 16 00:47:50.693: INFO: Check that daemon pods launch on every node of the cluster Apr 16 00:47:50.698: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 00:47:50.703: INFO: Number of nodes with available pods: 0 Apr 16 00:47:50.703: INFO: Node latest-worker is running more than one daemon pod Apr 16 00:47:51.708: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 00:47:51.711: INFO: Number of nodes with available pods: 0 Apr 16 00:47:51.711: INFO: Node latest-worker is running more than one daemon pod Apr 16 00:47:52.708: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 00:47:52.711: INFO: Number of nodes with available pods: 0 Apr 16 00:47:52.711: INFO: Node latest-worker is running more than one daemon pod Apr 16 00:47:53.707: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 00:47:53.709: INFO: Number of nodes with available pods: 1 Apr 16 00:47:53.709: INFO: Node latest-worker2 is running more than one 
daemon pod Apr 16 00:47:54.709: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 00:47:54.712: INFO: Number of nodes with available pods: 2 Apr 16 00:47:54.712: INFO: Number of running nodes: 2, number of available pods: 2 Apr 16 00:47:54.712: INFO: Update the DaemonSet to trigger a rollout Apr 16 00:47:54.720: INFO: Updating DaemonSet daemon-set Apr 16 00:48:03.734: INFO: Roll back the DaemonSet before rollout is complete Apr 16 00:48:03.740: INFO: Updating DaemonSet daemon-set Apr 16 00:48:03.740: INFO: Make sure DaemonSet rollback is complete Apr 16 00:48:03.748: INFO: Wrong image for pod: daemon-set-m754c. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 16 00:48:03.748: INFO: Pod daemon-set-m754c is not available Apr 16 00:48:03.754: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 00:48:04.759: INFO: Wrong image for pod: daemon-set-m754c. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Apr 16 00:48:04.759: INFO: Pod daemon-set-m754c is not available Apr 16 00:48:04.763: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 16 00:48:05.767: INFO: Pod daemon-set-r64mj is not available Apr 16 00:48:05.770: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6740, will wait for the garbage collector to delete the pods Apr 16 00:48:05.834: INFO: Deleting DaemonSet.extensions daemon-set took: 5.795982ms Apr 16 00:48:06.134: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.248511ms Apr 16 00:48:12.838: INFO: Number of nodes with available pods: 0 Apr 16 00:48:12.838: INFO: Number of running nodes: 0, number of available pods: 0 Apr 16 00:48:12.840: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6740/daemonsets","resourceVersion":"8415305"},"items":null} Apr 16 00:48:12.843: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6740/pods","resourceVersion":"8415305"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:48:12.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6740" for this suite. 
• [SLOW TEST:22.265 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":261,"skipped":4480,"failed":0} S ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:48:12.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:48:44.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-729" for this suite. 
STEP: Destroying namespace "nsdeletetest-2894" for this suite. Apr 16 00:48:44.161: INFO: Namespace nsdeletetest-2894 was already deleted STEP: Destroying namespace "nsdeletetest-6811" for this suite. • [SLOW TEST:31.303 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":262,"skipped":4481,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:48:44.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Apr 16 00:48:44.265: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1245 
/api/v1/namespaces/watch-1245/configmaps/e2e-watch-test-label-changed 01a68640-3cfc-4c50-8e35-86a6e9d1ff6a 8415464 0 2020-04-16 00:48:44 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 16 00:48:44.266: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1245 /api/v1/namespaces/watch-1245/configmaps/e2e-watch-test-label-changed 01a68640-3cfc-4c50-8e35-86a6e9d1ff6a 8415465 0 2020-04-16 00:48:44 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 16 00:48:44.266: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1245 /api/v1/namespaces/watch-1245/configmaps/e2e-watch-test-label-changed 01a68640-3cfc-4c50-8e35-86a6e9d1ff6a 8415466 0 2020-04-16 00:48:44 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Apr 16 00:48:54.336: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1245 /api/v1/namespaces/watch-1245/configmaps/e2e-watch-test-label-changed 01a68640-3cfc-4c50-8e35-86a6e9d1ff6a 8415506 0 2020-04-16 00:48:44 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 16 00:48:54.337: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1245 
/api/v1/namespaces/watch-1245/configmaps/e2e-watch-test-label-changed 01a68640-3cfc-4c50-8e35-86a6e9d1ff6a 8415507 0 2020-04-16 00:48:44 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 16 00:48:54.337: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1245 /api/v1/namespaces/watch-1245/configmaps/e2e-watch-test-label-changed 01a68640-3cfc-4c50-8e35-86a6e9d1ff6a 8415508 0 2020-04-16 00:48:44 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:48:54.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1245" for this suite. • [SLOW TEST:10.180 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":263,"skipped":4511,"failed":0} [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:48:54.344: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Apr 16 00:48:54.380: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the sample API server. Apr 16 00:48:54.909: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Apr 16 00:48:57.061: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722594934, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722594934, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722594934, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722594934, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 16 00:48:59.589: INFO: Waited 516.434351ms for the sample-apiserver to be ready to handle requests. 
[AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:49:00.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-1015" for this suite. • [SLOW TEST:5.796 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":264,"skipped":4511,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:49:00.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 16 00:49:00.471: INFO: Waiting up to 5m0s for pod "pod-488aceba-9b04-49ee-94ba-ecc51fb52694" in namespace 
"emptydir-1537" to be "Succeeded or Failed" Apr 16 00:49:00.479: INFO: Pod "pod-488aceba-9b04-49ee-94ba-ecc51fb52694": Phase="Pending", Reason="", readiness=false. Elapsed: 7.998392ms Apr 16 00:49:02.486: INFO: Pod "pod-488aceba-9b04-49ee-94ba-ecc51fb52694": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015262585s Apr 16 00:49:04.490: INFO: Pod "pod-488aceba-9b04-49ee-94ba-ecc51fb52694": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019438408s STEP: Saw pod success Apr 16 00:49:04.490: INFO: Pod "pod-488aceba-9b04-49ee-94ba-ecc51fb52694" satisfied condition "Succeeded or Failed" Apr 16 00:49:04.493: INFO: Trying to get logs from node latest-worker2 pod pod-488aceba-9b04-49ee-94ba-ecc51fb52694 container test-container: STEP: delete the pod Apr 16 00:49:04.523: INFO: Waiting for pod pod-488aceba-9b04-49ee-94ba-ecc51fb52694 to disappear Apr 16 00:49:04.527: INFO: Pod pod-488aceba-9b04-49ee-94ba-ecc51fb52694 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:49:04.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1537" for this suite. 
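The framework output above (and in the surrounding specs) repeats the same pattern: poll a pod's phase every couple of seconds until it reaches "Succeeded" or "Failed", or a 5-minute timeout expires. Below is a minimal standalone sketch of that wait loop; the `get_phase` callback, interval, and timeout values are illustrative, not the e2e framework's actual API.

```python
import time

def wait_for_pod_condition(get_phase, timeout=300, interval=2):
    """Poll get_phase() until it returns a terminal phase or the timeout expires.

    Mirrors the log's 'Waiting up to 5m0s for pod ... to be "Succeeded or
    Failed"' loop: print elapsed time on each poll, return the final phase.
    """
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Pod: Phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        time.sleep(interval)

# Simulated pod that reports Pending twice, then Succeeded
# (interval=0 so the demo does not sleep):
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_pod_condition(lambda: next(phases), interval=0)
```

The real framework additionally distinguishes "Running" from "Pending" (as in the Security Context spec later in this log), but the terminal-phase check is the same.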
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":265,"skipped":4522,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:49:04.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller Apr 16 00:49:04.607: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-32' Apr 16 00:49:04.929: INFO: stderr: "" Apr 16 00:49:04.929: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 16 00:49:04.929: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-32' Apr 16 00:49:05.046: INFO: stderr: "" Apr 16 00:49:05.046: INFO: stdout: "update-demo-nautilus-7lrgl update-demo-nautilus-znhxb " Apr 16 00:49:05.046: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7lrgl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-32' Apr 16 00:49:05.151: INFO: stderr: "" Apr 16 00:49:05.151: INFO: stdout: "" Apr 16 00:49:05.151: INFO: update-demo-nautilus-7lrgl is created but not running Apr 16 00:49:10.151: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-32' Apr 16 00:49:10.262: INFO: stderr: "" Apr 16 00:49:10.262: INFO: stdout: "update-demo-nautilus-7lrgl update-demo-nautilus-znhxb " Apr 16 00:49:10.262: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7lrgl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-32' Apr 16 00:49:10.350: INFO: stderr: "" Apr 16 00:49:10.350: INFO: stdout: "true" Apr 16 00:49:10.350: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7lrgl -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-32' Apr 16 00:49:10.437: INFO: stderr: "" Apr 16 00:49:10.437: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 16 00:49:10.437: INFO: validating pod update-demo-nautilus-7lrgl Apr 16 00:49:10.441: INFO: got data: { "image": "nautilus.jpg" } Apr 16 00:49:10.441: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 16 00:49:10.441: INFO: update-demo-nautilus-7lrgl is verified up and running Apr 16 00:49:10.441: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-znhxb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-32' Apr 16 00:49:10.541: INFO: stderr: "" Apr 16 00:49:10.541: INFO: stdout: "true" Apr 16 00:49:10.541: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-znhxb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-32' Apr 16 00:49:10.646: INFO: stderr: "" Apr 16 00:49:10.646: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 16 00:49:10.646: INFO: validating pod update-demo-nautilus-znhxb Apr 16 00:49:10.651: INFO: got data: { "image": "nautilus.jpg" } Apr 16 00:49:10.651: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 16 00:49:10.651: INFO: update-demo-nautilus-znhxb is verified up and running STEP: using delete to clean up resources Apr 16 00:49:10.651: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-32' Apr 16 00:49:10.750: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 16 00:49:10.751: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 16 00:49:10.751: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-32' Apr 16 00:49:10.847: INFO: stderr: "No resources found in kubectl-32 namespace.\n" Apr 16 00:49:10.847: INFO: stdout: "" Apr 16 00:49:10.847: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-32 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 16 00:49:10.989: INFO: stderr: "" Apr 16 00:49:10.989: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:49:10.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-32" for this suite. 
• [SLOW TEST:6.445 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":275,"completed":266,"skipped":4536,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:49:10.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-7c622d26-ea37-415a-aadc-c1d76c54e070 STEP: Creating a pod to test consume configMaps Apr 16 00:49:11.337: INFO: Waiting up to 5m0s for pod "pod-configmaps-0faf9cca-f285-40e4-a336-8909bd88aa5b" in namespace "configmap-6124" to be "Succeeded or Failed" Apr 16 00:49:11.339: INFO: Pod "pod-configmaps-0faf9cca-f285-40e4-a336-8909bd88aa5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.599592ms Apr 16 00:49:13.348: INFO: Pod "pod-configmaps-0faf9cca-f285-40e4-a336-8909bd88aa5b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.011460908s Apr 16 00:49:15.352: INFO: Pod "pod-configmaps-0faf9cca-f285-40e4-a336-8909bd88aa5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015139054s STEP: Saw pod success Apr 16 00:49:15.352: INFO: Pod "pod-configmaps-0faf9cca-f285-40e4-a336-8909bd88aa5b" satisfied condition "Succeeded or Failed" Apr 16 00:49:15.354: INFO: Trying to get logs from node latest-worker pod pod-configmaps-0faf9cca-f285-40e4-a336-8909bd88aa5b container configmap-volume-test: STEP: delete the pod Apr 16 00:49:15.424: INFO: Waiting for pod pod-configmaps-0faf9cca-f285-40e4-a336-8909bd88aa5b to disappear Apr 16 00:49:15.431: INFO: Pod pod-configmaps-0faf9cca-f285-40e4-a336-8909bd88aa5b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:49:15.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6124" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":267,"skipped":4537,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:49:15.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override command Apr 16 00:49:15.534: INFO: Waiting up to 5m0s for pod "client-containers-286edc9e-dbc2-4137-bedd-664bcc0f2751" in namespace "containers-9472" to be "Succeeded or Failed" Apr 16 00:49:15.607: INFO: Pod "client-containers-286edc9e-dbc2-4137-bedd-664bcc0f2751": Phase="Pending", Reason="", readiness=false. Elapsed: 72.546762ms Apr 16 00:49:17.610: INFO: Pod "client-containers-286edc9e-dbc2-4137-bedd-664bcc0f2751": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076464677s Apr 16 00:49:19.618: INFO: Pod "client-containers-286edc9e-dbc2-4137-bedd-664bcc0f2751": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.084050832s STEP: Saw pod success Apr 16 00:49:19.618: INFO: Pod "client-containers-286edc9e-dbc2-4137-bedd-664bcc0f2751" satisfied condition "Succeeded or Failed" Apr 16 00:49:19.622: INFO: Trying to get logs from node latest-worker2 pod client-containers-286edc9e-dbc2-4137-bedd-664bcc0f2751 container test-container: STEP: delete the pod Apr 16 00:49:19.647: INFO: Waiting for pod client-containers-286edc9e-dbc2-4137-bedd-664bcc0f2751 to disappear Apr 16 00:49:19.650: INFO: Pod client-containers-286edc9e-dbc2-4137-bedd-664bcc0f2751 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:49:19.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9472" for this suite. 
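The "override the image's default command (docker entrypoint)" spec above exercises a standard pod-spec mechanism: `spec.containers[].command` replaces the image's ENTRYPOINT, and `spec.containers[].args` replaces its CMD. A hedged sketch of such a manifest, expressed as a Python dict (the pod and image names are illustrative, not the ones the conformance test actually deploys):

```python
# Illustrative pod manifest: command overrides ENTRYPOINT, args overrides CMD.
pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "client-containers-example"},  # hypothetical name
    "spec": {
        "restartPolicy": "Never",
        "containers": [
            {
                "name": "test-container",
                "image": "busybox",                 # illustrative image
                "command": ["/bin/echo"],           # replaces image ENTRYPOINT
                "args": ["override", "arguments"],  # replaces image CMD
            }
        ],
    },
}

container = pod_manifest["spec"]["containers"][0]
```

The test then waits for the pod to succeed and inspects the container's output to confirm the override took effect, as the log shows.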
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":268,"skipped":4557,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:49:19.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6182.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6182.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6182.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6182.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-6182.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6182.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 16 00:49:25.849: INFO: DNS probes using dns-6182/dns-test-a8e28239-a6e7-4412-8d30-465d29063c63 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:49:25.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6182" for this suite. 
• [SLOW TEST:6.307 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":269,"skipped":4567,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:49:25.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 16 00:49:26.026: INFO: Waiting up to 5m0s for pod "busybox-user-65534-02ec4037-ac9e-4e81-9fd7-9d9bc3b5747f" in namespace "security-context-test-2871" to be "Succeeded or Failed" Apr 16 00:49:26.223: INFO: Pod "busybox-user-65534-02ec4037-ac9e-4e81-9fd7-9d9bc3b5747f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 197.377661ms Apr 16 00:49:28.228: INFO: Pod "busybox-user-65534-02ec4037-ac9e-4e81-9fd7-9d9bc3b5747f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.201839158s Apr 16 00:49:30.232: INFO: Pod "busybox-user-65534-02ec4037-ac9e-4e81-9fd7-9d9bc3b5747f": Phase="Running", Reason="", readiness=true. Elapsed: 4.206192879s Apr 16 00:49:32.236: INFO: Pod "busybox-user-65534-02ec4037-ac9e-4e81-9fd7-9d9bc3b5747f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.209997984s Apr 16 00:49:32.236: INFO: Pod "busybox-user-65534-02ec4037-ac9e-4e81-9fd7-9d9bc3b5747f" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:49:32.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2871" for this suite. • [SLOW TEST:6.280 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 When creating a container with runAsUser /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":270,"skipped":4619,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:49:32.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override all Apr 16 00:49:32.351: INFO: Waiting up to 5m0s for pod "client-containers-a5d78fb8-51b1-43bc-9178-d18ae53eb83f" in namespace "containers-4524" to be "Succeeded or Failed" Apr 16 00:49:32.360: INFO: Pod "client-containers-a5d78fb8-51b1-43bc-9178-d18ae53eb83f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.977559ms Apr 16 00:49:34.363: INFO: Pod "client-containers-a5d78fb8-51b1-43bc-9178-d18ae53eb83f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012443635s Apr 16 00:49:36.367: INFO: Pod "client-containers-a5d78fb8-51b1-43bc-9178-d18ae53eb83f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016114848s STEP: Saw pod success Apr 16 00:49:36.367: INFO: Pod "client-containers-a5d78fb8-51b1-43bc-9178-d18ae53eb83f" satisfied condition "Succeeded or Failed" Apr 16 00:49:36.370: INFO: Trying to get logs from node latest-worker pod client-containers-a5d78fb8-51b1-43bc-9178-d18ae53eb83f container test-container: STEP: delete the pod Apr 16 00:49:36.402: INFO: Waiting for pod client-containers-a5d78fb8-51b1-43bc-9178-d18ae53eb83f to disappear Apr 16 00:49:36.414: INFO: Pod client-containers-a5d78fb8-51b1-43bc-9178-d18ae53eb83f no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 16 00:49:36.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4524" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":271,"skipped":4632,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 16 00:49:36.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-33eaaf0f-f435-4154-a130-60a7c52c9a4c in namespace container-probe-7828
Apr 16 00:49:40.502: INFO: Started pod liveness-33eaaf0f-f435-4154-a130-60a7c52c9a4c in namespace container-probe-7828
STEP: checking the pod's current state and verifying that restartCount is present
Apr 16 00:49:40.505: INFO: Initial restart count of pod liveness-33eaaf0f-f435-4154-a130-60a7c52c9a4c is 0
Apr 16 00:49:56.551: INFO: Restart count of pod container-probe-7828/liveness-33eaaf0f-f435-4154-a130-60a7c52c9a4c is now 1 (16.045823691s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:49:56.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7828" for this suite.

• [SLOW TEST:20.175 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":272,"skipped":4667,"failed":0}
SSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:49:56.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-350e4790-a561-4b45-b1d5-b01399d2495c
STEP: Creating a pod to test consume secrets
Apr 16 00:49:56.685: INFO: Waiting up to 5m0s for pod "pod-secrets-f6105083-fb3b-4e09-a2b5-a7e48f79bb59" in namespace "secrets-3664" to be "Succeeded or Failed"
Apr 16 00:49:56.691: INFO: Pod "pod-secrets-f6105083-fb3b-4e09-a2b5-a7e48f79bb59": Phase="Pending", Reason="", readiness=false. Elapsed: 5.941874ms
Apr 16 00:49:58.837: INFO: Pod "pod-secrets-f6105083-fb3b-4e09-a2b5-a7e48f79bb59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1518121s
Apr 16 00:50:00.841: INFO: Pod "pod-secrets-f6105083-fb3b-4e09-a2b5-a7e48f79bb59": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156370488s
Apr 16 00:50:02.845: INFO: Pod "pod-secrets-f6105083-fb3b-4e09-a2b5-a7e48f79bb59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.160666043s
STEP: Saw pod success
Apr 16 00:50:02.845: INFO: Pod "pod-secrets-f6105083-fb3b-4e09-a2b5-a7e48f79bb59" satisfied condition "Succeeded or Failed"
Apr 16 00:50:02.849: INFO: Trying to get logs from node latest-worker pod pod-secrets-f6105083-fb3b-4e09-a2b5-a7e48f79bb59 container secret-volume-test: 
STEP: delete the pod
Apr 16 00:50:02.870: INFO: Waiting for pod pod-secrets-f6105083-fb3b-4e09-a2b5-a7e48f79bb59 to disappear
Apr 16 00:50:02.882: INFO: Pod pod-secrets-f6105083-fb3b-4e09-a2b5-a7e48f79bb59 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:50:02.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3664" for this suite.
• [SLOW TEST:6.303 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":273,"skipped":4670,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:50:02.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 16 00:50:03.027: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-a762148f-5374-4bbd-9c26-7da2cd039bc7" in namespace "security-context-test-7550" to be "Succeeded or Failed"
Apr 16 00:50:03.030: INFO: Pod "busybox-readonly-false-a762148f-5374-4bbd-9c26-7da2cd039bc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.998613ms
Apr 16 00:50:05.035: INFO: Pod "busybox-readonly-false-a762148f-5374-4bbd-9c26-7da2cd039bc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007317731s
Apr 16 00:50:07.038: INFO: Pod "busybox-readonly-false-a762148f-5374-4bbd-9c26-7da2cd039bc7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011021332s
Apr 16 00:50:07.038: INFO: Pod "busybox-readonly-false-a762148f-5374-4bbd-9c26-7da2cd039bc7" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:50:07.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7550" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":274,"skipped":4699,"failed":0}
SS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 16 00:50:07.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-9e9bdd1b-e672-418e-8f12-b46b4407ee58 in namespace container-probe-7539
Apr 16 00:50:11.131: INFO: Started pod busybox-9e9bdd1b-e672-418e-8f12-b46b4407ee58 in namespace container-probe-7539
STEP: checking the pod's current state and verifying that restartCount is present
Apr 16 00:50:11.134: INFO: Initial restart count of pod busybox-9e9bdd1b-e672-418e-8f12-b46b4407ee58 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 16 00:54:12.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7539" for this suite.

• [SLOW TEST:245.629 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":275,"skipped":4701,"failed":0}
SSSSSSSSSSSSSSSS
Apr 16 00:54:12.678: INFO: Running AfterSuite actions on all nodes
Apr 16 00:54:12.678: INFO: Running AfterSuite actions on node 1
Apr 16 00:54:12.678: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0}

Ran 275 of 4992 Specs in 4667.583 seconds
SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped
PASS