I0320 21:06:52.060649 7 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0320 21:06:52.060903 7 e2e.go:109] Starting e2e run "4059d5cd-aab2-469a-a920-f1d32f0e9d4f" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1584738410 - Will randomize all specs
Will run 278 of 4843 specs
Mar 20 21:06:52.111: INFO: >>> kubeConfig: /root/.kube/config
Mar 20 21:06:52.116: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 20 21:06:52.144: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 20 21:06:52.175: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 20 21:06:52.175: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 20 21:06:52.175: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 20 21:06:52.182: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 20 21:06:52.182: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 20 21:06:52.182: INFO: e2e test version: v1.17.3
Mar 20 21:06:52.182: INFO: kube-apiserver version: v1.17.2
Mar 20 21:06:52.182: INFO: >>> kubeConfig: /root/.kube/config
Mar 20 21:06:52.186: INFO: Cluster IP family: ipv4
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 20 21:06:52.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
Mar 20 21:06:52.273: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-c05ef395-f790-4279-bc13-857589a503a9
STEP: Creating configMap with name cm-test-opt-upd-7171eedf-2a36-4419-9143-2047616afc6c
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-c05ef395-f790-4279-bc13-857589a503a9
STEP: Updating configmap cm-test-opt-upd-7171eedf-2a36-4419-9143-2047616afc6c
STEP: Creating configMap with name cm-test-opt-create-c6e19a38-b64d-44fa-b7b6-cefac652cc49
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 20 21:08:20.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-411" for this suite.
• [SLOW TEST:88.728 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":12,"failed":0}
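The test above mounts ConfigMap volumes with the "optional" flag set, then deletes one ConfigMap, updates a second, and creates a third, waiting for the kubelet to reflect each change inside the running pod. A minimal sketch of the same pattern, assuming illustrative names rather than the generated ones from this run:

# Optional ConfigMap volume: the pod starts (and keeps running) even if the
# ConfigMap is missing or later deleted. Names here are made up for illustration.
kubectl create configmap cm-opt --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-volume-probe
spec:
  containers:
  - name: probe
    image: busybox:1.31
    command: ["sh", "-c", "while true; do cat /etc/cm/data-1 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: cm-opt
      optional: true
EOF
# Deleting (or updating) the ConfigMap is eventually reflected in the mounted
# volume after the kubelet's periodic sync, which is the update the test waits
# to observe.
kubectl delete configmap cm-opt
kubectl logs -f cm-volume-probe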
SSSSSS
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 20 21:08:20.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
Mar 20 21:08:21.506: INFO: created pod pod-service-account-defaultsa
Mar 20 21:08:21.506: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Mar 20 21:08:21.513: INFO: created pod pod-service-account-mountsa
Mar 20 21:08:21.513: INFO: pod pod-service-account-mountsa service account token volume mount: true
Mar 20 21:08:21.578: INFO: created pod pod-service-account-nomountsa
Mar 20 21:08:21.579: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Mar 20 21:08:21.603: INFO: created pod pod-service-account-defaultsa-mountspec
Mar 20 21:08:21.603: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Mar 20 21:08:21.648: INFO: created pod pod-service-account-mountsa-mountspec
Mar 20 21:08:21.648: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Mar 20 21:08:21.678: INFO: created pod pod-service-account-nomountsa-mountspec
Mar 20 21:08:21.678: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Mar 20 21:08:21.735: INFO: created pod pod-service-account-defaultsa-nomountspec
Mar 20 21:08:21.735: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Mar 20 21:08:21.750: INFO: created pod pod-service-account-mountsa-nomountspec
Mar 20 21:08:21.750: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Mar 20 21:08:21.781: INFO: created pod pod-service-account-nomountsa-nomountspec
Mar 20 21:08:21.781: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 20 21:08:21.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4817" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":2,"skipped":18,"failed":0}
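The matrix of pods above (defaultsa/mountsa/nomountsa crossed with an unset, true, or false pod-spec field) shows the precedence rule the test asserts: when automountServiceAccountToken is set on the pod spec it wins, otherwise the ServiceAccount's setting decides. A sketch of the opt-out case, with made-up names:

# Pod-level opt-out of API token automount; the pod-spec field overrides
# whatever the ServiceAccount says. Names are illustrative.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: no-token-pod
spec:
  automountServiceAccountToken: false
  containers:
  - name: main
    image: busybox:1.31
    command: ["sleep", "3600"]
EOF
# Verify that no service account token volume was injected, which is the same
# check the test performs:
kubectl get pod no-token-pod -o jsonpath='{.spec.volumes}'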
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 20 21:08:21.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 20 21:08:22.630: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 20 21:08:24.640: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720335302, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720335302, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720335303, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720335302, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 20 21:08:26.668: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720335302, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720335302, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720335303, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720335302, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 20 21:08:28.705: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720335302, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720335302, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720335303, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720335302, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 20 21:08:30.675: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720335302, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720335302, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720335303, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720335302, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 20 21:08:32.729: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720335302, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720335302, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720335303, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720335302, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 20 21:08:35.674: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 20 21:08:35.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7965-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 20 21:08:36.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-928" for this suite.
STEP: Destroying namespace "webhook-928-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:14.491 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":3,"skipped":20,"failed":0}
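The repeated v1.DeploymentStatus dumps above are the framework polling until the webhook backend's Deployment reports Available. The same wait can be expressed directly with kubectl; this is a sketch assuming the deployment and namespace names from this run, while the test namespace still exists:

# Wait for the webhook backend Deployment the way the framework does above.
kubectl -n webhook-928 rollout status deployment/sample-webhook-deployment --timeout=2m
# Or inspect the same "Available"/"Progressing" conditions the log prints:
kubectl -n webhook-928 get deployment sample-webhook-deployment \
  -o jsonpath='{range .status.conditions[*]}{.type}={.status} ({.reason}){"\n"}{end}'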
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 20 21:08:36.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7281 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7281;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7281 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7281;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7281.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7281.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7281.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7281.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7281.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7281.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7281.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7281.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7281.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7281.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7281.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7281.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7281.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 192.173.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.173.192_udp@PTR;check="$$(dig +tcp +noall +answer +search 192.173.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.173.192_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7281 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7281;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7281 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7281;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7281.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7281.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7281.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7281.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7281.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7281.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7281.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7281.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7281.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7281.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7281.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7281.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7281.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 192.173.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.173.192_udp@PTR;check="$$(dig +tcp +noall +answer +search 192.173.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.173.192_tcp@PTR;sleep 1; done
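The doubled dollar signs in the probe commands above are not shell PIDs: Kubernetes expands $(VAR) references in a container's command and args, and "$$" is the escape that leaves a literal "$" for the container's shell. De-sugared, each probe in the loop is just:

# One probe from the loop above, with the $$ escaping removed (as the shell
# inside the probe pod actually runs it):
check="$(dig +notcp +noall +answer +search dns-test-service A)" \
  && test -n "$check" \
  && echo OK > /results/wheezy_udp@dns-test-service

Each name is tried over UDP (+notcp) and TCP (+tcp), relying on the pod's DNS search path to complete the partial names; a non-empty answer writes an OK marker file that the test later collects from the pod.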
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 20 21:08:42.545: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7281/dns-test-96f95359-328f-47df-90f9-7b0d568ea203: the server could not find the requested resource (get pods dns-test-96f95359-328f-47df-90f9-7b0d568ea203)
Mar 20 21:08:42.549: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7281/dns-test-96f95359-328f-47df-90f9-7b0d568ea203: the server could not find the requested resource (get pods dns-test-96f95359-328f-47df-90f9-7b0d568ea203)
Mar 20 21:08:42.552: INFO: Unable to read wheezy_udp@dns-test-service.dns-7281 from pod dns-7281/dns-test-96f95359-328f-47df-90f9-7b0d568ea203: the server could not find the requested resource (get pods dns-test-96f95359-328f-47df-90f9-7b0d568ea203)
Mar 20 21:08:42.556: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7281 from pod dns-7281/dns-test-96f95359-328f-47df-90f9-7b0d568ea203: the server could not find the requested resource (get pods dns-test-96f95359-328f-47df-90f9-7b0d568ea203)
Mar 20 21:08:42.558: INFO: Unable to read wheezy_udp@dns-test-service.dns-7281.svc from pod dns-7281/dns-test-96f95359-328f-47df-90f9-7b0d568ea203: the server could not find the requested resource (get pods dns-test-96f95359-328f-47df-90f9-7b0d568ea203)
Mar 20 21:08:42.561: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7281.svc from pod dns-7281/dns-test-96f95359-328f-47df-90f9-7b0d568ea203: the server could not find the requested resource (get pods dns-test-96f95359-328f-47df-90f9-7b0d568ea203)
Mar 20 21:08:42.564: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7281.svc from pod dns-7281/dns-test-96f95359-328f-47df-90f9-7b0d568ea203: the server could not find the requested resource (get pods dns-test-96f95359-328f-47df-90f9-7b0d568ea203)
Mar 20 21:08:42.567: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7281.svc from pod dns-7281/dns-test-96f95359-328f-47df-90f9-7b0d568ea203: the server could not find the requested resource (get pods dns-test-96f95359-328f-47df-90f9-7b0d568ea203)
Mar 20 21:08:42.588: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7281/dns-test-96f95359-328f-47df-90f9-7b0d568ea203: the server could not find the requested resource (get pods dns-test-96f95359-328f-47df-90f9-7b0d568ea203)
Mar 20 21:08:42.591: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7281/dns-test-96f95359-328f-47df-90f9-7b0d568ea203: the server could not find the requested resource (get pods dns-test-96f95359-328f-47df-90f9-7b0d568ea203)
Mar 20 21:08:42.595: INFO: Unable to read jessie_udp@dns-test-service.dns-7281 from pod dns-7281/dns-test-96f95359-328f-47df-90f9-7b0d568ea203: the server could not find the requested resource (get pods dns-test-96f95359-328f-47df-90f9-7b0d568ea203)
Mar 20 21:08:42.598: INFO: Unable to read jessie_tcp@dns-test-service.dns-7281 from pod dns-7281/dns-test-96f95359-328f-47df-90f9-7b0d568ea203: the server could not find the requested resource (get pods dns-test-96f95359-328f-47df-90f9-7b0d568ea203)
Mar 20 21:08:42.602: INFO: Unable to read jessie_udp@dns-test-service.dns-7281.svc from pod dns-7281/dns-test-96f95359-328f-47df-90f9-7b0d568ea203: the server could not find the requested resource (get pods dns-test-96f95359-328f-47df-90f9-7b0d568ea203)
Mar 20 21:08:42.606: INFO: Unable to read jessie_tcp@dns-test-service.dns-7281.svc from pod dns-7281/dns-test-96f95359-328f-47df-90f9-7b0d568ea203: the server could not find the requested resource (get pods dns-test-96f95359-328f-47df-90f9-7b0d568ea203)
Mar 20 21:08:42.609: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7281.svc from pod dns-7281/dns-test-96f95359-328f-47df-90f9-7b0d568ea203: the server could not find the requested resource (get pods dns-test-96f95359-328f-47df-90f9-7b0d568ea203)
Mar 20 21:08:42.612: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7281.svc from pod dns-7281/dns-test-96f95359-328f-47df-90f9-7b0d568ea203: the server could not find the requested resource (get pods dns-test-96f95359-328f-47df-90f9-7b0d568ea203)
Mar 20 21:08:42.632: INFO: Lookups using dns-7281/dns-test-96f95359-328f-47df-90f9-7b0d568ea203 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7281 wheezy_tcp@dns-test-service.dns-7281 wheezy_udp@dns-test-service.dns-7281.svc wheezy_tcp@dns-test-service.dns-7281.svc wheezy_udp@_http._tcp.dns-test-service.dns-7281.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7281.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7281 jessie_tcp@dns-test-service.dns-7281 jessie_udp@dns-test-service.dns-7281.svc jessie_tcp@dns-test-service.dns-7281.svc jessie_udp@_http._tcp.dns-test-service.dns-7281.svc jessie_tcp@_http._tcp.dns-test-service.dns-7281.svc]
Mar 20 21:08:47.734: INFO: Lookups using dns-7281/dns-test-96f95359-328f-47df-90f9-7b0d568ea203 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7281 wheezy_tcp@dns-test-service.dns-7281 wheezy_udp@dns-test-service.dns-7281.svc wheezy_tcp@dns-test-service.dns-7281.svc wheezy_udp@_http._tcp.dns-test-service.dns-7281.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7281.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7281 jessie_tcp@dns-test-service.dns-7281 jessie_udp@dns-test-service.dns-7281.svc jessie_tcp@dns-test-service.dns-7281.svc jessie_udp@_http._tcp.dns-test-service.dns-7281.svc jessie_tcp@_http._tcp.dns-test-service.dns-7281.svc]
Mar 20 21:08:52.711: INFO: Lookups using dns-7281/dns-test-96f95359-328f-47df-90f9-7b0d568ea203 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7281 wheezy_tcp@dns-test-service.dns-7281 wheezy_udp@dns-test-service.dns-7281.svc wheezy_tcp@dns-test-service.dns-7281.svc wheezy_udp@_http._tcp.dns-test-service.dns-7281.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7281.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7281 jessie_tcp@dns-test-service.dns-7281 jessie_udp@dns-test-service.dns-7281.svc jessie_tcp@dns-test-service.dns-7281.svc jessie_udp@_http._tcp.dns-test-service.dns-7281.svc jessie_tcp@_http._tcp.dns-test-service.dns-7281.svc]
Mar 20 21:08:57.749: INFO: Lookups using dns-7281/dns-test-96f95359-328f-47df-90f9-7b0d568ea203 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7281 wheezy_tcp@dns-test-service.dns-7281 wheezy_udp@dns-test-service.dns-7281.svc wheezy_tcp@dns-test-service.dns-7281.svc wheezy_udp@_http._tcp.dns-test-service.dns-7281.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7281.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7281 jessie_tcp@dns-test-service.dns-7281 jessie_udp@dns-test-service.dns-7281.svc jessie_tcp@dns-test-service.dns-7281.svc jessie_udp@_http._tcp.dns-test-service.dns-7281.svc jessie_tcp@_http._tcp.dns-test-service.dns-7281.svc]
Mar 20 21:09:02.727: INFO: Lookups using dns-7281/dns-test-96f95359-328f-47df-90f9-7b0d568ea203 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7281 wheezy_tcp@dns-test-service.dns-7281 wheezy_udp@dns-test-service.dns-7281.svc wheezy_tcp@dns-test-service.dns-7281.svc wheezy_udp@_http._tcp.dns-test-service.dns-7281.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7281.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7281 jessie_tcp@dns-test-service.dns-7281 jessie_udp@dns-test-service.dns-7281.svc jessie_tcp@dns-test-service.dns-7281.svc jessie_udp@_http._tcp.dns-test-service.dns-7281.svc jessie_tcp@_http._tcp.dns-test-service.dns-7281.svc]
Mar 20 21:09:07.724: INFO: Lookups using dns-7281/dns-test-96f95359-328f-47df-90f9-7b0d568ea203 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7281 wheezy_tcp@dns-test-service.dns-7281 wheezy_udp@dns-test-service.dns-7281.svc wheezy_tcp@dns-test-service.dns-7281.svc wheezy_udp@_http._tcp.dns-test-service.dns-7281.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7281.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7281 jessie_tcp@dns-test-service.dns-7281 jessie_udp@dns-test-service.dns-7281.svc jessie_tcp@dns-test-service.dns-7281.svc jessie_udp@_http._tcp.dns-test-service.dns-7281.svc jessie_tcp@_http._tcp.dns-test-service.dns-7281.svc]
Mar 20 21:09:12.716: INFO: DNS probes using dns-7281/dns-test-96f95359-328f-47df-90f9-7b0d568ea203 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 20 21:09:12.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7281" for this suite.
• [SLOW TEST:36.793 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":4,"skipped":76,"failed":0}
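While the probe pod is still running, the same partial-name resolution can be spot-checked by hand; the search domains in the pod's resolv.conf are what complete names like "dns-test-service" and "dns-test-service.dns-7281". A sketch against the pod from this run (it is deleted at the end of the test, so substitute any live pod whose image ships dig):

# Manual spot-check of partial-name resolution from inside the probe pod
# (pod and namespace names taken from the run above; illustrative only).
kubectl -n dns-7281 exec dns-test-96f95359-328f-47df-90f9-7b0d568ea203 -- cat /etc/resolv.conf
kubectl -n dns-7281 exec dns-test-96f95359-328f-47df-90f9-7b0d568ea203 -- \
  dig +search +short dns-test-service A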
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 20 21:09:13.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 20 21:09:13.311: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-a9f27aca-f568-4837-a7c1-a440f81c6bce" in namespace "security-context-test-489" to be "success or failure"
Mar 20 21:09:13.331: INFO: Pod "busybox-privileged-false-a9f27aca-f568-4837-a7c1-a440f81c6bce": Phase="Pending", Reason="", readiness=false. Elapsed: 19.494471ms
Mar 20 21:09:15.335: INFO: Pod "busybox-privileged-false-a9f27aca-f568-4837-a7c1-a440f81c6bce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023600338s
Mar 20 21:09:17.339: INFO: Pod "busybox-privileged-false-a9f27aca-f568-4837-a7c1-a440f81c6bce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027828741s
Mar 20 21:09:17.339: INFO: Pod "busybox-privileged-false-a9f27aca-f568-4837-a7c1-a440f81c6bce" satisfied condition "success or failure"
Mar 20 21:09:17.345: INFO: Got logs for pod "busybox-privileged-false-a9f27aca-f568-4837-a7c1-a440f81c6bce": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 20 21:09:17.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-489" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":91,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 20 21:09:17.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 20 21:09:21.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7637" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":106,"failed":0}
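The Kubelet test above runs a command that always fails and asserts the container status carries a terminated state with a reason. The same field can be read directly; the pod name below is made up, since the log does not print it:

# Reading the terminated reason the test asserts on (pod name is hypothetical;
# for a command that exits non-zero the kubelet reports reason "Error").
kubectl -n kubelet-test-7637 get pod bin-false-pod \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'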
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 20 21:09:21.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 20 21:09:21.520: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Mar 20 21:09:21.533: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 20 21:09:21.538: INFO: Number of nodes with available pods: 0
Mar 20 21:09:21.538: INFO: Node jerma-worker is running more than one daemon pod
Mar 20 21:09:22.541: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 20 21:09:22.544: INFO: Number of nodes with available pods: 0
Mar 20 21:09:22.544: INFO: Node jerma-worker is running more than one daemon pod
Mar 20 21:09:23.543: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 20 21:09:23.546: INFO: Number of nodes with available pods: 0
Mar 20 21:09:23.546: INFO: Node jerma-worker is running more than one daemon pod
Mar 20 21:09:24.543: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 20 21:09:24.546: INFO: Number of nodes with available pods: 0
Mar 20 21:09:24.546: INFO: Node jerma-worker is running more than one daemon pod
Mar 20 21:09:25.543: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 20 21:09:25.547: INFO: Number of nodes with available pods: 2
Mar 20 21:09:25.547: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Mar 20 21:09:25.591: INFO: Wrong image for pod: daemon-set-h9mxv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 20 21:09:25.591: INFO: Wrong image for pod: daemon-set-x2fws. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 20 21:09:25.594: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 20 21:09:26.599: INFO: Wrong image for pod: daemon-set-h9mxv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 20 21:09:26.599: INFO: Wrong image for pod: daemon-set-x2fws. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 20 21:09:26.605: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 20 21:09:27.633: INFO: Wrong image for pod: daemon-set-h9mxv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 20 21:09:27.633: INFO: Wrong image for pod: daemon-set-x2fws. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 20 21:09:27.636: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 20 21:09:28.621: INFO: Wrong image for pod: daemon-set-h9mxv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 20 21:09:28.621: INFO: Pod daemon-set-h9mxv is not available Mar 20 21:09:28.621: INFO: Wrong image for pod: daemon-set-x2fws. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 20 21:09:28.625: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:09:29.599: INFO: Wrong image for pod: daemon-set-h9mxv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 20 21:09:29.599: INFO: Pod daemon-set-h9mxv is not available Mar 20 21:09:29.599: INFO: Wrong image for pod: daemon-set-x2fws. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 20 21:09:29.604: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:09:30.599: INFO: Wrong image for pod: daemon-set-h9mxv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 20 21:09:30.599: INFO: Pod daemon-set-h9mxv is not available Mar 20 21:09:30.599: INFO: Wrong image for pod: daemon-set-x2fws. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 20 21:09:30.602: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:09:31.599: INFO: Wrong image for pod: daemon-set-h9mxv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 20 21:09:31.599: INFO: Pod daemon-set-h9mxv is not available Mar 20 21:09:31.599: INFO: Wrong image for pod: daemon-set-x2fws. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 20 21:09:31.603: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:09:32.599: INFO: Wrong image for pod: daemon-set-h9mxv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 20 21:09:32.599: INFO: Pod daemon-set-h9mxv is not available Mar 20 21:09:32.599: INFO: Wrong image for pod: daemon-set-x2fws. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 20 21:09:32.602: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:09:33.599: INFO: Wrong image for pod: daemon-set-h9mxv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 20 21:09:33.599: INFO: Pod daemon-set-h9mxv is not available Mar 20 21:09:33.599: INFO: Wrong image for pod: daemon-set-x2fws. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 20 21:09:33.603: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:09:34.599: INFO: Wrong image for pod: daemon-set-h9mxv. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 20 21:09:34.599: INFO: Pod daemon-set-h9mxv is not available Mar 20 21:09:34.599: INFO: Wrong image for pod: daemon-set-x2fws. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 20 21:09:34.603: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:09:35.600: INFO: Wrong image for pod: daemon-set-h9mxv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 20 21:09:35.600: INFO: Pod daemon-set-h9mxv is not available Mar 20 21:09:35.600: INFO: Wrong image for pod: daemon-set-x2fws. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 20 21:09:35.604: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:09:36.599: INFO: Wrong image for pod: daemon-set-h9mxv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 20 21:09:36.599: INFO: Pod daemon-set-h9mxv is not available Mar 20 21:09:36.599: INFO: Wrong image for pod: daemon-set-x2fws. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 20 21:09:36.602: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:09:37.599: INFO: Wrong image for pod: daemon-set-h9mxv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 20 21:09:37.599: INFO: Pod daemon-set-h9mxv is not available Mar 20 21:09:37.599: INFO: Wrong image for pod: daemon-set-x2fws. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 20 21:09:37.603: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:09:38.598: INFO: Wrong image for pod: daemon-set-h9mxv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 20 21:09:38.598: INFO: Pod daemon-set-h9mxv is not available Mar 20 21:09:38.598: INFO: Wrong image for pod: daemon-set-x2fws. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 20 21:09:38.627: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:09:39.598: INFO: Pod daemon-set-2bdq7 is not available Mar 20 21:09:39.598: INFO: Wrong image for pod: daemon-set-x2fws. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 20 21:09:39.601: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:09:40.602: INFO: Pod daemon-set-2bdq7 is not available Mar 20 21:09:40.602: INFO: Wrong image for pod: daemon-set-x2fws. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 20 21:09:40.606: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:09:41.607: INFO: Pod daemon-set-2bdq7 is not available Mar 20 21:09:41.607: INFO: Wrong image for pod: daemon-set-x2fws. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 20 21:09:41.611: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:09:42.599: INFO: Wrong image for pod: daemon-set-x2fws. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 20 21:09:42.603: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:09:43.599: INFO: Wrong image for pod: daemon-set-x2fws. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 20 21:09:43.603: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:09:44.603: INFO: Wrong image for pod: daemon-set-x2fws. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 20 21:09:44.603: INFO: Pod daemon-set-x2fws is not available Mar 20 21:09:44.607: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:09:45.599: INFO: Wrong image for pod: daemon-set-x2fws. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 20 21:09:45.599: INFO: Pod daemon-set-x2fws is not available Mar 20 21:09:45.603: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:09:46.599: INFO: Wrong image for pod: daemon-set-x2fws. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 20 21:09:46.599: INFO: Pod daemon-set-x2fws is not available Mar 20 21:09:46.603: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:09:47.599: INFO: Wrong image for pod: daemon-set-x2fws. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 20 21:09:47.599: INFO: Pod daemon-set-x2fws is not available Mar 20 21:09:47.622: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:09:48.598: INFO: Wrong image for pod: daemon-set-x2fws. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 20 21:09:48.598: INFO: Pod daemon-set-x2fws is not available Mar 20 21:09:48.601: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:09:49.599: INFO: Pod daemon-set-dfgnc is not available Mar 20 21:09:49.603: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Mar 20 21:09:49.606: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:09:49.608: INFO: Number of nodes with available pods: 1 Mar 20 21:09:49.608: INFO: Node jerma-worker is running more than one daemon pod Mar 20 21:09:50.682: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:09:50.685: INFO: Number of nodes with available pods: 1 Mar 20 21:09:50.685: INFO: Node jerma-worker is running more than one daemon pod Mar 20 21:09:51.640: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:09:51.669: INFO: Number of nodes with available pods: 1 Mar 20 21:09:51.669: INFO: Node jerma-worker is running more than one daemon pod Mar 20 21:09:52.613: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:09:52.616: INFO: Number of nodes with available pods: 2 Mar 20 21:09:52.616: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6465, will wait for the garbage collector to delete the pods Mar 20 21:09:52.690: INFO: Deleting DaemonSet.extensions daemon-set took: 5.182416ms Mar 20 21:09:52.990: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.267925ms Mar 20 21:09:59.493: INFO: Number of nodes with available pods: 0 Mar 20 21:09:59.493: INFO: Number of running nodes: 0, number of available pods: 0 Mar 20 21:09:59.496: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6465/daemonsets","resourceVersion":"1374724"},"items":null} Mar 20 21:09:59.499: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6465/pods","resourceVersion":"1374724"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:09:59.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6465" for this suite. 
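------------------------------
For reference, the behavior polled above (each daemon pod replaced once the template image changes from httpd:2.4.38-alpine to agnhost:2.8) comes down to a DaemonSet whose update strategy is RollingUpdate plus an image change on its pod template. A minimal client-go sketch, not the suite's own helper code; the "default" namespace, the object names, and the kubeconfig path are assumptions:

package main

import (
	"context"
	"path/filepath"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Kubeconfig path is an assumption; the suite uses /root/.kube/config.
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	labels := map[string]string{"app": "daemon-demo"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-demo"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate is the strategy under test: pods are replaced in place, node by node.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{Type: appsv1.RollingUpdateDaemonSetStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "app",
					Image: "docker.io/library/httpd:2.4.38-alpine",
				}}},
			},
		},
	}
	ctx := context.TODO()
	created, err := cs.AppsV1().DaemonSets("default").Create(ctx, ds, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// Changing the template image is what triggers the rolling update the log polls for.
	created.Spec.Template.Spec.Containers[0].Image = "gcr.io/kubernetes-e2e-test-images/agnhost:2.8"
	if _, err := cs.AppsV1().DaemonSets("default").Update(ctx, created, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}

The tainted control-plane node is skipped above because the DaemonSet's pods carry no toleration for node-role.kubernetes.io/master, which is why only the two workers ever count toward "available".
------------------------------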
• [SLOW TEST:38.096 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":7,"skipped":114,"failed":0} [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:09:59.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-8520c4cb-03ae-4ec5-9cd4-8ec4deba38ac STEP: Creating a pod to test consume configMaps Mar 20 21:09:59.605: INFO: Waiting up to 5m0s for pod "pod-configmaps-d7267b40-1748-4c94-a5c1-aa7353e31e3a" in namespace "configmap-1204" to be "success or failure" Mar 20 21:09:59.623: INFO: Pod "pod-configmaps-d7267b40-1748-4c94-a5c1-aa7353e31e3a": Phase="Pending", Reason="", readiness=false. Elapsed: 17.892234ms Mar 20 21:10:01.662: INFO: Pod "pod-configmaps-d7267b40-1748-4c94-a5c1-aa7353e31e3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056612128s Mar 20 21:10:03.666: INFO: Pod "pod-configmaps-d7267b40-1748-4c94-a5c1-aa7353e31e3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060558891s STEP: Saw pod success Mar 20 21:10:03.666: INFO: Pod "pod-configmaps-d7267b40-1748-4c94-a5c1-aa7353e31e3a" satisfied condition "success or failure" Mar 20 21:10:03.669: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-d7267b40-1748-4c94-a5c1-aa7353e31e3a container configmap-volume-test: STEP: delete the pod Mar 20 21:10:03.685: INFO: Waiting for pod pod-configmaps-d7267b40-1748-4c94-a5c1-aa7353e31e3a to disappear Mar 20 21:10:03.691: INFO: Pod pod-configmaps-d7267b40-1748-4c94-a5c1-aa7353e31e3a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:10:03.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1204" for this suite. 
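------------------------------
The pattern behind this ConfigMap test: create a ConfigMap, mount it as a volume, and let a run-once container print the projected file (the suite then reads the container log, which is why it fetches logs from the configmap-volume-test container). A hedged client-go sketch; busybox and all names below are illustrative, not what the framework actually deploys:

package main

import (
	"context"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-config"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().ConfigMaps("default").Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-configmap-pod"},
		Spec: corev1.PodSpec{
			// Run once and stop, like the "success or failure" pods in the log.
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "reader",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/config/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cfg", MountPath: "/etc/config"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cfg",
				VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "demo-config"},
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------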
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":8,"skipped":114,"failed":0} SSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:10:03.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-f2621cb1-0735-4d72-af23-0772b8187a78 STEP: Creating a pod to test consume secrets Mar 20 21:10:03.776: INFO: Waiting up to 5m0s for pod "pod-secrets-4dde50fd-7b07-49e9-8a93-82a7f3ccd54a" in namespace "secrets-290" to be "success or failure" Mar 20 21:10:03.790: INFO: Pod "pod-secrets-4dde50fd-7b07-49e9-8a93-82a7f3ccd54a": Phase="Pending", Reason="", readiness=false. Elapsed: 13.478294ms Mar 20 21:10:05.794: INFO: Pod "pod-secrets-4dde50fd-7b07-49e9-8a93-82a7f3ccd54a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017721715s Mar 20 21:10:07.799: INFO: Pod "pod-secrets-4dde50fd-7b07-49e9-8a93-82a7f3ccd54a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022335623s STEP: Saw pod success Mar 20 21:10:07.799: INFO: Pod "pod-secrets-4dde50fd-7b07-49e9-8a93-82a7f3ccd54a" satisfied condition "success or failure" Mar 20 21:10:07.802: INFO: Trying to get logs from node jerma-worker pod pod-secrets-4dde50fd-7b07-49e9-8a93-82a7f3ccd54a container secret-volume-test: STEP: delete the pod Mar 20 21:10:07.847: INFO: Waiting for pod pod-secrets-4dde50fd-7b07-49e9-8a93-82a7f3ccd54a to disappear Mar 20 21:10:07.858: INFO: Pod pod-secrets-4dde50fd-7b07-49e9-8a93-82a7f3ccd54a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:10:07.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-290" for this suite. 
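------------------------------
"Consumable in multiple volumes" means one Secret referenced by two volume entries and mounted at two paths in the same pod. A sketch under the same assumptions as the ConfigMap one above (illustrative names, "default" namespace):

package main

import (
	"context"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	sec := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-secret"},
		StringData: map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().Secrets("default").Create(ctx, sec, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-secret-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "reader",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/secret-1/data-1 /etc/secret-2/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-1", MountPath: "/etc/secret-1"},
					{Name: "secret-2", MountPath: "/etc/secret-2"},
				},
			}},
			// Two volumes, same SecretName: the kubelet projects the data twice.
			Volumes: []corev1.Volume{
				{Name: "secret-1", VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{SecretName: "demo-secret"}}},
				{Name: "secret-2", VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{SecretName: "demo-secret"}}},
			},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------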
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":120,"failed":0} SSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:10:07.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0320 21:10:38.470851 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 20 21:10:38.470: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:10:38.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6141" for this suite. 
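------------------------------
What the garbage-collector test verifies: with deleteOptions.PropagationPolicy set to Orphan, deleting a Deployment must leave its ReplicaSet behind instead of cascading, which is why the test waits 30 seconds and then checks the RS still exists. A sketch, assuming a Deployment named "demo-deploy" already exists in "default"; not the framework's code:

package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	// Orphan rather than the default cascading delete: owner references are stripped,
	// so the garbage collector must NOT remove the dependent ReplicaSet.
	orphan := metav1.DeletePropagationOrphan
	if err := cs.AppsV1().Deployments("default").Delete(ctx, "demo-deploy",
		metav1.DeleteOptions{PropagationPolicy: &orphan}); err != nil {
		panic(err)
	}
	// The ReplicaSet should survive the window the test waits through.
	rss, err := cs.AppsV1().ReplicaSets("default").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("replicasets still present: %d\n", len(rss.Items))
}

The "Master node is not registered" warning above is expected on kind-style clusters: the metrics grabber only scrapes scheduler and controller-manager metrics when a node is labeled as the master.
------------------------------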
• [SLOW TEST:30.613 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":10,"skipped":123,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:10:38.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 20 21:10:39.029: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 20 21:10:41.040: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720335439, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720335439, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720335439, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720335438, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 20 21:10:44.077: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Mar 20 21:10:48.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-8543 to-be-attached-pod -i -c=container1' Mar 20 21:10:50.408: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 
20 21:10:50.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8543" for this suite. STEP: Destroying namespace "webhook-8543-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.996 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":11,"skipped":136,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:10:50.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Mar 20 21:10:50.577: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:10:50.601: INFO: Number of nodes with available pods: 0 Mar 20 21:10:50.601: INFO: Node jerma-worker is running more than one daemon pod Mar 20 21:10:51.719: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:10:51.745: INFO: Number of nodes with available pods: 0 Mar 20 21:10:51.745: INFO: Node jerma-worker is running more than one daemon pod Mar 20 21:10:52.605: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:10:52.608: INFO: Number of nodes with available pods: 0 Mar 20 21:10:52.608: INFO: Node jerma-worker is running more than one daemon pod Mar 20 21:10:53.617: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:10:53.637: INFO: Number of nodes with available pods: 0 Mar 20 21:10:53.637: INFO: Node jerma-worker is running more than one daemon pod Mar 20 21:10:54.605: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:10:54.609: INFO: Number of nodes with available pods: 2 Mar 20 21:10:54.609: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Mar 20 21:10:54.626: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:10:54.631: INFO: Number of nodes with available pods: 2 Mar 20 21:10:54.631: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4533, will wait for the garbage collector to delete the pods Mar 20 21:10:55.945: INFO: Deleting DaemonSet.extensions daemon-set took: 35.854375ms Mar 20 21:10:56.045: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.281806ms Mar 20 21:10:59.649: INFO: Number of nodes with available pods: 0 Mar 20 21:10:59.649: INFO: Number of running nodes: 0, number of available pods: 0 Mar 20 21:10:59.652: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4533/daemonsets","resourceVersion":"1375173"},"items":null} Mar 20 21:10:59.669: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4533/pods","resourceVersion":"1375173"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:10:59.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4533" for this suite. 
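------------------------------
The revival check above flips one daemon pod's phase to Failed through the status API and waits for the controller to replace it. Deleting a daemon pod exercises the same self-healing loop and is easier to reproduce outside the suite; a sketch assuming a DaemonSet "daemon-demo" with label app=daemon-demo already runs in "default":

package main

import (
	"context"
	"fmt"
	"path/filepath"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	pods, err := cs.CoreV1().Pods("default").List(ctx, metav1.ListOptions{LabelSelector: "app=daemon-demo"})
	if err != nil || len(pods.Items) == 0 {
		panic("expected at least one daemon pod")
	}
	// Kill one daemon pod; the DaemonSet controller should schedule a replacement.
	if err := cs.CoreV1().Pods("default").Delete(ctx, pods.Items[0].Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	for {
		ds, err := cs.AppsV1().DaemonSets("default").Get(ctx, "daemon-demo", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("desired=%d ready=%d\n", ds.Status.DesiredNumberScheduled, ds.Status.NumberReady)
		if ds.Status.DesiredNumberScheduled > 0 && ds.Status.NumberReady == ds.Status.DesiredNumberScheduled {
			break
		}
		time.Sleep(2 * time.Second)
	}
}
------------------------------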
• [SLOW TEST:9.213 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":12,"skipped":173,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:10:59.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Mar 20 21:10:59.742: INFO: namespace kubectl-5693 Mar 20 21:10:59.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5693' Mar 20 21:11:00.120: INFO: stderr: "" Mar 20 21:11:00.120: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 20 21:11:01.125: INFO: Selector matched 1 pods for map[app:agnhost] Mar 20 21:11:01.125: INFO: Found 0 / 1 Mar 20 21:11:02.154: INFO: Selector matched 1 pods for map[app:agnhost] Mar 20 21:11:02.154: INFO: Found 0 / 1 Mar 20 21:11:03.126: INFO: Selector matched 1 pods for map[app:agnhost] Mar 20 21:11:03.126: INFO: Found 1 / 1 Mar 20 21:11:03.126: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 20 21:11:03.129: INFO: Selector matched 1 pods for map[app:agnhost] Mar 20 21:11:03.129: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 20 21:11:03.129: INFO: wait on agnhost-master startup in kubectl-5693 Mar 20 21:11:03.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-8j8jj agnhost-master --namespace=kubectl-5693' Mar 20 21:11:03.250: INFO: stderr: "" Mar 20 21:11:03.250: INFO: stdout: "Paused\n" STEP: exposing RC Mar 20 21:11:03.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-5693' Mar 20 21:11:03.399: INFO: stderr: "" Mar 20 21:11:03.399: INFO: stdout: "service/rm2 exposed\n" Mar 20 21:11:03.405: INFO: Service rm2 in namespace kubectl-5693 found. STEP: exposing service Mar 20 21:11:05.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-5693' Mar 20 21:11:05.553: INFO: stderr: "" Mar 20 21:11:05.553: INFO: stdout: "service/rm3 exposed\n" Mar 20 21:11:05.559: INFO: Service rm3 in namespace kubectl-5693 found. 
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:11:07.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5693" for this suite. • [SLOW TEST:7.888 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1295 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":13,"skipped":175,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:11:07.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-4b23d98a-aa42-446f-829e-e6954cde967f in namespace container-probe-2492 Mar 20 21:11:11.690: INFO: Started pod liveness-4b23d98a-aa42-446f-829e-e6954cde967f in namespace container-probe-2492 STEP: checking the pod's current state and verifying that restartCount is present Mar 20 21:11:11.693: INFO: Initial restart count of pod liveness-4b23d98a-aa42-446f-829e-e6954cde967f is 0 Mar 20 21:11:25.731: INFO: Restart count of pod container-probe-2492/liveness-4b23d98a-aa42-446f-829e-e6954cde967f is now 1 (14.037516598s elapsed) Mar 20 21:11:45.772: INFO: Restart count of pod container-probe-2492/liveness-4b23d98a-aa42-446f-829e-e6954cde967f is now 2 (34.079092058s elapsed) Mar 20 21:12:05.816: INFO: Restart count of pod container-probe-2492/liveness-4b23d98a-aa42-446f-829e-e6954cde967f is now 3 (54.122259828s elapsed) Mar 20 21:12:25.856: INFO: Restart count of pod container-probe-2492/liveness-4b23d98a-aa42-446f-829e-e6954cde967f is now 4 (1m14.163016473s elapsed) Mar 20 21:13:29.989: INFO: Restart count of pod container-probe-2492/liveness-4b23d98a-aa42-446f-829e-e6954cde967f is now 5 (2m18.295370335s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:13:30.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2492" for this suite. 
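------------------------------
The monotonically increasing restart count above comes from a liveness probe that keeps failing: each failure kills the container, the kubelet restarts it, and restartCount only ever climbs. A sketch using the classic touch-then-remove pattern from the Kubernetes docs rather than whatever image the suite runs; probe timings and names are illustrative:

package main

import (
	"context"
	"fmt"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The probe fails once /tmp/healthy disappears, so the kubelet restarts the container forever.
	probe := &corev1.Probe{InitialDelaySeconds: 5, PeriodSeconds: 5, FailureThreshold: 1}
	probe.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/healthy"}}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-demo"},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name:          "app",
			Image:         "busybox",
			Command:       []string{"sh", "-c", "touch /tmp/healthy; sleep 10; rm /tmp/healthy; sleep 600"},
			LivenessProbe: probe,
		}}},
	}
	ctx := context.TODO()
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// Watch restartCount increase, as the test's polling does.
	for i := 0; i < 10; i++ {
		p, err := cs.CoreV1().Pods("default").Get(ctx, "liveness-demo", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if len(p.Status.ContainerStatuses) > 0 {
			fmt.Println("restartCount:", p.Status.ContainerStatuses[0].RestartCount)
		}
		time.Sleep(15 * time.Second)
	}
}
------------------------------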
• [SLOW TEST:142.454 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":14,"skipped":208,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:13:30.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 20 21:13:30.116: INFO: Waiting up to 5m0s for pod "pod-16d4ee17-628d-49c2-a9a5-da05070365a3" in namespace "emptydir-3546" to be "success or failure" Mar 20 21:13:30.180: INFO: Pod "pod-16d4ee17-628d-49c2-a9a5-da05070365a3": Phase="Pending", Reason="", readiness=false. Elapsed: 63.620704ms Mar 20 21:13:32.198: INFO: Pod "pod-16d4ee17-628d-49c2-a9a5-da05070365a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081220547s Mar 20 21:13:34.201: INFO: Pod "pod-16d4ee17-628d-49c2-a9a5-da05070365a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.084367533s STEP: Saw pod success Mar 20 21:13:34.201: INFO: Pod "pod-16d4ee17-628d-49c2-a9a5-da05070365a3" satisfied condition "success or failure" Mar 20 21:13:34.203: INFO: Trying to get logs from node jerma-worker2 pod pod-16d4ee17-628d-49c2-a9a5-da05070365a3 container test-container: STEP: delete the pod Mar 20 21:13:34.235: INFO: Waiting for pod pod-16d4ee17-628d-49c2-a9a5-da05070365a3 to disappear Mar 20 21:13:34.239: INFO: Pod pod-16d4ee17-628d-49c2-a9a5-da05070365a3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:13:34.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3546" for this suite. 
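------------------------------
The "(root,0777,tmpfs)" triple in the test name encodes the user the container runs as, the mode bits expected on the mount, and the emptyDir medium. Memory-backed emptyDir is just the Medium field on the volume source; the "(root,0666,default)" case later in this log differs only in leaving Medium at its zero value and checking different mode bits. A sketch with illustrative names (busybox's stat and /proc/mounts are used here in place of the suite's mounttest image):

package main

import (
	"context"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "checker",
				Image: "busybox",
				// Print the mount's mode bits and confirm it is tmpfs-backed.
				Command:      []string{"sh", "-c", "stat -c %a /mnt/test && grep /mnt/test /proc/mounts"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt/test"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{
					// Medium=Memory is the "tmpfs" in the test name; omit it for the default medium.
					Medium: corev1.StorageMediumMemory,
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------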
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":214,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:13:34.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 20 21:13:34.307: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1a9e6907-2dc8-44c7-a9b9-3c9ae911794d" in namespace "projected-3727" to be "success or failure" Mar 20 21:13:34.322: INFO: Pod "downwardapi-volume-1a9e6907-2dc8-44c7-a9b9-3c9ae911794d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.758439ms Mar 20 21:13:36.326: INFO: Pod "downwardapi-volume-1a9e6907-2dc8-44c7-a9b9-3c9ae911794d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019123469s Mar 20 21:13:38.331: INFO: Pod "downwardapi-volume-1a9e6907-2dc8-44c7-a9b9-3c9ae911794d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023530321s STEP: Saw pod success Mar 20 21:13:38.331: INFO: Pod "downwardapi-volume-1a9e6907-2dc8-44c7-a9b9-3c9ae911794d" satisfied condition "success or failure" Mar 20 21:13:38.334: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-1a9e6907-2dc8-44c7-a9b9-3c9ae911794d container client-container: STEP: delete the pod Mar 20 21:13:38.377: INFO: Waiting for pod downwardapi-volume-1a9e6907-2dc8-44c7-a9b9-3c9ae911794d to disappear Mar 20 21:13:38.389: INFO: Pod downwardapi-volume-1a9e6907-2dc8-44c7-a9b9-3c9ae911794d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:13:38.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3727" for this suite. 
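------------------------------
"Should provide container's cpu limit" is the downward API volume: the kubelet writes the container's own resource limit into a file via resourceFieldRef, so the container can read it without talking to the API server. A sketch; the limit value, divisor, and names are assumptions:

package main

import (
	"context"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "reader",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{DownwardAPI: &corev1.DownwardAPIVolumeSource{
					Items: []corev1.DownwardAPIVolumeFile{{
						Path: "cpu_limit",
						ResourceFieldRef: &corev1.ResourceFieldSelector{
							ContainerName: "reader",
							Resource:      "limits.cpu",
							// With the default divisor of 1 the value is rounded up to whole
							// cores; a 1m divisor makes the file read 500 (millicores).
							Divisor: resource.MustParse("1m"),
						},
					}},
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------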
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":220,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:13:38.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-fff0c65d-0fc9-4737-a857-4f87cffccfac STEP: Creating a pod to test consume secrets Mar 20 21:13:38.470: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ae730220-9771-472d-8325-9fc386146777" in namespace "projected-7390" to be "success or failure" Mar 20 21:13:38.479: INFO: Pod "pod-projected-secrets-ae730220-9771-472d-8325-9fc386146777": Phase="Pending", Reason="", readiness=false. Elapsed: 8.368879ms Mar 20 21:13:40.483: INFO: Pod "pod-projected-secrets-ae730220-9771-472d-8325-9fc386146777": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012213792s Mar 20 21:13:42.487: INFO: Pod "pod-projected-secrets-ae730220-9771-472d-8325-9fc386146777": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016252022s STEP: Saw pod success Mar 20 21:13:42.487: INFO: Pod "pod-projected-secrets-ae730220-9771-472d-8325-9fc386146777" satisfied condition "success or failure" Mar 20 21:13:42.490: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-ae730220-9771-472d-8325-9fc386146777 container projected-secret-volume-test: STEP: delete the pod Mar 20 21:13:42.510: INFO: Waiting for pod pod-projected-secrets-ae730220-9771-472d-8325-9fc386146777 to disappear Mar 20 21:13:42.515: INFO: Pod pod-projected-secrets-ae730220-9771-472d-8325-9fc386146777 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:13:42.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7390" for this suite. 
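------------------------------
The projected-secret variant wires the same secret data through a projected volume, where defaultMode controls the permission bits on the projected files (the suite's image asserts those bits; this sketch only shows the wiring). Names and the 0400 mode are illustrative:

package main

import (
	"context"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	sec := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-secret"},
		StringData: map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().Secrets("default").Create(ctx, sec, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	mode := int32(0400) // the "defaultMode set" part of the test name
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "reader",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/projected/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "proj", MountPath: "/etc/projected"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "proj",
				VolumeSource: corev1.VolumeSource{Projected: &corev1.ProjectedVolumeSource{
					DefaultMode: &mode,
					Sources: []corev1.VolumeProjection{{
						Secret: &corev1.SecretProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: "demo-secret"},
						},
					}},
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------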
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":233,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:13:42.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Mar 20 21:13:46.640: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Mar 20 21:14:01.733: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:14:01.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2525" for this suite. 
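------------------------------
The Delete Grace Period test deletes a pod with an explicit grace period and then confirms the kubelet observed the termination notice before the deadline, which is the "no pod exists with the name we were looking for" line above. A sketch, assuming a running pod named "demo-pod" in "default":

package main

import (
	"context"
	"fmt"
	"path/filepath"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	// Graceful delete: the pod gets SIGTERM and this many seconds before SIGKILL.
	grace := int64(30)
	if err := cs.CoreV1().Pods("default").Delete(ctx, "demo-pod",
		metav1.DeleteOptions{GracePeriodSeconds: &grace}); err != nil {
		panic(err)
	}
	// Poll until the API server reports NotFound, mirroring the test's check.
	for {
		_, err := cs.CoreV1().Pods("default").Get(ctx, "demo-pod", metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			fmt.Println("pod fully removed")
			return
		}
		if err != nil {
			panic(err)
		}
		time.Sleep(2 * time.Second)
	}
}
------------------------------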
• [SLOW TEST:19.221 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":18,"skipped":251,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:14:01.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 20 21:14:01.842: INFO: Waiting up to 5m0s for pod "pod-3ba5889e-d532-4816-b9b0-9fed58f29390" in namespace "emptydir-3923" to be "success or failure" Mar 20 21:14:01.845: INFO: Pod "pod-3ba5889e-d532-4816-b9b0-9fed58f29390": Phase="Pending", Reason="", readiness=false. Elapsed: 2.514485ms Mar 20 21:14:03.849: INFO: Pod "pod-3ba5889e-d532-4816-b9b0-9fed58f29390": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006846982s Mar 20 21:14:05.852: INFO: Pod "pod-3ba5889e-d532-4816-b9b0-9fed58f29390": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010219475s STEP: Saw pod success Mar 20 21:14:05.852: INFO: Pod "pod-3ba5889e-d532-4816-b9b0-9fed58f29390" satisfied condition "success or failure" Mar 20 21:14:05.854: INFO: Trying to get logs from node jerma-worker2 pod pod-3ba5889e-d532-4816-b9b0-9fed58f29390 container test-container: STEP: delete the pod Mar 20 21:14:05.870: INFO: Waiting for pod pod-3ba5889e-d532-4816-b9b0-9fed58f29390 to disappear Mar 20 21:14:05.881: INFO: Pod pod-3ba5889e-d532-4816-b9b0-9fed58f29390 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:14:05.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3923" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":252,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:14:05.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 20 21:14:10.010: INFO: &Pod{ObjectMeta:{send-events-f650a24e-7b31-4c70-b0a7-7563c6865670 events-282 /api/v1/namespaces/events-282/pods/send-events-f650a24e-7b31-4c70-b0a7-7563c6865670 cf8bd012-966e-4a81-ad5d-e5f9b73285d6 1375973 0 2020-03-20 21:14:05 +0000 UTC map[name:foo time:971869763] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v2xtw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v2xtw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v2xtw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,Toleration
Seconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:14:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:14:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:14:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:14:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.148,StartTime:2020-03-20 21:14:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-20 21:14:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://b72e00497ff7324642d853cfed3e683a3193f4fde14be70c61729e21f515c953,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.148,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Mar 20 21:14:12.014: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 20 21:14:14.019: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:14:14.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-282" for this suite. 
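------------------------------
The two checks above ("Saw scheduler event" / "Saw kubelet event") boil down to listing Events whose involvedObject is the pod and looking at source.component: the scheduler reports Scheduled, the kubelet reports Pulled/Created/Started. A sketch; the pod name is an assumption:

package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	podName := "send-events-demo" // name of a pod you created; illustrative
	evts, err := cs.CoreV1().Events("default").List(context.TODO(),
		metav1.ListOptions{FieldSelector: "involvedObject.name=" + podName})
	if err != nil {
		panic(err)
	}
	for _, e := range evts.Items {
		// source.component distinguishes scheduler events from kubelet events.
		fmt.Printf("%-20s %-15s %s\n", e.Source.Component, e.Reason, e.Message)
	}
}
------------------------------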
• [SLOW TEST:8.150 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":20,"skipped":261,"failed":0} S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:14:14.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-9756 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 20 21:14:14.128: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 20 21:14:40.266: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.149:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9756 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 20 21:14:40.266: INFO: >>> kubeConfig: /root/.kube/config I0320 21:14:40.300622 7 log.go:172] (0xc001b58f20) (0xc001eb8320) Create stream I0320 21:14:40.300660 7 log.go:172] (0xc001b58f20) (0xc001eb8320) Stream added, broadcasting: 1 I0320 21:14:40.306692 7 log.go:172] (0xc001b58f20) Reply frame received for 1 I0320 21:14:40.306771 7 log.go:172] (0xc001b58f20) (0xc002951b80) Create stream I0320 21:14:40.306799 7 log.go:172] (0xc001b58f20) (0xc002951b80) Stream added, broadcasting: 3 I0320 21:14:40.308945 7 log.go:172] (0xc001b58f20) Reply frame received for 3 I0320 21:14:40.308974 7 log.go:172] (0xc001b58f20) (0xc001eb83c0) Create stream I0320 21:14:40.308988 7 log.go:172] (0xc001b58f20) (0xc001eb83c0) Stream added, broadcasting: 5 I0320 21:14:40.310078 7 log.go:172] (0xc001b58f20) Reply frame received for 5 I0320 21:14:40.396609 7 log.go:172] (0xc001b58f20) Data frame received for 3 I0320 21:14:40.396712 7 log.go:172] (0xc002951b80) (3) Data frame handling I0320 21:14:40.396757 7 log.go:172] (0xc002951b80) (3) Data frame sent I0320 21:14:40.396910 7 log.go:172] (0xc001b58f20) Data frame received for 3 I0320 21:14:40.396938 7 log.go:172] (0xc002951b80) (3) Data frame handling I0320 21:14:40.396968 7 log.go:172] (0xc001b58f20) Data frame received for 5 I0320 21:14:40.396996 7 log.go:172] (0xc001eb83c0) (5) Data frame handling I0320 21:14:40.398709 7 log.go:172] (0xc001b58f20) Data frame received for 1 I0320 21:14:40.398744 7 log.go:172] (0xc001eb8320) (1) Data frame handling I0320 21:14:40.398782 7 log.go:172] (0xc001eb8320) (1) Data 
frame sent I0320 21:14:40.398810 7 log.go:172] (0xc001b58f20) (0xc001eb8320) Stream removed, broadcasting: 1 I0320 21:14:40.398840 7 log.go:172] (0xc001b58f20) Go away received I0320 21:14:40.399072 7 log.go:172] (0xc001b58f20) (0xc001eb8320) Stream removed, broadcasting: 1 I0320 21:14:40.399097 7 log.go:172] (0xc001b58f20) (0xc002951b80) Stream removed, broadcasting: 3 I0320 21:14:40.399108 7 log.go:172] (0xc001b58f20) (0xc001eb83c0) Stream removed, broadcasting: 5 Mar 20 21:14:40.399: INFO: Found all expected endpoints: [netserver-0] Mar 20 21:14:40.402: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.163:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9756 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 20 21:14:40.402: INFO: >>> kubeConfig: /root/.kube/config I0320 21:14:40.449318 7 log.go:172] (0xc002a19080) (0xc002951f40) Create stream I0320 21:14:40.449370 7 log.go:172] (0xc002a19080) (0xc002951f40) Stream added, broadcasting: 1 I0320 21:14:40.452447 7 log.go:172] (0xc002a19080) Reply frame received for 1 I0320 21:14:40.452504 7 log.go:172] (0xc002a19080) (0xc001be4000) Create stream I0320 21:14:40.452525 7 log.go:172] (0xc002a19080) (0xc001be4000) Stream added, broadcasting: 3 I0320 21:14:40.453859 7 log.go:172] (0xc002a19080) Reply frame received for 3 I0320 21:14:40.453905 7 log.go:172] (0xc002a19080) (0xc001be40a0) Create stream I0320 21:14:40.453918 7 log.go:172] (0xc002a19080) (0xc001be40a0) Stream added, broadcasting: 5 I0320 21:14:40.455031 7 log.go:172] (0xc002a19080) Reply frame received for 5 I0320 21:14:40.523108 7 log.go:172] (0xc002a19080) Data frame received for 5 I0320 21:14:40.523142 7 log.go:172] (0xc002a19080) Data frame received for 3 I0320 21:14:40.523160 7 log.go:172] (0xc001be4000) (3) Data frame handling I0320 21:14:40.523168 7 log.go:172] (0xc001be4000) (3) Data frame sent I0320 21:14:40.523176 7 log.go:172] (0xc002a19080) Data frame received for 3 I0320 21:14:40.523182 7 log.go:172] (0xc001be4000) (3) Data frame handling I0320 21:14:40.523216 7 log.go:172] (0xc001be40a0) (5) Data frame handling I0320 21:14:40.524968 7 log.go:172] (0xc002a19080) Data frame received for 1 I0320 21:14:40.524998 7 log.go:172] (0xc002951f40) (1) Data frame handling I0320 21:14:40.525018 7 log.go:172] (0xc002951f40) (1) Data frame sent I0320 21:14:40.525042 7 log.go:172] (0xc002a19080) (0xc002951f40) Stream removed, broadcasting: 1 I0320 21:14:40.525071 7 log.go:172] (0xc002a19080) Go away received I0320 21:14:40.525485 7 log.go:172] (0xc002a19080) (0xc002951f40) Stream removed, broadcasting: 1 I0320 21:14:40.525513 7 log.go:172] (0xc002a19080) (0xc001be4000) Stream removed, broadcasting: 3 I0320 21:14:40.525545 7 log.go:172] (0xc002a19080) (0xc001be40a0) Stream removed, broadcasting: 5 Mar 20 21:14:40.525: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:14:40.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9756" for this suite. 
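The node-pod HTTP check above boils down to exec'ing curl against each netserver pod's /hostName endpoint from the host-network test pod. A standalone reproduction of the logged command (minus the trailing grep that strips blank lines), assuming the namespace and pods from this run still existed; note the pod IP is specific to this run:

    kubectl exec --namespace=pod-network-test-9756 host-test-container-pod -c agnhost -- \
      /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.149:8080/hostName"
    # A non-empty response naming the serving pod (e.g. netserver-0) counts as success.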
• [SLOW TEST:26.494 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":262,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:14:40.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1861 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 20 21:14:40.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7558' Mar 20 21:14:40.740: INFO: stderr: "" Mar 20 21:14:40.740: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1866 Mar 20 21:14:40.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7558' Mar 20 21:14:49.480: INFO: stderr: "" Mar 20 21:14:49.480: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:14:49.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7558" for this suite. 
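For reference, the kubectl invocation the test above drives can be run by hand; with --restart=Never, kubectl run creates a bare Pod rather than a workload controller. Sketch against this run's namespace (the --generator flag matches the kubectl vintage in this log and is removed in later releases):

    kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod \
      --restart=Never --generator=run-pod/v1 \
      --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7558
    kubectl get pod e2e-test-httpd-pod --namespace=kubectl-7558      # verify the Pod object exists
    kubectl delete pod e2e-test-httpd-pod --namespace=kubectl-7558   # clean up, as the AfterEach does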
• [SLOW TEST:8.967 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1857 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":22,"skipped":266,"failed":0} SSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:14:49.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:14:49.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-1251" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":23,"skipped":270,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:14:49.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-af903d49-1e72-4209-b143-34227e3c1ba5 STEP: Creating a pod to test consume configMaps Mar 20 21:14:49.680: INFO: Waiting up to 5m0s for pod "pod-configmaps-2bacaf1e-a7da-4ffe-8af3-12a06ca1d8e3" in namespace "configmap-4477" to be "success or failure" Mar 20 21:14:49.699: INFO: Pod "pod-configmaps-2bacaf1e-a7da-4ffe-8af3-12a06ca1d8e3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.212514ms Mar 20 21:14:51.702: INFO: Pod "pod-configmaps-2bacaf1e-a7da-4ffe-8af3-12a06ca1d8e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021230448s Mar 20 21:14:53.705: INFO: Pod "pod-configmaps-2bacaf1e-a7da-4ffe-8af3-12a06ca1d8e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024978042s STEP: Saw pod success Mar 20 21:14:53.705: INFO: Pod "pod-configmaps-2bacaf1e-a7da-4ffe-8af3-12a06ca1d8e3" satisfied condition "success or failure" Mar 20 21:14:53.707: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-2bacaf1e-a7da-4ffe-8af3-12a06ca1d8e3 container configmap-volume-test: STEP: delete the pod Mar 20 21:14:53.737: INFO: Waiting for pod pod-configmaps-2bacaf1e-a7da-4ffe-8af3-12a06ca1d8e3 to disappear Mar 20 21:14:53.750: INFO: Pod pod-configmaps-2bacaf1e-a7da-4ffe-8af3-12a06ca1d8e3 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:14:53.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4477" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":24,"skipped":284,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:14:53.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-14fc81e7-3ddb-4981-90dc-c6585338dff9 STEP: Creating a pod to test consume configMaps Mar 20 21:14:53.847: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e1d8a4ed-ede3-4c48-9a29-61e746f0e5fe" in namespace "projected-9010" to be "success or failure" Mar 20 21:14:53.851: INFO: Pod "pod-projected-configmaps-e1d8a4ed-ede3-4c48-9a29-61e746f0e5fe": Phase="Pending", Reason="", readiness=false. Elapsed: 3.793467ms Mar 20 21:14:55.860: INFO: Pod "pod-projected-configmaps-e1d8a4ed-ede3-4c48-9a29-61e746f0e5fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012378696s Mar 20 21:14:57.864: INFO: Pod "pod-projected-configmaps-e1d8a4ed-ede3-4c48-9a29-61e746f0e5fe": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.0169167s STEP: Saw pod success Mar 20 21:14:57.865: INFO: Pod "pod-projected-configmaps-e1d8a4ed-ede3-4c48-9a29-61e746f0e5fe" satisfied condition "success or failure" Mar 20 21:14:57.868: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-e1d8a4ed-ede3-4c48-9a29-61e746f0e5fe container projected-configmap-volume-test: STEP: delete the pod Mar 20 21:14:57.918: INFO: Waiting for pod pod-projected-configmaps-e1d8a4ed-ede3-4c48-9a29-61e746f0e5fe to disappear Mar 20 21:14:57.923: INFO: Pod pod-projected-configmaps-e1d8a4ed-ede3-4c48-9a29-61e746f0e5fe no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:14:57.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9010" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":345,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:14:57.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:15:09.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3787" for this suite. • [SLOW TEST:11.105 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":278,"completed":26,"skipped":373,"failed":0} [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:15:09.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 20 21:15:09.130: INFO: Waiting up to 5m0s for pod "pod-c31ff465-76bd-4391-83e6-7f41e967c595" in namespace "emptydir-8895" to be "success or failure" Mar 20 21:15:09.139: INFO: Pod "pod-c31ff465-76bd-4391-83e6-7f41e967c595": Phase="Pending", Reason="", readiness=false. Elapsed: 8.75867ms Mar 20 21:15:11.218: INFO: Pod "pod-c31ff465-76bd-4391-83e6-7f41e967c595": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087714816s Mar 20 21:15:13.222: INFO: Pod "pod-c31ff465-76bd-4391-83e6-7f41e967c595": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.091420099s STEP: Saw pod success Mar 20 21:15:13.222: INFO: Pod "pod-c31ff465-76bd-4391-83e6-7f41e967c595" satisfied condition "success or failure" Mar 20 21:15:13.224: INFO: Trying to get logs from node jerma-worker pod pod-c31ff465-76bd-4391-83e6-7f41e967c595 container test-container: STEP: delete the pod Mar 20 21:15:13.344: INFO: Waiting for pod pod-c31ff465-76bd-4391-83e6-7f41e967c595 to disappear Mar 20 21:15:13.355: INFO: Pod pod-c31ff465-76bd-4391-83e6-7f41e967c595 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:15:13.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8895" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":27,"skipped":373,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:15:13.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 20 21:15:13.458: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:15:17.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2896" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":406,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:15:17.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-58817bd1-9ab8-44ac-9c4c-c5b53b5d9915 STEP: Creating configMap with name cm-test-opt-upd-95b689e4-d5a6-4f6a-a19c-1923323a5be7 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-58817bd1-9ab8-44ac-9c4c-c5b53b5d9915 STEP: Updating configmap cm-test-opt-upd-95b689e4-d5a6-4f6a-a19c-1923323a5be7 STEP: Creating configMap with name cm-test-opt-create-ce46d59e-9a43-4d9e-a8e8-4e487aeba4af STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:16:36.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3867" for this suite. 
• [SLOW TEST:78.519 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":417,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:16:36.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-4aad757c-963b-4952-b010-7ef54b86d019 STEP: Creating a pod to test consume secrets Mar 20 21:16:36.148: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ea7eca39-bf2b-4c33-9ae4-c46586f8cc40" in namespace "projected-6553" to be "success or failure" Mar 20 21:16:36.160: INFO: Pod "pod-projected-secrets-ea7eca39-bf2b-4c33-9ae4-c46586f8cc40": Phase="Pending", Reason="", readiness=false. Elapsed: 11.816074ms Mar 20 21:16:38.164: INFO: Pod "pod-projected-secrets-ea7eca39-bf2b-4c33-9ae4-c46586f8cc40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015445572s Mar 20 21:16:40.168: INFO: Pod "pod-projected-secrets-ea7eca39-bf2b-4c33-9ae4-c46586f8cc40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019635972s STEP: Saw pod success Mar 20 21:16:40.168: INFO: Pod "pod-projected-secrets-ea7eca39-bf2b-4c33-9ae4-c46586f8cc40" satisfied condition "success or failure" Mar 20 21:16:40.171: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-ea7eca39-bf2b-4c33-9ae4-c46586f8cc40 container projected-secret-volume-test: STEP: delete the pod Mar 20 21:16:40.203: INFO: Waiting for pod pod-projected-secrets-ea7eca39-bf2b-4c33-9ae4-c46586f8cc40 to disappear Mar 20 21:16:40.219: INFO: Pod pod-projected-secrets-ea7eca39-bf2b-4c33-9ae4-c46586f8cc40 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:16:40.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6553" for this suite. 
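"With mappings" in the projected-secret test above means Secret keys are remapped to custom file paths inside the volume via items. A small sketch with illustrative secret, key, and path names:

    kubectl create secret generic demo-secret --from-literal=data-1=value-1   # illustrative
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-secret-demo
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: busybox
        command: ["sh", "-c", "cat /etc/creds/new-path-data-1"]
        volumeMounts:
        - name: creds
          mountPath: /etc/creds
      volumes:
      - name: creds
        projected:
          sources:
          - secret:
              name: demo-secret
              items:
              - key: data-1               # key in the Secret
                path: new-path-data-1     # file name under the mount point
    EOF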
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":432,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:16:40.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:16:40.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5528" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":31,"skipped":460,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:16:40.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:16:53.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5349" for this suite. • [SLOW TEST:13.211 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":32,"skipped":468,"failed":0} [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:16:53.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 20 21:16:57.727: INFO: Waiting up to 5m0s for pod "client-envvars-2b69328f-b771-4621-ad7a-40fdffc1e5f3" in namespace "pods-4882" to be "success or failure" Mar 20 21:16:57.747: INFO: Pod "client-envvars-2b69328f-b771-4621-ad7a-40fdffc1e5f3": Phase="Pending", Reason="", readiness=false. Elapsed: 19.388879ms Mar 20 21:16:59.751: INFO: Pod "client-envvars-2b69328f-b771-4621-ad7a-40fdffc1e5f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023379455s Mar 20 21:17:01.755: INFO: Pod "client-envvars-2b69328f-b771-4621-ad7a-40fdffc1e5f3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027734445s STEP: Saw pod success Mar 20 21:17:01.755: INFO: Pod "client-envvars-2b69328f-b771-4621-ad7a-40fdffc1e5f3" satisfied condition "success or failure" Mar 20 21:17:01.758: INFO: Trying to get logs from node jerma-worker pod client-envvars-2b69328f-b771-4621-ad7a-40fdffc1e5f3 container env3cont: STEP: delete the pod Mar 20 21:17:01.778: INFO: Waiting for pod client-envvars-2b69328f-b771-4621-ad7a-40fdffc1e5f3 to disappear Mar 20 21:17:01.782: INFO: Pod client-envvars-2b69328f-b771-4621-ad7a-40fdffc1e5f3 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:17:01.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4882" for this suite. • [SLOW TEST:8.202 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":468,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:17:01.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 20 21:17:01.855: INFO: Waiting up to 5m0s for pod "downwardapi-volume-70b19273-6f0b-4777-bde5-b2d672ad8f43" in namespace "projected-6944" to be "success or failure" Mar 20 21:17:01.872: INFO: Pod "downwardapi-volume-70b19273-6f0b-4777-bde5-b2d672ad8f43": Phase="Pending", Reason="", readiness=false. Elapsed: 16.216369ms Mar 20 21:17:03.890: INFO: Pod "downwardapi-volume-70b19273-6f0b-4777-bde5-b2d672ad8f43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034612294s Mar 20 21:17:05.902: INFO: Pod "downwardapi-volume-70b19273-6f0b-4777-bde5-b2d672ad8f43": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.046607677s STEP: Saw pod success Mar 20 21:17:05.902: INFO: Pod "downwardapi-volume-70b19273-6f0b-4777-bde5-b2d672ad8f43" satisfied condition "success or failure" Mar 20 21:17:05.905: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-70b19273-6f0b-4777-bde5-b2d672ad8f43 container client-container: STEP: delete the pod Mar 20 21:17:05.933: INFO: Waiting for pod downwardapi-volume-70b19273-6f0b-4777-bde5-b2d672ad8f43 to disappear Mar 20 21:17:05.943: INFO: Pod downwardapi-volume-70b19273-6f0b-4777-bde5-b2d672ad8f43 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:17:05.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6944" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":34,"skipped":494,"failed":0} ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:17:05.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command Mar 20 21:17:06.018: INFO: Waiting up to 5m0s for pod "client-containers-6635b144-2728-4a78-848d-114b517659c2" in namespace "containers-5076" to be "success or failure" Mar 20 21:17:06.022: INFO: Pod "client-containers-6635b144-2728-4a78-848d-114b517659c2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.600903ms Mar 20 21:17:08.046: INFO: Pod "client-containers-6635b144-2728-4a78-848d-114b517659c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028264726s Mar 20 21:17:10.051: INFO: Pod "client-containers-6635b144-2728-4a78-848d-114b517659c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032656336s STEP: Saw pod success Mar 20 21:17:10.051: INFO: Pod "client-containers-6635b144-2728-4a78-848d-114b517659c2" satisfied condition "success or failure" Mar 20 21:17:10.054: INFO: Trying to get logs from node jerma-worker pod client-containers-6635b144-2728-4a78-848d-114b517659c2 container test-container: STEP: delete the pod Mar 20 21:17:10.084: INFO: Waiting for pod client-containers-6635b144-2728-4a78-848d-114b517659c2 to disappear Mar 20 21:17:10.100: INFO: Pod client-containers-6635b144-2728-4a78-848d-114b517659c2 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:17:10.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5076" for this suite. 
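In the override-command test above, setting spec.containers[].command replaces the image's ENTRYPOINT (and args would replace CMD). A minimal sketch with illustrative values:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: override-entrypoint-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command: ["/bin/echo", "entrypoint", "overridden"]   # replaces the image ENTRYPOINT
    EOF
    kubectl logs override-entrypoint-demo   # should print: entrypoint overridden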
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":35,"skipped":494,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:17:10.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 20 21:17:10.222: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-34c36417-5608-49a2-a92d-b648ca41d39b" in namespace "security-context-test-8225" to be "success or failure" Mar 20 21:17:10.226: INFO: Pod "busybox-readonly-false-34c36417-5608-49a2-a92d-b648ca41d39b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.848169ms Mar 20 21:17:12.230: INFO: Pod "busybox-readonly-false-34c36417-5608-49a2-a92d-b648ca41d39b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007736109s Mar 20 21:17:14.235: INFO: Pod "busybox-readonly-false-34c36417-5608-49a2-a92d-b648ca41d39b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012281952s Mar 20 21:17:14.235: INFO: Pod "busybox-readonly-false-34c36417-5608-49a2-a92d-b648ca41d39b" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:17:14.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8225" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":36,"skipped":505,"failed":0} SSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:17:14.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-6547 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-6547 I0320 21:17:14.417569 7 runners.go:189] Created replication controller with name: externalname-service, namespace: services-6547, replica count: 2 I0320 21:17:17.468087 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0320 21:17:20.468393 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 20 21:17:20.468: INFO: Creating new exec pod Mar 20 21:17:25.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6547 execpodqm8v8 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 20 21:17:25.718: INFO: stderr: "I0320 21:17:25.648475 214 log.go:172] (0xc000854a50) (0xc0008720a0) Create stream\nI0320 21:17:25.648544 214 log.go:172] (0xc000854a50) (0xc0008720a0) Stream added, broadcasting: 1\nI0320 21:17:25.652189 214 log.go:172] (0xc000854a50) Reply frame received for 1\nI0320 21:17:25.652237 214 log.go:172] (0xc000854a50) (0xc0005e7ae0) Create stream\nI0320 21:17:25.652344 214 log.go:172] (0xc000854a50) (0xc0005e7ae0) Stream added, broadcasting: 3\nI0320 21:17:25.653615 214 log.go:172] (0xc000854a50) Reply frame received for 3\nI0320 21:17:25.653675 214 log.go:172] (0xc000854a50) (0xc000422000) Create stream\nI0320 21:17:25.653693 214 log.go:172] (0xc000854a50) (0xc000422000) Stream added, broadcasting: 5\nI0320 21:17:25.654722 214 log.go:172] (0xc000854a50) Reply frame received for 5\nI0320 21:17:25.710907 214 log.go:172] (0xc000854a50) Data frame received for 5\nI0320 21:17:25.710937 214 log.go:172] (0xc000422000) (5) Data frame handling\nI0320 21:17:25.710954 214 log.go:172] (0xc000422000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0320 21:17:25.711589 214 log.go:172] (0xc000854a50) Data frame received for 5\nI0320 21:17:25.711611 214 log.go:172] (0xc000422000) (5) Data frame handling\nI0320 21:17:25.711629 214 log.go:172] (0xc000422000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] 
succeeded!\nI0320 21:17:25.711977 214 log.go:172] (0xc000854a50) Data frame received for 5\nI0320 21:17:25.712094 214 log.go:172] (0xc000422000) (5) Data frame handling\nI0320 21:17:25.712133 214 log.go:172] (0xc000854a50) Data frame received for 3\nI0320 21:17:25.712146 214 log.go:172] (0xc0005e7ae0) (3) Data frame handling\nI0320 21:17:25.714019 214 log.go:172] (0xc000854a50) Data frame received for 1\nI0320 21:17:25.714037 214 log.go:172] (0xc0008720a0) (1) Data frame handling\nI0320 21:17:25.714047 214 log.go:172] (0xc0008720a0) (1) Data frame sent\nI0320 21:17:25.714060 214 log.go:172] (0xc000854a50) (0xc0008720a0) Stream removed, broadcasting: 1\nI0320 21:17:25.714223 214 log.go:172] (0xc000854a50) Go away received\nI0320 21:17:25.714338 214 log.go:172] (0xc000854a50) (0xc0008720a0) Stream removed, broadcasting: 1\nI0320 21:17:25.714358 214 log.go:172] (0xc000854a50) (0xc0005e7ae0) Stream removed, broadcasting: 3\nI0320 21:17:25.714369 214 log.go:172] (0xc000854a50) (0xc000422000) Stream removed, broadcasting: 5\n" Mar 20 21:17:25.718: INFO: stdout: "" Mar 20 21:17:25.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6547 execpodqm8v8 -- /bin/sh -x -c nc -zv -t -w 2 10.104.208.103 80' Mar 20 21:17:25.922: INFO: stderr: "I0320 21:17:25.844421 236 log.go:172] (0xc0009c0790) (0xc000b2a000) Create stream\nI0320 21:17:25.844475 236 log.go:172] (0xc0009c0790) (0xc000b2a000) Stream added, broadcasting: 1\nI0320 21:17:25.848366 236 log.go:172] (0xc0009c0790) Reply frame received for 1\nI0320 21:17:25.848406 236 log.go:172] (0xc0009c0790) (0xc00065dae0) Create stream\nI0320 21:17:25.848423 236 log.go:172] (0xc0009c0790) (0xc00065dae0) Stream added, broadcasting: 3\nI0320 21:17:25.849540 236 log.go:172] (0xc0009c0790) Reply frame received for 3\nI0320 21:17:25.849582 236 log.go:172] (0xc0009c0790) (0xc00065dcc0) Create stream\nI0320 21:17:25.849597 236 log.go:172] (0xc0009c0790) (0xc00065dcc0) Stream added, broadcasting: 5\nI0320 21:17:25.850473 236 log.go:172] (0xc0009c0790) Reply frame received for 5\nI0320 21:17:25.915224 236 log.go:172] (0xc0009c0790) Data frame received for 3\nI0320 21:17:25.915269 236 log.go:172] (0xc00065dae0) (3) Data frame handling\nI0320 21:17:25.915295 236 log.go:172] (0xc0009c0790) Data frame received for 5\nI0320 21:17:25.915306 236 log.go:172] (0xc00065dcc0) (5) Data frame handling\nI0320 21:17:25.915318 236 log.go:172] (0xc00065dcc0) (5) Data frame sent\nI0320 21:17:25.915336 236 log.go:172] (0xc0009c0790) Data frame received for 5\nI0320 21:17:25.915353 236 log.go:172] (0xc00065dcc0) (5) Data frame handling\n+ nc -zv -t -w 2 10.104.208.103 80\nConnection to 10.104.208.103 80 port [tcp/http] succeeded!\nI0320 21:17:25.916801 236 log.go:172] (0xc0009c0790) Data frame received for 1\nI0320 21:17:25.916823 236 log.go:172] (0xc000b2a000) (1) Data frame handling\nI0320 21:17:25.916838 236 log.go:172] (0xc000b2a000) (1) Data frame sent\nI0320 21:17:25.916850 236 log.go:172] (0xc0009c0790) (0xc000b2a000) Stream removed, broadcasting: 1\nI0320 21:17:25.916868 236 log.go:172] (0xc0009c0790) Go away received\nI0320 21:17:25.917602 236 log.go:172] (0xc0009c0790) (0xc000b2a000) Stream removed, broadcasting: 1\nI0320 21:17:25.917627 236 log.go:172] (0xc0009c0790) (0xc00065dae0) Stream removed, broadcasting: 3\nI0320 21:17:25.917644 236 log.go:172] (0xc0009c0790) (0xc00065dcc0) Stream removed, broadcasting: 5\n" Mar 20 21:17:25.922: INFO: stdout: "" Mar 20 21:17:25.922: INFO: Cleaning up the ExternalName to ClusterIP test 
service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:17:25.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6547" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.745 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":37,"skipped":508,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:17:25.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Mar 20 21:17:26.081: INFO: >>> kubeConfig: /root/.kube/config Mar 20 21:17:28.015: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:17:38.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2857" for this suite. 
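The Services test above flips externalname-service from type ExternalName to ClusterIP, backs it with a replication controller, and verifies reachability with nc from an exec pod. The verification step is reproduced from the log; the patch is only one plausible way to flip the type (the real test updates the Service through the API, clearing externalName and adding ports; the port values here are illustrative):

    # Hypothetical type change:
    kubectl patch service externalname-service --namespace=services-6547 --type=merge \
      -p '{"spec":{"type":"ClusterIP","externalName":"","ports":[{"port":80,"targetPort":9376}]}}'
    # Reachability check from the exec pod, as in the log:
    kubectl exec --namespace=services-6547 execpodqm8v8 -- \
      /bin/sh -x -c 'nc -zv -t -w 2 externalname-service 80'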
• [SLOW TEST:12.576 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":38,"skipped":513,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:17:38.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0320 21:17:48.635238 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 20 21:17:48.635: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:17:48.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8474" for this suite. 
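"Not orphaning" in the garbage-collector test above corresponds to a cascading delete: removing the ReplicationController with propagationPolicy Background (or Foreground) lets the garbage collector delete the pods it created. A sketch (the RC name is illustrative; the --cascade=background spelling is for newer kubectl, older releases used --cascade=true):

    kubectl delete rc simpletest-rc --namespace=gc-8474 --cascade=background
    # Equivalent against the API: send a DeleteOptions body with
    # "propagationPolicy": "Background" on the DELETE request.
    kubectl get pods --namespace=gc-8474   # pods created by the RC are garbage collected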
• [SLOW TEST:10.078 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":39,"skipped":524,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:17:48.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 20 21:18:08.723: INFO: Container started at 2020-03-20 21:17:50 +0000 UTC, pod became ready at 2020-03-20 21:18:07 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:18:08.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7114" for this suite. 
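The readiness-probe case above checks two things: the pod must not report Ready before the probe's initial delay elapses, and a probe that keeps succeeding must never trigger a restart. The kubelet simply does not run the probe before initialDelaySeconds, so Ready stays false until then. Sketch with illustrative timings:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: readiness-delay-demo
    spec:
      containers:
      - name: probe-demo
        image: busybox
        command: ["sh", "-c", "sleep 600"]
        readinessProbe:
          exec:
            command: ["true"]        # always succeeds once probing starts
          initialDelaySeconds: 15    # the pod cannot be Ready before this elapses
          periodSeconds: 5
    EOF
    kubectl get pod readiness-delay-demo -w   # READY flips to 1/1 only after the delay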
• [SLOW TEST:20.089 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":532,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:18:08.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 20 21:18:11.810: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:18:11.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9526" for this suite. 
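FallbackToLogsOnError, exercised above, means that when a container fails without writing /dev/termination-log, the tail of its log is used as the termination message (here the test expects the literal DONE it printed). Sketch, names illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: term-msg-demo
    spec:
      restartPolicy: Never
      containers:
      - name: term-demo
        image: busybox
        command: ["sh", "-c", "echo DONE; exit 1"]     # fail after logging
        terminationMessagePolicy: FallbackToLogsOnError
    EOF
    kubectl get pod term-msg-demo \
      -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'   # expect: DONE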
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":598,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:18:11.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Mar 20 21:18:11.906: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2666 /api/v1/namespaces/watch-2666/configmaps/e2e-watch-test-configmap-a 61dcc083-e06d-4d09-9315-50fd9b084e50 1377323 0 2020-03-20 21:18:11 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 20 21:18:11.907: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2666 /api/v1/namespaces/watch-2666/configmaps/e2e-watch-test-configmap-a 61dcc083-e06d-4d09-9315-50fd9b084e50 1377323 0 2020-03-20 21:18:11 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Mar 20 21:18:21.915: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2666 /api/v1/namespaces/watch-2666/configmaps/e2e-watch-test-configmap-a 61dcc083-e06d-4d09-9315-50fd9b084e50 1377372 0 2020-03-20 21:18:11 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 20 21:18:21.915: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2666 /api/v1/namespaces/watch-2666/configmaps/e2e-watch-test-configmap-a 61dcc083-e06d-4d09-9315-50fd9b084e50 1377372 0 2020-03-20 21:18:11 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Mar 20 21:18:31.923: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2666 /api/v1/namespaces/watch-2666/configmaps/e2e-watch-test-configmap-a 61dcc083-e06d-4d09-9315-50fd9b084e50 1377402 0 2020-03-20 21:18:11 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 20 21:18:31.923: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2666 
/api/v1/namespaces/watch-2666/configmaps/e2e-watch-test-configmap-a 61dcc083-e06d-4d09-9315-50fd9b084e50 1377402 0 2020-03-20 21:18:11 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Mar 20 21:18:41.931: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2666 /api/v1/namespaces/watch-2666/configmaps/e2e-watch-test-configmap-a 61dcc083-e06d-4d09-9315-50fd9b084e50 1377434 0 2020-03-20 21:18:11 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 20 21:18:41.931: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2666 /api/v1/namespaces/watch-2666/configmaps/e2e-watch-test-configmap-a 61dcc083-e06d-4d09-9315-50fd9b084e50 1377434 0 2020-03-20 21:18:11 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Mar 20 21:18:51.939: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2666 /api/v1/namespaces/watch-2666/configmaps/e2e-watch-test-configmap-b 55c8c135-0503-4b31-ace6-b489ffb5742b 1377464 0 2020-03-20 21:18:51 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 20 21:18:51.939: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2666 /api/v1/namespaces/watch-2666/configmaps/e2e-watch-test-configmap-b 55c8c135-0503-4b31-ace6-b489ffb5742b 1377464 0 2020-03-20 21:18:51 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Mar 20 21:19:01.945: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2666 /api/v1/namespaces/watch-2666/configmaps/e2e-watch-test-configmap-b 55c8c135-0503-4b31-ace6-b489ffb5742b 1377494 0 2020-03-20 21:18:51 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 20 21:19:01.945: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2666 /api/v1/namespaces/watch-2666/configmaps/e2e-watch-test-configmap-b 55c8c135-0503-4b31-ace6-b489ffb5742b 1377494 0 2020-03-20 21:18:51 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:19:11.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2666" for this suite. 
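Each of the three watchers in the test above is just a ConfigMap watch with a different label selector. A hedged sketch of one watcher using client-go; the function name watchConfigMaps is a hypothetical helper, and the context-taking Watch signature assumes client-go v0.18+:

    package sketch

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // watchConfigMaps streams ADDED/MODIFIED/DELETED events for ConfigMaps
    // matching the given label selector, e.g. the test's "label A" watcher:
    // "watch-this-configmap=multiple-watchers-A".
    func watchConfigMaps(cs kubernetes.Interface, ns, selector string) error {
        w, err := cs.CoreV1().ConfigMaps(ns).Watch(context.TODO(),
            metav1.ListOptions{LabelSelector: selector})
        if err != nil {
            return err
        }
        defer w.Stop()
        for event := range w.ResultChan() {
            if cm, ok := event.Object.(*corev1.ConfigMap); ok {
                fmt.Printf("Got : %s %s\n", event.Type, cm.Name)
            }
        }
        return nil
    }

The "A or B" watcher would pass a set-based selector instead, e.g. "watch-this-configmap in (multiple-watchers-A,multiple-watchers-B)", which is why it sees every event logged twice above.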
• [SLOW TEST:60.124 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":42,"skipped":609,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:19:11.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:19:16.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1096" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":619,"failed":0} S ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:19:16.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3016.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-3016.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3016.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3016.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-3016.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3016.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 20 21:19:22.203: INFO: DNS probes using dns-3016/dns-test-13eb059c-e0e4-4021-bf1c-1b06ac85eb67 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:19:22.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3016" for this suite. • [SLOW TEST:6.177 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":44,"skipped":620,"failed":0} S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:19:22.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 20 21:19:22.332: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 20 21:19:22.668: INFO: Waiting for terminating namespaces to be deleted... 
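Before the scheduler predicate logs continue, a brief note on the DNS probe script a few records up: the awk pipeline turns a pod IP a.b.c.d into the pod A record "a-b-c-d.<namespace>.pod.cluster.local", which the dig probes then resolve over both UDP and TCP. A Go sketch of the same derivation; podARecord is a hypothetical helper, the sample IP is illustrative, and the lookup is only meaningful from inside the cluster:

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // podARecord reproduces the awk pipeline in the probe script: pod IP
    // a.b.c.d in namespace ns maps to "a-b-c-d.ns.pod.cluster.local".
    func podARecord(podIP, namespace string) string {
        return fmt.Sprintf("%s.%s.pod.cluster.local",
            strings.ReplaceAll(podIP, ".", "-"), namespace)
    }

    func main() {
        name := podARecord("10.244.1.171", "dns-3016")
        // In-cluster this resolves back to the pod IP, which is what the
        // dig probes assert; getent additionally checks /etc/hosts entries.
        addrs, err := net.LookupHost(name)
        fmt.Println(name, addrs, err)
    }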
Mar 20 21:19:22.671: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Mar 20 21:19:22.687: INFO: busybox-scheduling-1758d1ed-abac-4b9a-97a5-8d70264bfb2e from kubelet-test-1096 started at 2020-03-20 21:19:12 +0000 UTC (1 container status recorded) Mar 20 21:19:22.687: INFO: Container busybox-scheduling-1758d1ed-abac-4b9a-97a5-8d70264bfb2e ready: true, restart count 0 Mar 20 21:19:22.687: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 20 21:19:22.687: INFO: Container kindnet-cni ready: true, restart count 0 Mar 20 21:19:22.687: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 20 21:19:22.687: INFO: Container kube-proxy ready: true, restart count 0 Mar 20 21:19:22.687: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Mar 20 21:19:22.712: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 20 21:19:22.712: INFO: Container kindnet-cni ready: true, restart count 0 Mar 20 21:19:22.712: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 20 21:19:22.712: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15fe1fdf48180b45], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:19:23.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2629" for this suite.
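The FailedScheduling event above comes from a pod whose NodeSelector names a label no node carries. A minimal sketch of such a pod in corev1 types; the selector key/value and the pause image are illustrative assumptions, with only the pod name restricted-pod taken from the event:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // restrictedPod builds a pod that can never schedule: no node carries
    // the selector label, so the scheduler reports "0/3 nodes are
    // available: 3 node(s) didn't match node selector."
    func restrictedPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
            Spec: corev1.PodSpec{
                NodeSelector: map[string]string{"label": "nonempty"}, // assumed; matches no node
                Containers: []corev1.Container{{
                    Name:  "pause",
                    Image: "k8s.gcr.io/pause:3.1", // illustrative image
                }},
            },
        }
    }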
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":45,"skipped":621,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:19:23.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-2076 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-2076 STEP: Creating statefulset with conflicting port in namespace statefulset-2076 STEP: Waiting until pod test-pod starts running in namespace statefulset-2076 STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-2076 Mar 20 21:19:29.875: INFO: Observed stateful pod in namespace: statefulset-2076, name: ss-0, uid: f03ef9f2-db9e-4a26-88d6-a359cfa27f3a, status phase: Failed. Waiting for statefulset controller to delete. Mar 20 21:19:29.890: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-2076 STEP: Removing pod with conflicting port in namespace statefulset-2076 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-2076 and is in the Running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 20 21:19:33.974: INFO: Deleting all statefulset in ns statefulset-2076 Mar 20 21:19:33.977: INFO: Scaling statefulset ss to 0 Mar 20 21:19:44.010: INFO: Waiting for statefulset status.replicas updated to 0 Mar 20 21:19:44.013: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:19:44.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2076" for this suite.
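The teardown just logged (scale ss to 0, wait for status.replicas to drain, delete the set) maps onto a handful of client-go calls. A hedged sketch; scaleDownAndDelete is a hypothetical helper, the timings are arbitrary, and it assumes client-go v0.18+ plus the deprecated-but-available wait.PollImmediate:

    package sketch

    import (
        "context"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // scaleDownAndDelete mirrors the teardown above: scale the StatefulSet
    // to zero, wait for status.replicas to drain, then delete it.
    func scaleDownAndDelete(cs kubernetes.Interface, ns, name string) error {
        ctx := context.TODO()
        ss, err := cs.AppsV1().StatefulSets(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        zero := int32(0)
        ss.Spec.Replicas = &zero
        if _, err := cs.AppsV1().StatefulSets(ns).Update(ctx, ss, metav1.UpdateOptions{}); err != nil {
            return err
        }
        // Poll until the controller reports no replicas (the "Waiting for
        // statefulset status.replicas updated to 0" step above).
        if err := wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
            cur, err := cs.AppsV1().StatefulSets(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            return cur.Status.Replicas == 0, nil
        }); err != nil {
            return err
        }
        return cs.AppsV1().StatefulSets(ns).Delete(ctx, name, metav1.DeleteOptions{})
    }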
• [SLOW TEST:20.292 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":46,"skipped":641,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:19:44.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 20 21:19:44.105: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 20 21:19:44.115: INFO: Waiting for terminating namespaces to be deleted... Mar 20 21:19:44.118: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Mar 20 21:19:44.124: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 20 21:19:44.124: INFO: Container kindnet-cni ready: true, restart count 0 Mar 20 21:19:44.124: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 20 21:19:44.124: INFO: Container kube-proxy ready: true, restart count 0 Mar 20 21:19:44.124: INFO: busybox-scheduling-1758d1ed-abac-4b9a-97a5-8d70264bfb2e from kubelet-test-1096 started at 2020-03-20 21:19:12 +0000 UTC (1 container status recorded) Mar 20 21:19:44.124: INFO: Container busybox-scheduling-1758d1ed-abac-4b9a-97a5-8d70264bfb2e ready: true, restart count 0 Mar 20 21:19:44.124: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Mar 20 21:19:44.130: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 20 21:19:44.130: INFO: Container kube-proxy ready: true, restart count 0 Mar 20 21:19:44.130: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 20 21:19:44.130: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-4ebd118b-65bb-459f-bda5-a67deabd9f1f 90 STEP: Trying to create a pod (pod1) with hostPort 54321 and hostIP 127.0.0.1 and expect it to be scheduled STEP: Trying to create another pod (pod2) with hostPort 54321 but hostIP 127.0.0.2 on the node on which pod1 resides and expect it to be scheduled STEP: Trying to create a third pod (pod3) with hostPort 54321 and hostIP 127.0.0.2 but using UDP protocol on the node on which pod2 resides STEP: removing the label kubernetes.io/e2e-4ebd118b-65bb-459f-bda5-a67deabd9f1f off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-4ebd118b-65bb-459f-bda5-a67deabd9f1f [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:20:00.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5170" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:16.327 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":47,"skipped":660,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:20:00.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info Mar 20 21:20:00.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Mar 20 21:20:00.481: INFO: stderr: "" Mar 20 21:20:00.481: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:20:00.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4780" for this suite.
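Looking back at the hostPort predicate test above: the scheduler rejects a pod only when the complete (hostIP, protocol, hostPort) triple collides with a pod already on the node, which is why all three pods could land on jerma-worker. A sketch of the three port declarations in corev1 types; hostPortFor is a hypothetical helper, and only the port number, addresses, and protocols come from the log:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // hostPortFor builds the host-port declaration shared by pod1, pod2,
    // and pod3 above; only the (hostIP, protocol) part varies.
    func hostPortFor(hostIP string, proto corev1.Protocol) corev1.ContainerPort {
        return corev1.ContainerPort{
            ContainerPort: 54321,
            HostPort:      54321,
            HostIP:        hostIP,
            Protocol:      proto,
        }
    }

    var (
        pod1Port = hostPortFor("127.0.0.1", corev1.ProtocolTCP) // pod1
        pod2Port = hostPortFor("127.0.0.2", corev1.ProtocolTCP) // pod2: same port, different hostIP
        pod3Port = hostPortFor("127.0.0.2", corev1.ProtocolUDP) // pod3: same hostIP, different protocol
    )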
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":48,"skipped":661,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:20:00.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:20:07.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9995" for this suite. • [SLOW TEST:7.103 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":278,"completed":49,"skipped":686,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:20:07.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-ee6d536e-1502-4f7b-986f-e371c6a3a871 STEP: Creating secret with name s-test-opt-upd-9fc523ea-da14-4fee-b24e-082d0283ddf0 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-ee6d536e-1502-4f7b-986f-e371c6a3a871 STEP: Updating secret s-test-opt-upd-9fc523ea-da14-4fee-b24e-082d0283ddf0 STEP: Creating secret with name s-test-opt-create-14524105-7f93-4573-82f3-392b3430194a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:21:26.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8812" for this suite. • [SLOW TEST:78.563 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":702,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:21:26.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-9513 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Mar 20 21:21:26.277: INFO: Found 0 stateful pods, waiting for 3 Mar 20 21:21:36.282: INFO: Waiting for pod ss2-0 to enter Running - 
Ready=true, currently Running - Ready=true Mar 20 21:21:36.282: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 20 21:21:36.282: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 20 21:21:36.308: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 20 21:21:46.380: INFO: Updating stateful set ss2 Mar 20 21:21:46.407: INFO: Waiting for Pod statefulset-9513/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 20 21:21:56.505: INFO: Waiting for Pod statefulset-9513/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Mar 20 21:22:06.576: INFO: Found 2 stateful pods, waiting for 3 Mar 20 21:22:16.581: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 20 21:22:16.581: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 20 21:22:16.581: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Mar 20 21:22:16.606: INFO: Updating stateful set ss2 Mar 20 21:22:16.646: INFO: Waiting for Pod statefulset-9513/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 20 21:22:26.654: INFO: Waiting for Pod statefulset-9513/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 20 21:22:36.672: INFO: Updating stateful set ss2 Mar 20 21:22:36.684: INFO: Waiting for StatefulSet statefulset-9513/ss2 to complete update Mar 20 21:22:36.684: INFO: Waiting for Pod statefulset-9513/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 20 21:22:46.692: INFO: Waiting for StatefulSet statefulset-9513/ss2 to complete update Mar 20 21:22:46.692: INFO: Waiting for Pod statefulset-9513/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 20 21:22:56.691: INFO: Deleting all statefulset in ns statefulset-9513 Mar 20 21:22:56.715: INFO: Scaling statefulset ss2 to 0 Mar 20 21:23:16.732: INFO: Waiting for statefulset status.replicas updated to 0 Mar 20 21:23:16.734: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:23:16.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9513" for this suite. 
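The canary and phased rollout above are driven entirely by the StatefulSet's RollingUpdate partition: pods with ordinal >= partition move to the update revision while pods below it keep the current revision. A hedged client-go sketch of the partition update; setPartition is a hypothetical helper, and it assumes client-go v0.18+:

    package sketch

    import (
        "context"

        appsv1 "k8s.io/api/apps/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // setPartition points the StatefulSet's RollingUpdate strategy at a
    // new partition; the controller then rolls only ordinals >= partition
    // to the update revision.
    func setPartition(cs kubernetes.Interface, ns, name string, partition int32) error {
        ctx := context.TODO()
        ss, err := cs.AppsV1().StatefulSets(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        ss.Spec.UpdateStrategy = appsv1.StatefulSetUpdateStrategy{
            Type: appsv1.RollingUpdateStatefulSetStrategyType,
            RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
                Partition: &partition,
            },
        }
        _, err = cs.AppsV1().StatefulSets(ns).Update(ctx, ss, metav1.UpdateOptions{})
        return err
    }

Setting the partition higher than the replica count reproduces the "Not applying an update" step; a partition of 2 updates only ss2-2 (the canary), and lowering it further phases the rollout across ss2-1 and ss2-0, matching the revision waits logged above.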
• [SLOW TEST:110.598 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":51,"skipped":728,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:23:16.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 20 21:23:16.831: INFO: Creating deployment "webserver-deployment" Mar 20 21:23:16.846: INFO: Waiting for observed generation 1 Mar 20 21:23:18.856: INFO: Waiting for all required pods to come up Mar 20 21:23:18.861: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Mar 20 21:23:28.871: INFO: Waiting for deployment "webserver-deployment" to complete Mar 20 21:23:28.878: INFO: Updating deployment "webserver-deployment" with a non-existent image Mar 20 21:23:28.884: INFO: Updating deployment webserver-deployment Mar 20 21:23:28.885: INFO: Waiting for observed generation 2 Mar 20 21:23:30.893: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Mar 20 21:23:30.896: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Mar 20 21:23:30.898: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 20 21:23:30.906: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Mar 20 21:23:30.906: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Mar 20 21:23:30.909: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 20 21:23:30.915: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Mar 20 21:23:30.915: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Mar 20 21:23:30.920: INFO: Updating deployment webserver-deployment Mar 20 21:23:30.920: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Mar 20 21:23:31.203: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Mar 20 21:23:31.327: INFO: Verifying that second rollout's replicaset has 
.spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 20 21:23:33.905: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-9106 /apis/apps/v1/namespaces/deployment-9106/deployments/webserver-deployment a3847b42-8939-4e73-bdf3-9c746b307e3d 1379177 3 2020-03-20 21:23:16 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003c8d5d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-20 21:23:31 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-03-20 21:23:31 +0000 UTC,LastTransitionTime:2020-03-20 21:23:16 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Mar 20 21:23:34.045: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-9106 /apis/apps/v1/namespaces/deployment-9106/replicasets/webserver-deployment-c7997dcc8 3a096131-df90-426b-9264-dde5b5f0cebb 1379175 3 2020-03-20 21:23:28 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment a3847b42-8939-4e73-bdf3-9c746b307e3d 0xc003c8daa7 0xc003c8daa8}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003c8db18 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 20 21:23:34.045: INFO: All old ReplicaSets of Deployment "webserver-deployment": Mar 20 21:23:34.045: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-9106 /apis/apps/v1/namespaces/deployment-9106/replicasets/webserver-deployment-595b5b9587 65d86682-735c-49a1-a546-035a633c3d06 1379169 3 2020-03-20 21:23:16 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment a3847b42-8939-4e73-bdf3-9c746b307e3d 0xc003c8d9e7 0xc003c8d9e8}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003c8da48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Mar 20 21:23:34.052: INFO: Pod "webserver-deployment-595b5b9587-47gj7" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-47gj7 webserver-deployment-595b5b9587- deployment-9106 /api/v1/namespaces/deployment-9106/pods/webserver-deployment-595b5b9587-47gj7 d1966ff3-b219-456b-a979-52bcefed358d 1378988 0 2020-03-20 21:23:16 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 65d86682-735c-49a1-a546-035a633c3d06 0xc003a26ab7 0xc003a26ab8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghgn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghgn6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghgn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.171,StartTime:2020-03-20 21:23:17 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-20 21:23:24 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e21bff93c410c66e65a863ea516eae8f8a63679216026d7868db24198b9a3c9c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.171,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 20 21:23:34.052: INFO: Pod "webserver-deployment-595b5b9587-4kq6r" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4kq6r webserver-deployment-595b5b9587- deployment-9106 /api/v1/namespaces/deployment-9106/pods/webserver-deployment-595b5b9587-4kq6r 9df3f362-6f3a-48ad-80e8-fa5cc9236343 1379168 0 2020-03-20 21:23:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 65d86682-735c-49a1-a546-035a633c3d06 0xc003a26c37 0xc003a26c38}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghgn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghgn6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghgn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-20 21:23:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 20 21:23:34.052: INFO: Pod "webserver-deployment-595b5b9587-4rh58" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4rh58 webserver-deployment-595b5b9587- deployment-9106 /api/v1/namespaces/deployment-9106/pods/webserver-deployment-595b5b9587-4rh58 a020171a-cd84-4fc9-b8e1-36b3c8347d91 1379242 0 2020-03-20 21:23:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 65d86682-735c-49a1-a546-035a633c3d06 0xc003a26d97 0xc003a26d98}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghgn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghgn6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghgn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-20 21:23:31 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 20 21:23:34.052: INFO: Pod "webserver-deployment-595b5b9587-55j8t" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-55j8t webserver-deployment-595b5b9587- deployment-9106 /api/v1/namespaces/deployment-9106/pods/webserver-deployment-595b5b9587-55j8t a6c61ec7-c2bb-4604-8575-72615f294be3 1379185 0 2020-03-20 21:23:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 65d86682-735c-49a1-a546-035a633c3d06 0xc003a26ef7 0xc003a26ef8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghgn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghgn6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghgn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:ni
l,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-20 21:23:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 20 21:23:34.052: INFO: Pod "webserver-deployment-595b5b9587-5ljnz" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5ljnz webserver-deployment-595b5b9587- deployment-9106 /api/v1/namespaces/deployment-9106/pods/webserver-deployment-595b5b9587-5ljnz 3ebc91b4-c1aa-4b27-bd13-b184938a8ce9 1379191 0 2020-03-20 21:23:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 65d86682-735c-49a1-a546-035a633c3d06 0xc003a27057 0xc003a27058}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghgn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghgn6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghgn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-20 21:23:31 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 20 21:23:34.052: INFO: Pod "webserver-deployment-595b5b9587-6blb7" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6blb7 webserver-deployment-595b5b9587- deployment-9106 /api/v1/namespaces/deployment-9106/pods/webserver-deployment-595b5b9587-6blb7 acc4cc8a-b8fe-4504-bff9-63244234dfb4 1378927 0 2020-03-20 21:23:16 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 65d86682-735c-49a1-a546-035a633c3d06 0xc003a271b7 0xc003a271b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghgn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghgn6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghgn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,Ena
bleServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.169,StartTime:2020-03-20 21:23:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-20 21:23:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://efb7a35acfbb0f379ae49b44cb0bd067b0b409e033e260e85338c3b918748a85,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.169,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 20 21:23:34.053: INFO: Pod "webserver-deployment-595b5b9587-6p6m8" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6p6m8 webserver-deployment-595b5b9587- deployment-9106 /api/v1/namespaces/deployment-9106/pods/webserver-deployment-595b5b9587-6p6m8 a25bf746-cacd-48ed-8ba7-8307bd42ba88 1379210 0 2020-03-20 21:23:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 65d86682-735c-49a1-a546-035a633c3d06 0xc003a27337 0xc003a27338}] [] 
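
Aside: the ContainerState printed in each ContainerStatus is effectively a union — exactly one of Waiting, Running, or Terminated is non-nil, which is why the dumps show ContainerStateWaiting{Reason:ContainerCreating} for Pending pods and ContainerStateRunning{StartedAt:...} for available ones. A hedged sketch of rendering that union, again with stand-in types:

package main

import "fmt"

// ContainerState is a simplified stand-in: exactly one branch should be set.
type ContainerState struct {
	Waiting    *struct{ Reason string }
	Running    *struct{ StartedAt string }
	Terminated *struct{ ExitCode int }
}

func describe(s ContainerState) string {
	switch {
	case s.Waiting != nil:
		return "Waiting: " + s.Waiting.Reason
	case s.Running != nil:
		return "Running since " + s.Running.StartedAt
	case s.Terminated != nil:
		return fmt.Sprintf("Terminated (exit code %d)", s.Terminated.ExitCode)
	}
	return "Unknown"
}

func main() {
	creating := ContainerState{Waiting: &struct{ Reason string }{Reason: "ContainerCreating"}}
	fmt.Println(describe(creating)) // Waiting: ContainerCreating
}
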
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghgn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghgn6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghgn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-20 21:23:31 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 20 21:23:34.053: INFO: Pod "webserver-deployment-595b5b9587-9tjn2" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9tjn2 webserver-deployment-595b5b9587- deployment-9106 /api/v1/namespaces/deployment-9106/pods/webserver-deployment-595b5b9587-9tjn2 2c2d6ebd-4042-45d3-a640-204a3ab40c06 1378983 0 2020-03-20 21:23:16 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 65d86682-735c-49a1-a546-035a633c3d06 0xc003a27497 0xc003a27498}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghgn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghgn6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghgn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,Ena
bleServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.172,StartTime:2020-03-20 21:23:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-20 21:23:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d3cc08da3790e30b491f238f46d45b3c7a3b4851fc972b089c7e85afa0d447f5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.172,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 20 21:23:34.053: INFO: Pod "webserver-deployment-595b5b9587-bpmcb" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bpmcb webserver-deployment-595b5b9587- deployment-9106 /api/v1/namespaces/deployment-9106/pods/webserver-deployment-595b5b9587-bpmcb ae626d8c-1eb7-4377-8716-e3f665140ed0 1379202 0 2020-03-20 21:23:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 65d86682-735c-49a1-a546-035a633c3d06 0xc003a27617 0xc003a27618}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghgn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghgn6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghgn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-20 21:23:31 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 20 21:23:34.053: INFO: Pod "webserver-deployment-595b5b9587-bvmcb" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bvmcb webserver-deployment-595b5b9587- deployment-9106 /api/v1/namespaces/deployment-9106/pods/webserver-deployment-595b5b9587-bvmcb 8df9cb4a-bc42-46d0-b039-e60034c98054 1379004 0 2020-03-20 21:23:16 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 65d86682-735c-49a1-a546-035a633c3d06 0xc003a27777 0xc003a27778}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghgn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghgn6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghgn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,En
ableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.185,StartTime:2020-03-20 21:23:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-20 21:23:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b7535a10f3261abd1f35d6019bcf566119706a7b3ca2a4b33d2329abb6b53a85,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.185,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 20 21:23:34.053: INFO: Pod "webserver-deployment-595b5b9587-g6bxd" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-g6bxd webserver-deployment-595b5b9587- deployment-9106 /api/v1/namespaces/deployment-9106/pods/webserver-deployment-595b5b9587-g6bxd bc6b6309-bfdb-4bd5-a47a-ed59184c0679 1379008 0 2020-03-20 21:23:16 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 65d86682-735c-49a1-a546-035a633c3d06 0xc003a278f7 0xc003a278f8}] [] 
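
Aside: every pod dump above carries the same two NoExecute tolerations (node.kubernetes.io/not-ready and node.kubernetes.io/unreachable, TolerationSeconds:*300). These are the defaults injected by the DefaultTolerationSeconds admission plugin; the effect is that a pod stays bound for roughly 300 seconds after its node goes NotReady or Unreachable before being evicted. A sketch of that data with stand-in types:

package main

import "fmt"

// Toleration is a simplified stand-in for the API type in the dumps.
type Toleration struct {
	Key               string
	Effect            string
	TolerationSeconds *int64
}

func main() {
	secs := int64(300)
	defaults := []Toleration{
		{Key: "node.kubernetes.io/not-ready", Effect: "NoExecute", TolerationSeconds: &secs},
		{Key: "node.kubernetes.io/unreachable", Effect: "NoExecute", TolerationSeconds: &secs},
	}
	for _, t := range defaults {
		fmt.Printf("%s: tolerate %s for %ds, then evict\n", t.Key, t.Effect, *t.TolerationSeconds)
	}
}
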
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghgn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghgn6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghgn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.184,StartTime:2020-03-20 21:23:17 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-20 21:23:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3a0f1c9082e2440d1143928a337008e8575f6930edcffe9bdcdffd45f56a1548,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.184,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 20 21:23:34.053: INFO: Pod "webserver-deployment-595b5b9587-gfczc" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gfczc webserver-deployment-595b5b9587- deployment-9106 /api/v1/namespaces/deployment-9106/pods/webserver-deployment-595b5b9587-gfczc 78825e2d-6ded-4cff-8dda-c8e5b26449ff 1379178 0 2020-03-20 21:23:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 65d86682-735c-49a1-a546-035a633c3d06 0xc003a27a77 0xc003a27a78}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghgn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghgn6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghgn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-20 21:23:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 20 21:23:34.054: INFO: Pod "webserver-deployment-595b5b9587-hjs6l" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hjs6l webserver-deployment-595b5b9587- deployment-9106 /api/v1/namespaces/deployment-9106/pods/webserver-deployment-595b5b9587-hjs6l b9d85250-15a8-4bc9-a273-9f3ca513eefd 1379012 0 2020-03-20 21:23:16 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 65d86682-735c-49a1-a546-035a633c3d06 0xc003a27bd7 0xc003a27bd8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghgn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghgn6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghgn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.186,StartTime:2020-03-20 21:23:17 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-20 21:23:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://08af19018f8f19d93873184383158d122544fb97188d2f7249ebb7ddb379640d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.186,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 20 21:23:34.054: INFO: Pod "webserver-deployment-595b5b9587-kw49k" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-kw49k webserver-deployment-595b5b9587- deployment-9106 /api/v1/namespaces/deployment-9106/pods/webserver-deployment-595b5b9587-kw49k 1c362a33-fe50-4562-9338-7530c69bed99 1378946 0 2020-03-20 21:23:16 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 65d86682-735c-49a1-a546-035a633c3d06 0xc003a27d57 0xc003a27d58}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghgn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghgn6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghgn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,
Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.182,StartTime:2020-03-20 21:23:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-20 21:23:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3e1f77b6fa58e8e29d948f07aeacf2b224cc53d0300a77a5790b57c7e8b40e4c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.182,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 20 21:23:34.054: INFO: Pod "webserver-deployment-595b5b9587-m8m8v" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-m8m8v webserver-deployment-595b5b9587- deployment-9106 /api/v1/namespaces/deployment-9106/pods/webserver-deployment-595b5b9587-m8m8v 02f2fa44-2f97-43e3-9137-8b9931e67d2a 1378950 0 2020-03-20 21:23:16 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 65d86682-735c-49a1-a546-035a633c3d06 0xc003a27ed7 0xc003a27ed8}] [] 
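
Aside: QOSClass:BestEffort in these dumps follows directly from the empty Resources{Limits:{},Requests:{}} on the httpd container. A deliberately simplified sketch of the classification rule (the authoritative logic lives in Kubernetes' qos helpers; Guaranteed additionally requires cpu and memory requests equal to limits on every container):

package main

import "fmt"

// qosClass is a rough approximation, not the real kubelet implementation.
func qosClass(requests, limits map[string]string) string {
	if len(requests) == 0 && len(limits) == 0 {
		return "BestEffort" // no resources declared at all
	}
	for k, v := range requests {
		if limits[k] != v {
			return "Burstable" // any mismatch demotes to Burstable
		}
	}
	if len(limits) != len(requests) || len(requests) < 2 {
		return "Burstable" // Guaranteed needs matching cpu AND memory
	}
	return "Guaranteed"
}

func main() {
	fmt.Println(qosClass(nil, nil)) // BestEffort, as for the httpd pods above
}
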
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghgn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghgn6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghgn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.183,StartTime:2020-03-20 21:23:17 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-20 21:23:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://540b158c8d02ed7ebd50297123008da615815d31e35fd2aec0c5e46e2c41f4d9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.183,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 20 21:23:34.054: INFO: Pod "webserver-deployment-595b5b9587-qv6pb" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qv6pb webserver-deployment-595b5b9587- deployment-9106 /api/v1/namespaces/deployment-9106/pods/webserver-deployment-595b5b9587-qv6pb b009010c-c036-436d-b36b-438879fc7378 1379204 0 2020-03-20 21:23:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 65d86682-735c-49a1-a546-035a633c3d06 0xc0044f0057 0xc0044f0058}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghgn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghgn6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghgn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-20 21:23:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 20 21:23:34.054: INFO: Pod "webserver-deployment-595b5b9587-qwbgx" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qwbgx webserver-deployment-595b5b9587- deployment-9106 /api/v1/namespaces/deployment-9106/pods/webserver-deployment-595b5b9587-qwbgx 68fc439e-d6e7-4b2d-8171-220be7028218 1379195 0 2020-03-20 21:23:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 65d86682-735c-49a1-a546-035a633c3d06 0xc0044f01b7 0xc0044f01b8}] [] 
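
Aside: the tally the test is building here — how many of the deployment's pods are available versus still ContainerCreating — can be reproduced outside the e2e framework with client-go. A hedged sketch (recent client-go, where List takes a context; in the v0.17-era libraries matching this log it took only ListOptions). The kubeconfig path, namespace, and label selector are taken from this log run and would differ elsewhere:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pods, err := cs.CoreV1().Pods("deployment-9106").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "name=httpd"})
	if err != nil {
		panic(err)
	}
	available := 0
	for _, p := range pods.Items {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				available++
			}
		}
	}
	fmt.Printf("%d/%d pods available\n", available, len(pods.Items))
}
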
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghgn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghgn6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghgn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-20 21:23:31 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 20 21:23:34.054: INFO: Pod "webserver-deployment-595b5b9587-sjrg8" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-sjrg8 webserver-deployment-595b5b9587- deployment-9106 /api/v1/namespaces/deployment-9106/pods/webserver-deployment-595b5b9587-sjrg8 db9f38a6-4658-4684-87c2-df078c075f4a 1379208 0 2020-03-20 21:23:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 65d86682-735c-49a1-a546-035a633c3d06 0xc0044f0317 0xc0044f0318}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghgn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghgn6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghgn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil
,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-20 21:23:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 20 21:23:34.054: INFO: Pod "webserver-deployment-595b5b9587-v2pmb" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-v2pmb webserver-deployment-595b5b9587- deployment-9106 /api/v1/namespaces/deployment-9106/pods/webserver-deployment-595b5b9587-v2pmb 3281638c-6fae-48ce-9337-7bef8ab4da75 1379219 0 2020-03-20 21:23:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 65d86682-735c-49a1-a546-035a633c3d06 0xc0044f05e7 0xc0044f05e8}] [] 
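Every PodSpec in this log carries the same two tolerations, node.kubernetes.io/not-ready and node.kubernetes.io/unreachable, both Exists/NoExecute with TolerationSeconds *300, even though the test's pod template sets none. They are injected at admission time by the DefaultTolerationSeconds plugin, so each pod may stay bound to a not-ready or unreachable node for 300 seconds before eviction. Built explicitly with the same corev1 types, purely for illustration, they look like this:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The two admission-injected tolerations seen on every pod above.
	seconds := int64(300)
	defaults := []corev1.Toleration{
		{Key: "node.kubernetes.io/not-ready", Operator: corev1.TolerationOpExists,
			Effect: corev1.TaintEffectNoExecute, TolerationSeconds: &seconds},
		{Key: "node.kubernetes.io/unreachable", Operator: corev1.TolerationOpExists,
			Effect: corev1.TaintEffectNoExecute, TolerationSeconds: &seconds},
	}
	for _, t := range defaults {
		fmt.Printf("%s %s %ds\n", t.Key, t.Effect, *t.TolerationSeconds)
	}
}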
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghgn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghgn6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghgn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-20 21:23:31 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 20 21:23:34.055: INFO: Pod "webserver-deployment-595b5b9587-vthz7" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vthz7 webserver-deployment-595b5b9587- deployment-9106 /api/v1/namespaces/deployment-9106/pods/webserver-deployment-595b5b9587-vthz7 638950e1-e298-4eda-a19b-331d33da8d75 1379221 0 2020-03-20 21:23:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 65d86682-735c-49a1-a546-035a633c3d06 0xc0044f0747 0xc0044f0748}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghgn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghgn6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghgn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:ni
l,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-20 21:23:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 20 21:23:34.055: INFO: Pod "webserver-deployment-c7997dcc8-4psl6" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4psl6 webserver-deployment-c7997dcc8- deployment-9106 /api/v1/namespaces/deployment-9106/pods/webserver-deployment-c7997dcc8-4psl6 41fc2bc2-f833-4e39-a5b5-752c82d5adba 1379173 0 2020-03-20 21:23:31 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a096131-df90-426b-9264-dde5b5f0cebb 0xc0044f08a7 0xc0044f08a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghgn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghgn6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghgn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-20 21:23:31 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 20 21:23:34.055: INFO: Pod "webserver-deployment-c7997dcc8-4x9f9" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4x9f9 webserver-deployment-c7997dcc8- deployment-9106 /api/v1/namespaces/deployment-9106/pods/webserver-deployment-c7997dcc8-4x9f9 0a241949-44f2-487f-a71e-340cfe78b188 1379189 0 2020-03-20 21:23:31 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a096131-df90-426b-9264-dde5b5f0cebb 0xc0044f0a27 0xc0044f0a28}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghgn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghgn6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghgn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhea
d:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-20 21:23:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 20 21:23:34.055: INFO: Pod "webserver-deployment-c7997dcc8-64j78" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-64j78 webserver-deployment-c7997dcc8- deployment-9106 /api/v1/namespaces/deployment-9106/pods/webserver-deployment-c7997dcc8-64j78 c0db10ab-4a4f-4463-a764-ec102bc74967 1379243 0 2020-03-20 21:23:31 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a096131-df90-426b-9264-dde5b5f0cebb 0xc0044f0ba7 0xc0044f0ba8}] [] 
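Two pod-template-hash values split these dumps into the Deployment's two ReplicaSets: the 595b5b9587 pods reference docker.io/library/httpd:2.4.38-alpine, while the c7997dcc8 pods were built from a template pointing at the deliberately nonexistent image webserver:404. The log therefore captures a Deployment that owns pods from two templates at once, i.e. a rollout in progress, with neither set ready. Tallying a pod list by that label makes the split visible at a glance (countByHash is our illustrative helper, not a framework function):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// countByHash tallies pods per ReplicaSet via the pod-template-hash label the
// deployment controller stamps on every pod it owns.
func countByHash(pods []corev1.Pod) map[string]int {
	counts := map[string]int{}
	for _, p := range pods {
		counts[p.Labels["pod-template-hash"]]++
	}
	return counts
}

func main() {
	pods := []corev1.Pod{
		{ObjectMeta: metav1.ObjectMeta{Name: "webserver-deployment-595b5b9587-qwbgx",
			Labels: map[string]string{"pod-template-hash": "595b5b9587"}}},
		{ObjectMeta: metav1.ObjectMeta{Name: "webserver-deployment-c7997dcc8-4psl6",
			Labels: map[string]string{"pod-template-hash": "c7997dcc8"}}},
	}
	fmt.Println(countByHash(pods)) // map[595b5b9587:1 c7997dcc8:1]
}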
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghgn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghgn6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghgn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-20 21:23:31 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 20 21:23:34.055: INFO: Pod "webserver-deployment-c7997dcc8-ftr7k" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ftr7k webserver-deployment-c7997dcc8- deployment-9106 /api/v1/namespaces/deployment-9106/pods/webserver-deployment-c7997dcc8-ftr7k 0aa1357f-0e04-41e8-a791-032ef3e4d755 1379082 0 2020-03-20 21:23:28 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a096131-df90-426b-9264-dde5b5f0cebb 0xc0044f0d27 0xc0044f0d28}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghgn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghgn6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghgn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhea
d:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-20 21:23:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 20 21:23:34.055: INFO: Pod "webserver-deployment-c7997dcc8-gqspt" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gqspt webserver-deployment-c7997dcc8- deployment-9106 /api/v1/namespaces/deployment-9106/pods/webserver-deployment-c7997dcc8-gqspt e82122bc-5416-49af-931f-8251091d0e29 1379249 0 2020-03-20 21:23:28 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a096131-df90-426b-9264-dde5b5f0cebb 0xc0044f0ea7 0xc0044f0ea8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghgn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghgn6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghgn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.188,StartTime:2020-03-20 21:23:29 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.188,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 20 21:23:34.055: INFO: Pod "webserver-deployment-c7997dcc8-gr6fd" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gr6fd webserver-deployment-c7997dcc8- deployment-9106 /api/v1/namespaces/deployment-9106/pods/webserver-deployment-c7997dcc8-gr6fd e4f24e8e-cdec-499e-bdc8-8ffb8710819a 1379234 0 2020-03-20 21:23:31 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a096131-df90-426b-9264-dde5b5f0cebb 0xc0044f1057 0xc0044f1058}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghgn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghgn6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghgn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Tolera
tion{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-20 21:23:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 20 21:23:34.056: INFO: Pod "webserver-deployment-c7997dcc8-k2mtp" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-k2mtp webserver-deployment-c7997dcc8- deployment-9106 /api/v1/namespaces/deployment-9106/pods/webserver-deployment-c7997dcc8-k2mtp b54a0794-c2b7-4ee9-9012-c80477c4c1bd 1379248 0 2020-03-20 21:23:31 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a096131-df90-426b-9264-dde5b5f0cebb 0xc0044f11d7 0xc0044f11d8}] [] 
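The gqspt dump above is the first to show why no c7997dcc8 pod can ever become ready: its container has left ContainerCreating and is Waiting with Reason ErrImagePull, because docker.io/library/webserver:404 cannot be resolved ("pull access denied, repository does not exist or may require authorization"). A pod still in ContainerCreating may yet come up; a pod in this state will only oscillate between ErrImagePull and ImagePullBackOff until the image reference is fixed. A sketch that distinguishes the two situations from a pod's container statuses (imagePullFailed is our name for it):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// imagePullFailed reports whether any container is stuck on an image pull,
// as opposed to merely still being created.
func imagePullFailed(pod *corev1.Pod) (string, bool) {
	for _, cs := range pod.Status.ContainerStatuses {
		w := cs.State.Waiting
		if w != nil && (w.Reason == "ErrImagePull" || w.Reason == "ImagePullBackOff") {
			return fmt.Sprintf("%s: %s", cs.Name, w.Reason), true
		}
	}
	return "", false
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{
		ContainerStatuses: []corev1.ContainerStatus{{
			Name:  "httpd",
			State: corev1.ContainerState{Waiting: &corev1.ContainerStateWaiting{Reason: "ErrImagePull"}},
		}},
	}}
	if msg, bad := imagePullFailed(pod); bad {
		fmt.Println("unavailable:", msg) // unavailable: httpd: ErrImagePull
	}
}

kubectl surfaces the same Waiting reason in the STATUS column of kubectl get pods, which is usually the quickest way to spot this failure interactively.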
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghgn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghgn6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghgn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-20 21:23:31 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 20 21:23:34.056: INFO: Pod "webserver-deployment-c7997dcc8-krczs" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-krczs webserver-deployment-c7997dcc8- deployment-9106 /api/v1/namespaces/deployment-9106/pods/webserver-deployment-c7997dcc8-krczs 37de9c5a-9538-423d-a18b-d085d8476864 1379106 0 2020-03-20 21:23:29 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a096131-df90-426b-9264-dde5b5f0cebb 0xc0044f1357 0xc0044f1358}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghgn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghgn6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghgn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhea
d:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-20 21:23:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 20 21:23:34.056: INFO: Pod "webserver-deployment-c7997dcc8-nw9dg" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nw9dg webserver-deployment-c7997dcc8- deployment-9106 /api/v1/namespaces/deployment-9106/pods/webserver-deployment-c7997dcc8-nw9dg 77f18304-400c-48f1-b89f-fe0f704bc292 1379256 0 2020-03-20 21:23:28 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a096131-df90-426b-9264-dde5b5f0cebb 0xc0044f14d7 0xc0044f14d8}] [] 
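All of these pods also report QOSClass:BestEffort, which follows directly from their specs: the httpd container declares empty Resources.Limits and Resources.Requests, and a pod in which no container requests any resources is classified BestEffort, the first tier evicted under node pressure. A rough sketch of that classification rule (the real kubelet logic additionally considers init containers and the Guaranteed/Burstable distinctions):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isBestEffort mirrors the rule visible in these dumps: no requests and no
// limits on any container means the pod is BestEffort.
func isBestEffort(pod *corev1.Pod) bool {
	for _, c := range pod.Spec.Containers {
		if len(c.Resources.Requests) > 0 || len(c.Resources.Limits) > 0 {
			return false
		}
	}
	return true
}

func main() {
	pod := &corev1.Pod{Spec: corev1.PodSpec{
		Containers: []corev1.Container{{Name: "httpd", Image: "webserver:404"}},
	}}
	fmt.Println(isBestEffort(pod)) // true
}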
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghgn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghgn6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghgn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.187,StartTime:2020-03-20 21:23:28 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.187,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 20 21:23:34.056: INFO: Pod "webserver-deployment-c7997dcc8-pnlps" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-pnlps webserver-deployment-c7997dcc8- deployment-9106 /api/v1/namespaces/deployment-9106/pods/webserver-deployment-c7997dcc8-pnlps ac81d7bc-0e7e-453d-9a3a-5b76e486c562 1379255 0 2020-03-20 21:23:31 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a096131-df90-426b-9264-dde5b5f0cebb 0xc0044f1687 0xc0044f1688}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghgn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghgn6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghgn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Tolera
tion{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-20 21:23:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 20 21:23:34.056: INFO: Pod "webserver-deployment-c7997dcc8-q8x7w" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-q8x7w webserver-deployment-c7997dcc8- deployment-9106 /api/v1/namespaces/deployment-9106/pods/webserver-deployment-c7997dcc8-q8x7w 8312c779-bab8-4af8-bdb2-21bc3de80e0a 1379102 0 2020-03-20 21:23:29 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a096131-df90-426b-9264-dde5b5f0cebb 0xc0044f1807 0xc0044f1808}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghgn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghgn6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghgn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-20 21:23:29 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 20 21:23:34.056: INFO: Pod "webserver-deployment-c7997dcc8-r8mhf" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-r8mhf webserver-deployment-c7997dcc8- deployment-9106 /api/v1/namespaces/deployment-9106/pods/webserver-deployment-c7997dcc8-r8mhf 09b089c7-ee9b-415f-835f-54ca522075c3 1379237 0 2020-03-20 21:23:31 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a096131-df90-426b-9264-dde5b5f0cebb 0xc0044f1987 0xc0044f1988}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghgn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghgn6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghgn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhe
ad:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-20 21:23:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 20 21:23:34.056: INFO: Pod "webserver-deployment-c7997dcc8-vlv2w" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vlv2w webserver-deployment-c7997dcc8- deployment-9106 /api/v1/namespaces/deployment-9106/pods/webserver-deployment-c7997dcc8-vlv2w 6f32f299-8e14-4ada-bc13-b82167adebe7 1379198 0 2020-03-20 21:23:31 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a096131-df90-426b-9264-dde5b5f0cebb 0xc0044f1b07 0xc0044f1b08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghgn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghgn6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghgn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:23:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-20 21:23:31 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:23:34.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9106" for this suite. • [SLOW TEST:17.332 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":52,"skipped":745,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:23:34.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-8979 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 20 21:23:35.135: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 20 21:24:07.266: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.189:8080/dial?request=hostname&protocol=http&host=10.244.1.188&port=8080&tries=1'] Namespace:pod-network-test-8979 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 20 21:24:07.266: INFO: >>> kubeConfig: /root/.kube/config I0320 21:24:07.301383 7 log.go:172] (0xc002770630) (0xc001505860) Create stream I0320 21:24:07.301407 7 log.go:172] (0xc002770630) (0xc001505860) Stream added, broadcasting: 1 I0320 21:24:07.303447 7 log.go:172] (0xc002770630) Reply frame received for 1 I0320 21:24:07.303517 7 log.go:172] (0xc002770630) (0xc001f41040) Create stream I0320 21:24:07.303535 7 log.go:172] (0xc002770630) (0xc001f41040) Stream added, broadcasting: 3 I0320 21:24:07.304544 7 log.go:172] (0xc002770630) Reply frame received for 3 I0320 21:24:07.304576 7 log.go:172] (0xc002770630) (0xc001505900) Create stream I0320 21:24:07.304590 7 log.go:172] (0xc002770630) (0xc001505900) Stream added, broadcasting: 5 I0320 21:24:07.305739 7 log.go:172] (0xc002770630) Reply frame 
received for 5 I0320 21:24:07.399121 7 log.go:172] (0xc002770630) Data frame received for 3 I0320 21:24:07.399169 7 log.go:172] (0xc001f41040) (3) Data frame handling I0320 21:24:07.399206 7 log.go:172] (0xc001f41040) (3) Data frame sent I0320 21:24:07.399764 7 log.go:172] (0xc002770630) Data frame received for 5 I0320 21:24:07.399802 7 log.go:172] (0xc001505900) (5) Data frame handling I0320 21:24:07.399841 7 log.go:172] (0xc002770630) Data frame received for 3 I0320 21:24:07.399887 7 log.go:172] (0xc001f41040) (3) Data frame handling I0320 21:24:07.402491 7 log.go:172] (0xc002770630) Data frame received for 1 I0320 21:24:07.402521 7 log.go:172] (0xc001505860) (1) Data frame handling I0320 21:24:07.402540 7 log.go:172] (0xc001505860) (1) Data frame sent I0320 21:24:07.402563 7 log.go:172] (0xc002770630) (0xc001505860) Stream removed, broadcasting: 1 I0320 21:24:07.402588 7 log.go:172] (0xc002770630) Go away received I0320 21:24:07.402759 7 log.go:172] (0xc002770630) (0xc001505860) Stream removed, broadcasting: 1 I0320 21:24:07.402793 7 log.go:172] (0xc002770630) (0xc001f41040) Stream removed, broadcasting: 3 I0320 21:24:07.402808 7 log.go:172] (0xc002770630) (0xc001505900) Stream removed, broadcasting: 5 Mar 20 21:24:07.402: INFO: Waiting for responses: map[] Mar 20 21:24:07.406: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.189:8080/dial?request=hostname&protocol=http&host=10.244.2.198&port=8080&tries=1'] Namespace:pod-network-test-8979 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 20 21:24:07.406: INFO: >>> kubeConfig: /root/.kube/config I0320 21:24:07.434495 7 log.go:172] (0xc001b58370) (0xc0014e54a0) Create stream I0320 21:24:07.434526 7 log.go:172] (0xc001b58370) (0xc0014e54a0) Stream added, broadcasting: 1 I0320 21:24:07.439427 7 log.go:172] (0xc001b58370) Reply frame received for 1 I0320 21:24:07.439477 7 log.go:172] (0xc001b58370) (0xc0014e57c0) Create stream I0320 21:24:07.439501 7 log.go:172] (0xc001b58370) (0xc0014e57c0) Stream added, broadcasting: 3 I0320 21:24:07.442648 7 log.go:172] (0xc001b58370) Reply frame received for 3 I0320 21:24:07.442684 7 log.go:172] (0xc001b58370) (0xc0015059a0) Create stream I0320 21:24:07.442699 7 log.go:172] (0xc001b58370) (0xc0015059a0) Stream added, broadcasting: 5 I0320 21:24:07.443545 7 log.go:172] (0xc001b58370) Reply frame received for 5 I0320 21:24:07.526990 7 log.go:172] (0xc001b58370) Data frame received for 3 I0320 21:24:07.527015 7 log.go:172] (0xc0014e57c0) (3) Data frame handling I0320 21:24:07.527033 7 log.go:172] (0xc0014e57c0) (3) Data frame sent I0320 21:24:07.527437 7 log.go:172] (0xc001b58370) Data frame received for 5 I0320 21:24:07.527502 7 log.go:172] (0xc0015059a0) (5) Data frame handling I0320 21:24:07.527637 7 log.go:172] (0xc001b58370) Data frame received for 3 I0320 21:24:07.527660 7 log.go:172] (0xc0014e57c0) (3) Data frame handling I0320 21:24:07.528897 7 log.go:172] (0xc001b58370) Data frame received for 1 I0320 21:24:07.528912 7 log.go:172] (0xc0014e54a0) (1) Data frame handling I0320 21:24:07.528920 7 log.go:172] (0xc0014e54a0) (1) Data frame sent I0320 21:24:07.528930 7 log.go:172] (0xc001b58370) (0xc0014e54a0) Stream removed, broadcasting: 1 I0320 21:24:07.528944 7 log.go:172] (0xc001b58370) Go away received I0320 21:24:07.529015 7 log.go:172] (0xc001b58370) (0xc0014e54a0) Stream removed, broadcasting: 1 I0320 21:24:07.529032 7 log.go:172] (0xc001b58370) (0xc0014e57c0) Stream removed, broadcasting: 3 
I0320 21:24:07.529040 7 log.go:172] (0xc001b58370) (0xc0015059a0) Stream removed, broadcasting: 5 Mar 20 21:24:07.529: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:24:07.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8979" for this suite. • [SLOW TEST:33.449 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":791,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:24:07.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:24:23.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7916" for this suite. • [SLOW TEST:16.157 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":278,"completed":54,"skipped":795,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:24:23.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 20 21:24:23.755: INFO: Waiting up to 5m0s for pod "pod-6d59c426-fb83-4cfe-85e0-7f9ad06bc144" in namespace "emptydir-3534" to be "success or failure" Mar 20 21:24:23.766: INFO: Pod "pod-6d59c426-fb83-4cfe-85e0-7f9ad06bc144": Phase="Pending", Reason="", readiness=false. Elapsed: 10.817413ms Mar 20 21:24:25.769: INFO: Pod "pod-6d59c426-fb83-4cfe-85e0-7f9ad06bc144": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014597473s Mar 20 21:24:27.774: INFO: Pod "pod-6d59c426-fb83-4cfe-85e0-7f9ad06bc144": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018755789s STEP: Saw pod success Mar 20 21:24:27.774: INFO: Pod "pod-6d59c426-fb83-4cfe-85e0-7f9ad06bc144" satisfied condition "success or failure" Mar 20 21:24:27.777: INFO: Trying to get logs from node jerma-worker2 pod pod-6d59c426-fb83-4cfe-85e0-7f9ad06bc144 container test-container: STEP: delete the pod Mar 20 21:24:27.874: INFO: Waiting for pod pod-6d59c426-fb83-4cfe-85e0-7f9ad06bc144 to disappear Mar 20 21:24:27.885: INFO: Pod pod-6d59c426-fb83-4cfe-85e0-7f9ad06bc144 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:24:27.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3534" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":799,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:24:27.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 20 21:24:27.992: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bcb85db4-bd69-42cf-b8ac-ee45945c5543" in namespace "projected-3284" to be "success or failure" Mar 20 21:24:28.011: INFO: Pod "downwardapi-volume-bcb85db4-bd69-42cf-b8ac-ee45945c5543": Phase="Pending", Reason="", readiness=false. Elapsed: 18.992343ms Mar 20 21:24:30.014: INFO: Pod "downwardapi-volume-bcb85db4-bd69-42cf-b8ac-ee45945c5543": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022182368s Mar 20 21:24:32.023: INFO: Pod "downwardapi-volume-bcb85db4-bd69-42cf-b8ac-ee45945c5543": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031096432s STEP: Saw pod success Mar 20 21:24:32.023: INFO: Pod "downwardapi-volume-bcb85db4-bd69-42cf-b8ac-ee45945c5543" satisfied condition "success or failure" Mar 20 21:24:32.026: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-bcb85db4-bd69-42cf-b8ac-ee45945c5543 container client-container: STEP: delete the pod Mar 20 21:24:32.106: INFO: Waiting for pod downwardapi-volume-bcb85db4-bd69-42cf-b8ac-ee45945c5543 to disappear Mar 20 21:24:32.108: INFO: Pod downwardapi-volume-bcb85db4-bd69-42cf-b8ac-ee45945c5543 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:24:32.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3284" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":815,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:24:32.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 20 21:24:32.176: INFO: Waiting up to 5m0s for pod "downwardapi-volume-28d810e4-7b5c-4e81-972f-a428f2ea5537" in namespace "downward-api-3922" to be "success or failure" Mar 20 21:24:32.196: INFO: Pod "downwardapi-volume-28d810e4-7b5c-4e81-972f-a428f2ea5537": Phase="Pending", Reason="", readiness=false. Elapsed: 19.391237ms Mar 20 21:24:34.207: INFO: Pod "downwardapi-volume-28d810e4-7b5c-4e81-972f-a428f2ea5537": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030870292s Mar 20 21:24:36.211: INFO: Pod "downwardapi-volume-28d810e4-7b5c-4e81-972f-a428f2ea5537": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034406476s STEP: Saw pod success Mar 20 21:24:36.211: INFO: Pod "downwardapi-volume-28d810e4-7b5c-4e81-972f-a428f2ea5537" satisfied condition "success or failure" Mar 20 21:24:36.214: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-28d810e4-7b5c-4e81-972f-a428f2ea5537 container client-container: STEP: delete the pod Mar 20 21:24:36.257: INFO: Waiting for pod downwardapi-volume-28d810e4-7b5c-4e81-972f-a428f2ea5537 to disappear Mar 20 21:24:36.264: INFO: Pod downwardapi-volume-28d810e4-7b5c-4e81-972f-a428f2ea5537 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:24:36.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3922" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":831,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:24:36.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-09fb09a7-2981-4f1a-bb8a-45fe3e3ad13a STEP: Creating secret with name secret-projected-all-test-volume-6ad720fe-d27b-47e5-b55d-6cd89db4bd0a STEP: Creating a pod to test Check all projections for projected volume plugin Mar 20 21:24:36.351: INFO: Waiting up to 5m0s for pod "projected-volume-c3cfe08f-a46e-40d8-b7a7-ea744865c919" in namespace "projected-849" to be "success or failure" Mar 20 21:24:36.370: INFO: Pod "projected-volume-c3cfe08f-a46e-40d8-b7a7-ea744865c919": Phase="Pending", Reason="", readiness=false. Elapsed: 19.182384ms Mar 20 21:24:38.375: INFO: Pod "projected-volume-c3cfe08f-a46e-40d8-b7a7-ea744865c919": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023432143s Mar 20 21:24:40.378: INFO: Pod "projected-volume-c3cfe08f-a46e-40d8-b7a7-ea744865c919": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02728374s STEP: Saw pod success Mar 20 21:24:40.378: INFO: Pod "projected-volume-c3cfe08f-a46e-40d8-b7a7-ea744865c919" satisfied condition "success or failure" Mar 20 21:24:40.381: INFO: Trying to get logs from node jerma-worker2 pod projected-volume-c3cfe08f-a46e-40d8-b7a7-ea744865c919 container projected-all-volume-test: STEP: delete the pod Mar 20 21:24:40.414: INFO: Waiting for pod projected-volume-c3cfe08f-a46e-40d8-b7a7-ea744865c919 to disappear Mar 20 21:24:40.447: INFO: Pod projected-volume-c3cfe08f-a46e-40d8-b7a7-ea744865c919 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:24:40.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-849" for this suite. •{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":58,"skipped":844,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:24:40.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:24:56.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3695" for this suite. • [SLOW TEST:16.203 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":278,"completed":59,"skipped":860,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:24:56.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 20 21:24:57.181: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 20 21:24:59.193: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720336297, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720336297, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720336297, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720336297, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 20 21:25:02.223: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 20 21:25:02.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:25:03.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5411" for this suite. STEP: Destroying namespace "webhook-5411-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.773 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":60,"skipped":891,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:25:03.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3921.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3921.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3921.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3921.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 20 21:25:09.588: INFO: DNS probes using dns-test-c6bb89d8-cabd-49d8-9cfd-4a9e556725ac succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3921.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3921.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3921.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3921.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 20 21:25:15.661: INFO: File wheezy_udp@dns-test-service-3.dns-3921.svc.cluster.local from pod dns-3921/dns-test-360edbba-6b38-416b-bbe6-0bb8cf94991a contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 20 21:25:15.665: INFO: File jessie_udp@dns-test-service-3.dns-3921.svc.cluster.local from pod dns-3921/dns-test-360edbba-6b38-416b-bbe6-0bb8cf94991a contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 20 21:25:15.665: INFO: Lookups using dns-3921/dns-test-360edbba-6b38-416b-bbe6-0bb8cf94991a failed for: [wheezy_udp@dns-test-service-3.dns-3921.svc.cluster.local jessie_udp@dns-test-service-3.dns-3921.svc.cluster.local] Mar 20 21:25:20.670: INFO: File wheezy_udp@dns-test-service-3.dns-3921.svc.cluster.local from pod dns-3921/dns-test-360edbba-6b38-416b-bbe6-0bb8cf94991a contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 20 21:25:20.673: INFO: File jessie_udp@dns-test-service-3.dns-3921.svc.cluster.local from pod dns-3921/dns-test-360edbba-6b38-416b-bbe6-0bb8cf94991a contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 20 21:25:20.673: INFO: Lookups using dns-3921/dns-test-360edbba-6b38-416b-bbe6-0bb8cf94991a failed for: [wheezy_udp@dns-test-service-3.dns-3921.svc.cluster.local jessie_udp@dns-test-service-3.dns-3921.svc.cluster.local] Mar 20 21:25:25.670: INFO: File wheezy_udp@dns-test-service-3.dns-3921.svc.cluster.local from pod dns-3921/dns-test-360edbba-6b38-416b-bbe6-0bb8cf94991a contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 20 21:25:25.673: INFO: File jessie_udp@dns-test-service-3.dns-3921.svc.cluster.local from pod dns-3921/dns-test-360edbba-6b38-416b-bbe6-0bb8cf94991a contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 20 21:25:25.673: INFO: Lookups using dns-3921/dns-test-360edbba-6b38-416b-bbe6-0bb8cf94991a failed for: [wheezy_udp@dns-test-service-3.dns-3921.svc.cluster.local jessie_udp@dns-test-service-3.dns-3921.svc.cluster.local] Mar 20 21:25:30.670: INFO: File wheezy_udp@dns-test-service-3.dns-3921.svc.cluster.local from pod dns-3921/dns-test-360edbba-6b38-416b-bbe6-0bb8cf94991a contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 20 21:25:30.674: INFO: File jessie_udp@dns-test-service-3.dns-3921.svc.cluster.local from pod dns-3921/dns-test-360edbba-6b38-416b-bbe6-0bb8cf94991a contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 20 21:25:30.674: INFO: Lookups using dns-3921/dns-test-360edbba-6b38-416b-bbe6-0bb8cf94991a failed for: [wheezy_udp@dns-test-service-3.dns-3921.svc.cluster.local jessie_udp@dns-test-service-3.dns-3921.svc.cluster.local] Mar 20 21:25:35.670: INFO: File wheezy_udp@dns-test-service-3.dns-3921.svc.cluster.local from pod dns-3921/dns-test-360edbba-6b38-416b-bbe6-0bb8cf94991a contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 20 21:25:35.673: INFO: File jessie_udp@dns-test-service-3.dns-3921.svc.cluster.local from pod dns-3921/dns-test-360edbba-6b38-416b-bbe6-0bb8cf94991a contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 20 21:25:35.673: INFO: Lookups using dns-3921/dns-test-360edbba-6b38-416b-bbe6-0bb8cf94991a failed for: [wheezy_udp@dns-test-service-3.dns-3921.svc.cluster.local jessie_udp@dns-test-service-3.dns-3921.svc.cluster.local] Mar 20 21:25:40.675: INFO: DNS probes using dns-test-360edbba-6b38-416b-bbe6-0bb8cf94991a succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3921.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3921.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3921.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3921.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 20 21:25:47.365: INFO: DNS probes using dns-test-ff2d28f0-19e5-4c1a-8dc0-06e06191c497 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:25:47.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3921" for this suite. • [SLOW TEST:44.015 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":61,"skipped":921,"failed":0} SSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:25:47.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 20 21:25:47.488: INFO: Creating deployment "test-recreate-deployment" Mar 20 21:25:47.492: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Mar 20 21:25:47.715: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Mar 20 21:25:49.781: INFO: Waiting deployment "test-recreate-deployment" to complete Mar 20 21:25:49.783: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720336347, loc:(*time.Location)(0x7d83a80)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720336347, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720336347, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720336347, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 20 21:25:51.788: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 20 21:25:51.794: INFO: Updating deployment test-recreate-deployment Mar 20 21:25:51.794: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 20 21:25:52.063: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-95 /apis/apps/v1/namespaces/deployment-95/deployments/test-recreate-deployment 97450b57-0193-4eb5-83d2-fede8dd1a475 1380282 2 2020-03-20 21:25:47 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003b78778 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-20 21:25:51 +0000 UTC,LastTransitionTime:2020-03-20 21:25:51 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-03-20 21:25:51 +0000 UTC,LastTransitionTime:2020-03-20 21:25:47 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Mar 20 21:25:52.067: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-95 /apis/apps/v1/namespaces/deployment-95/replicasets/test-recreate-deployment-5f94c574ff a868e71c-fb76-443b-866c-7355b2fd3bd6 1380280 1 2020-03-20 21:25:51 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 
deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 97450b57-0193-4eb5-83d2-fede8dd1a475 0xc003b78b07 0xc003b78b08}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003b78b68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 20 21:25:52.067: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 20 21:25:52.067: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-95 /apis/apps/v1/namespaces/deployment-95/replicasets/test-recreate-deployment-799c574856 88d1a956-c3e3-4c40-941b-34c7abf67d79 1380270 2 2020-03-20 21:25:47 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 97450b57-0193-4eb5-83d2-fede8dd1a475 0xc003b78be7 0xc003b78be8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003b78c58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 20 21:25:52.239: INFO: Pod "test-recreate-deployment-5f94c574ff-s7jhp" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-s7jhp test-recreate-deployment-5f94c574ff- deployment-95 /api/v1/namespaces/deployment-95/pods/test-recreate-deployment-5f94c574ff-s7jhp dba9236d-0070-4d58-80a6-4e10c82a05e9 1380283 0 2020-03-20 21:25:51 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 
a868e71c-fb76-443b-866c-7355b2fd3bd6 0xc003a27267 0xc003a27268}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4vpch,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4vpch,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4vpch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:25:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:25:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:25:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:25:51 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-20 21:25:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:25:52.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-95" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":62,"skipped":925,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:25:52.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 20 21:25:52.456: INFO: Waiting up to 5m0s for pod "downwardapi-volume-940f6e88-ab3a-4b6f-89b8-3179d9da0d0b" in namespace "projected-2946" to be "success or failure" Mar 20 21:25:52.508: INFO: Pod "downwardapi-volume-940f6e88-ab3a-4b6f-89b8-3179d9da0d0b": Phase="Pending", Reason="", readiness=false. Elapsed: 52.446419ms Mar 20 21:25:54.525: INFO: Pod "downwardapi-volume-940f6e88-ab3a-4b6f-89b8-3179d9da0d0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069165718s Mar 20 21:25:56.530: INFO: Pod "downwardapi-volume-940f6e88-ab3a-4b6f-89b8-3179d9da0d0b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.073816484s STEP: Saw pod success Mar 20 21:25:56.530: INFO: Pod "downwardapi-volume-940f6e88-ab3a-4b6f-89b8-3179d9da0d0b" satisfied condition "success or failure" Mar 20 21:25:56.532: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-940f6e88-ab3a-4b6f-89b8-3179d9da0d0b container client-container: STEP: delete the pod Mar 20 21:25:56.556: INFO: Waiting for pod downwardapi-volume-940f6e88-ab3a-4b6f-89b8-3179d9da0d0b to disappear Mar 20 21:25:56.578: INFO: Pod downwardapi-volume-940f6e88-ab3a-4b6f-89b8-3179d9da0d0b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:25:56.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2946" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":945,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:25:56.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 20 21:25:56.683: INFO: Waiting up to 5m0s for pod "pod-30007ef9-2062-4f84-94cb-feb2262c21c5" in namespace "emptydir-8894" to be "success or failure" Mar 20 21:25:56.692: INFO: Pod "pod-30007ef9-2062-4f84-94cb-feb2262c21c5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.432622ms Mar 20 21:25:58.696: INFO: Pod "pod-30007ef9-2062-4f84-94cb-feb2262c21c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013094916s Mar 20 21:26:00.699: INFO: Pod "pod-30007ef9-2062-4f84-94cb-feb2262c21c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015954793s STEP: Saw pod success Mar 20 21:26:00.699: INFO: Pod "pod-30007ef9-2062-4f84-94cb-feb2262c21c5" satisfied condition "success or failure" Mar 20 21:26:00.705: INFO: Trying to get logs from node jerma-worker2 pod pod-30007ef9-2062-4f84-94cb-feb2262c21c5 container test-container: STEP: delete the pod Mar 20 21:26:00.739: INFO: Waiting for pod pod-30007ef9-2062-4f84-94cb-feb2262c21c5 to disappear Mar 20 21:26:00.744: INFO: Pod pod-30007ef9-2062-4f84-94cb-feb2262c21c5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:26:00.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8894" for this suite. 
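The emptydir case above exercises a memory-backed (tmpfs) emptyDir mounted into a non-root container. A minimal hand-run sketch of the same setup, assuming a reachable cluster; the pod name, image, command, and mount path here are illustrative, not the suite's generated ones:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo      # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000              # non-root, matching the (non-root,0777,tmpfs) variant
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /mnt/tmpfs && mount | grep /mnt/tmpfs"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/tmpfs
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory             # tmpfs-backed emptyDir
EOF
kubectl logs emptydir-tmpfs-demo   # inspect the directory mode and mount flags once the pod completes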
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":953,"failed":0} S ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:26:00.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Mar 20 21:26:00.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6199' Mar 20 21:26:03.526: INFO: stderr: "" Mar 20 21:26:03.526: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 20 21:26:03.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6199' Mar 20 21:26:03.637: INFO: stderr: "" Mar 20 21:26:03.637: INFO: stdout: "update-demo-nautilus-8crw2 update-demo-nautilus-fv7s7 " Mar 20 21:26:03.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8crw2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6199' Mar 20 21:26:03.730: INFO: stderr: "" Mar 20 21:26:03.730: INFO: stdout: "" Mar 20 21:26:03.730: INFO: update-demo-nautilus-8crw2 is created but not running Mar 20 21:26:08.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6199' Mar 20 21:26:08.823: INFO: stderr: "" Mar 20 21:26:08.823: INFO: stdout: "update-demo-nautilus-8crw2 update-demo-nautilus-fv7s7 " Mar 20 21:26:08.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8crw2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6199' Mar 20 21:26:08.927: INFO: stderr: "" Mar 20 21:26:08.927: INFO: stdout: "true" Mar 20 21:26:08.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8crw2 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6199' Mar 20 21:26:09.031: INFO: stderr: "" Mar 20 21:26:09.031: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 20 21:26:09.031: INFO: validating pod update-demo-nautilus-8crw2 Mar 20 21:26:09.035: INFO: got data: { "image": "nautilus.jpg" } Mar 20 21:26:09.035: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 20 21:26:09.035: INFO: update-demo-nautilus-8crw2 is verified up and running Mar 20 21:26:09.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fv7s7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6199' Mar 20 21:26:09.123: INFO: stderr: "" Mar 20 21:26:09.123: INFO: stdout: "true" Mar 20 21:26:09.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fv7s7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6199' Mar 20 21:26:09.208: INFO: stderr: "" Mar 20 21:26:09.208: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 20 21:26:09.208: INFO: validating pod update-demo-nautilus-fv7s7 Mar 20 21:26:09.212: INFO: got data: { "image": "nautilus.jpg" } Mar 20 21:26:09.212: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 20 21:26:09.212: INFO: update-demo-nautilus-fv7s7 is verified up and running STEP: using delete to clean up resources Mar 20 21:26:09.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6199' Mar 20 21:26:09.300: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 20 21:26:09.300: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 20 21:26:09.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6199' Mar 20 21:26:09.400: INFO: stderr: "No resources found in kubectl-6199 namespace.\n" Mar 20 21:26:09.400: INFO: stdout: "" Mar 20 21:26:09.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6199 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 20 21:26:09.492: INFO: stderr: "" Mar 20 21:26:09.492: INFO: stdout: "update-demo-nautilus-8crw2\nupdate-demo-nautilus-fv7s7\n" Mar 20 21:26:09.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6199' Mar 20 21:26:10.089: INFO: stderr: "No resources found in kubectl-6199 namespace.\n" Mar 20 21:26:10.089: INFO: stdout: "" Mar 20 21:26:10.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6199 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 20 21:26:10.193: INFO: stderr: "" Mar 20 21:26:10.193: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:26:10.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6199" for this suite. • [SLOW TEST:9.450 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:328 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":65,"skipped":954,"failed":0} [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:26:10.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 20 21:26:10.394: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 20 21:26:10.405: INFO: Waiting for terminating namespaces to be deleted... 
Mar 20 21:26:10.408: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Mar 20 21:26:10.412: INFO: update-demo-nautilus-8crw2 from kubectl-6199 started at 2020-03-20 21:26:03 +0000 UTC (1 container status recorded) Mar 20 21:26:10.412: INFO: Container update-demo ready: true, restart count 0 Mar 20 21:26:10.412: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 20 21:26:10.412: INFO: Container kindnet-cni ready: true, restart count 0 Mar 20 21:26:10.412: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 20 21:26:10.412: INFO: Container kube-proxy ready: true, restart count 0 Mar 20 21:26:10.412: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Mar 20 21:26:10.415: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 20 21:26:10.415: INFO: Container kindnet-cni ready: true, restart count 0 Mar 20 21:26:10.415: INFO: update-demo-nautilus-fv7s7 from kubectl-6199 started at 2020-03-20 21:26:03 +0000 UTC (1 container status recorded) Mar 20 21:26:10.415: INFO: Container update-demo ready: true, restart count 0 Mar 20 21:26:10.415: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 20 21:26:10.415: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-3782130d-e8a9-4b96-b0b4-9e8815291bf4 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-3782130d-e8a9-4b96-b0b4-9e8815291bf4 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-3782130d-e8a9-4b96-b0b4-9e8815291bf4 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:26:18.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6162" for this suite.
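Reduced to kubectl, the NodeSelector predicate check is: label a node with a unique key/value, then create a pod whose nodeSelector requires that label and confirm it lands on that node. A sketch using an illustrative label key (the suite generates a random kubernetes.io/e2e-<uuid> key with value 42, as logged above):

kubectl label node jerma-worker example.com/e2e-demo=42
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo        # illustrative name
spec:
  nodeSelector:
    example.com/e2e-demo: "42"   # must match the node label applied above
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
kubectl get pod nodeselector-demo -o wide                # NODE column should show jerma-worker
kubectl label node jerma-worker example.com/e2e-demo-    # remove the label afterwards, as the test does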
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:8.349 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":66,"skipped":954,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:26:18.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 20 21:26:19.332: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 20 21:26:21.419: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720336379, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720336379, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720336379, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720336379, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 20 21:26:24.448: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API Mar 20 21:26:24.652: INFO: Waiting for webhook configuration to be ready... 
STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one, which should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one, which should be rejected by the webhook STEP: create a namespace that bypasses the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:26:34.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-674" for this suite. STEP: Destroying namespace "webhook-674-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.395 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":67,"skipped":971,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:26:34.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-80ff54b5-5ecf-4a99-b9a3-e02ea3a4f20b STEP: Creating a pod to test consume configMaps Mar 20 21:26:35.018: INFO: Waiting up to 5m0s for pod "pod-configmaps-eecf5a04-7d78-4882-8a86-18de9ab068d3" in namespace "configmap-2322" to be "success or failure" Mar 20 21:26:35.059: INFO: Pod "pod-configmaps-eecf5a04-7d78-4882-8a86-18de9ab068d3": Phase="Pending", Reason="", readiness=false. Elapsed: 41.137484ms Mar 20 21:26:37.063: INFO: Pod "pod-configmaps-eecf5a04-7d78-4882-8a86-18de9ab068d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045233321s Mar 20 21:26:39.068: INFO: Pod "pod-configmaps-eecf5a04-7d78-4882-8a86-18de9ab068d3": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.049679472s STEP: Saw pod success Mar 20 21:26:39.068: INFO: Pod "pod-configmaps-eecf5a04-7d78-4882-8a86-18de9ab068d3" satisfied condition "success or failure" Mar 20 21:26:39.071: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-eecf5a04-7d78-4882-8a86-18de9ab068d3 container configmap-volume-test: STEP: delete the pod Mar 20 21:26:39.088: INFO: Waiting for pod pod-configmaps-eecf5a04-7d78-4882-8a86-18de9ab068d3 to disappear Mar 20 21:26:39.093: INFO: Pod pod-configmaps-eecf5a04-7d78-4882-8a86-18de9ab068d3 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:26:39.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2322" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":980,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:26:39.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-ae5f6e52-fb87-489f-9c50-885b750d8f42 in namespace container-probe-3327 Mar 20 21:26:43.169: INFO: Started pod busybox-ae5f6e52-fb87-489f-9c50-885b750d8f42 in namespace container-probe-3327 STEP: checking the pod's current state and verifying that restartCount is present Mar 20 21:26:43.172: INFO: Initial restart count of pod busybox-ae5f6e52-fb87-489f-9c50-885b750d8f42 is 0 Mar 20 21:27:31.305: INFO: Restart count of pod container-probe-3327/busybox-ae5f6e52-fb87-489f-9c50-885b750d8f42 is now 1 (48.132477276s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:27:31.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3327" for this suite. 
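The single restart recorded above is exactly what an exec liveness probe produces once its target file disappears. A minimal sketch with illustrative names and timings (the suite's busybox pod uses its own schedule):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo       # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # starts failing after /tmp/health is removed
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
kubectl get pod liveness-exec-demo -w      # RESTARTS should go from 0 to 1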
• [SLOW TEST:52.301 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":69,"skipped":996,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:27:31.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 20 21:27:31.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Mar 20 21:27:32.350: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-20T21:27:32Z generation:1 name:name1 resourceVersion:1380925 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:4144c679-ce22-4fad-8397-1f0ff6810afc] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Mar 20 21:27:42.355: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-20T21:27:42Z generation:1 name:name2 resourceVersion:1380971 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:24b66dfc-ecc4-4fcf-8afe-2158a467ce83] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Mar 20 21:27:52.359: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-20T21:27:32Z generation:2 name:name1 resourceVersion:1381001 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:4144c679-ce22-4fad-8397-1f0ff6810afc] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Mar 20 21:28:02.364: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-20T21:27:42Z generation:2 name:name2 resourceVersion:1381029 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:24b66dfc-ecc4-4fcf-8afe-2158a467ce83] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Mar 20 21:28:12.372: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-20T21:27:32Z generation:2 name:name1 resourceVersion:1381058 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:4144c679-ce22-4fad-8397-1f0ff6810afc] num:map[num1:9223372036854775807 
num2:1000000]]} STEP: Deleting second CR Mar 20 21:28:22.380: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-20T21:27:42Z generation:2 name:name2 resourceVersion:1381088 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:24b66dfc-ecc4-4fcf-8afe-2158a467ce83] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:28:32.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-9752" for this suite. • [SLOW TEST:61.559 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":70,"skipped":1001,"failed":0} SSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:28:32.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 20 21:28:33.036: INFO: (0) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/ pods/ (200; 17.548252ms)
Mar 20 21:28:33.039: INFO: (1) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.424682ms)
Mar 20 21:28:33.043: INFO: (2) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.494906ms)
Mar 20 21:28:33.046: INFO: (3) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.036463ms)
Mar 20 21:28:33.049: INFO: (4) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.541502ms)
Mar 20 21:28:33.053: INFO: (5) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.585877ms)
Mar 20 21:28:33.056: INFO: (6) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.127191ms)
Mar 20 21:28:33.059: INFO: (7) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.081953ms)
Mar 20 21:28:33.080: INFO: (8) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 20.221821ms)
Mar 20 21:28:33.083: INFO: (9) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.547644ms)
Mar 20 21:28:33.087: INFO: (10) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.270375ms)
Mar 20 21:28:33.090: INFO: (11) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.52264ms)
Mar 20 21:28:33.093: INFO: (12) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.236004ms)
Mar 20 21:28:33.097: INFO: (13) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.638645ms)
Mar 20 21:28:33.101: INFO: (14) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.766499ms)
Mar 20 21:28:33.105: INFO: (15) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.671593ms)
Mar 20 21:28:33.108: INFO: (16) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.533983ms)
Mar 20 21:28:33.112: INFO: (17) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.708966ms)
Mar 20 21:28:33.115: INFO: (18) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.437812ms)
Mar 20 21:28:33.119: INFO: (19) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/
(200; 3.306107ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:28:33.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-3347" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":71,"skipped":1008,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:28:33.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 20 21:28:33.196: INFO: Waiting up to 5m0s for pod "busybox-user-65534-ff49d5e2-1bcf-4f58-b022-465cd2b35250" in namespace "security-context-test-2541" to be "success or failure" Mar 20 21:28:33.204: INFO: Pod "busybox-user-65534-ff49d5e2-1bcf-4f58-b022-465cd2b35250": Phase="Pending", Reason="", readiness=false. Elapsed: 7.992521ms Mar 20 21:28:35.208: INFO: Pod "busybox-user-65534-ff49d5e2-1bcf-4f58-b022-465cd2b35250": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011965462s Mar 20 21:28:37.212: INFO: Pod "busybox-user-65534-ff49d5e2-1bcf-4f58-b022-465cd2b35250": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016694396s Mar 20 21:28:37.212: INFO: Pod "busybox-user-65534-ff49d5e2-1bcf-4f58-b022-465cd2b35250" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:28:37.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2541" for this suite. 
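The runAsUser check above asserts that the container's effective UID equals the runAsUser set in its securityContext. A hand-run sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: runasuser-demo           # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["id", "-u"]        # prints the effective UID
    securityContext:
      runAsUser: 65534
EOF
kubectl logs runasuser-demo      # expected output: 65534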
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1057,"failed":0} SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:28:37.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-3364 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 20 21:28:37.289: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 20 21:28:57.393: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.199 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3364 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 20 21:28:57.393: INFO: >>> kubeConfig: /root/.kube/config I0320 21:28:57.431336 7 log.go:172] (0xc0042784d0) (0xc0012d2be0) Create stream I0320 21:28:57.431378 7 log.go:172] (0xc0042784d0) (0xc0012d2be0) Stream added, broadcasting: 1 I0320 21:28:57.433617 7 log.go:172] (0xc0042784d0) Reply frame received for 1 I0320 21:28:57.433644 7 log.go:172] (0xc0042784d0) (0xc001eb8780) Create stream I0320 21:28:57.433654 7 log.go:172] (0xc0042784d0) (0xc001eb8780) Stream added, broadcasting: 3 I0320 21:28:57.434468 7 log.go:172] (0xc0042784d0) Reply frame received for 3 I0320 21:28:57.434500 7 log.go:172] (0xc0042784d0) (0xc001792460) Create stream I0320 21:28:57.434510 7 log.go:172] (0xc0042784d0) (0xc001792460) Stream added, broadcasting: 5 I0320 21:28:57.435216 7 log.go:172] (0xc0042784d0) Reply frame received for 5 I0320 21:28:58.488744 7 log.go:172] (0xc0042784d0) Data frame received for 3 I0320 21:28:58.488791 7 log.go:172] (0xc001eb8780) (3) Data frame handling I0320 21:28:58.488825 7 log.go:172] (0xc0042784d0) Data frame received for 5 I0320 21:28:58.488867 7 log.go:172] (0xc001792460) (5) Data frame handling I0320 21:28:58.488895 7 log.go:172] (0xc001eb8780) (3) Data frame sent I0320 21:28:58.489257 7 log.go:172] (0xc0042784d0) Data frame received for 3 I0320 21:28:58.489282 7 log.go:172] (0xc001eb8780) (3) Data frame handling I0320 21:28:58.490948 7 log.go:172] (0xc0042784d0) Data frame received for 1 I0320 21:28:58.490968 7 log.go:172] (0xc0012d2be0) (1) Data frame handling I0320 21:28:58.490976 7 log.go:172] (0xc0012d2be0) (1) Data frame sent I0320 21:28:58.490988 7 log.go:172] (0xc0042784d0) (0xc0012d2be0) Stream removed, broadcasting: 1 I0320 21:28:58.491009 7 log.go:172] (0xc0042784d0) Go away received I0320 21:28:58.491167 7 log.go:172] (0xc0042784d0) (0xc0012d2be0) Stream removed, broadcasting: 1 I0320 21:28:58.491193 7 log.go:172] (0xc0042784d0) (0xc001eb8780) Stream removed, broadcasting: 3 I0320 
21:28:58.491203 7 log.go:172] (0xc0042784d0) (0xc001792460) Stream removed, broadcasting: 5 Mar 20 21:28:58.491: INFO: Found all expected endpoints: [netserver-0] Mar 20 21:28:58.505: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.213 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3364 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 20 21:28:58.505: INFO: >>> kubeConfig: /root/.kube/config I0320 21:28:58.539036 7 log.go:172] (0xc002a18dc0) (0xc001792960) Create stream I0320 21:28:58.539068 7 log.go:172] (0xc002a18dc0) (0xc001792960) Stream added, broadcasting: 1 I0320 21:28:58.541534 7 log.go:172] (0xc002a18dc0) Reply frame received for 1 I0320 21:28:58.541585 7 log.go:172] (0xc002a18dc0) (0xc001792a00) Create stream I0320 21:28:58.541610 7 log.go:172] (0xc002a18dc0) (0xc001792a00) Stream added, broadcasting: 3 I0320 21:28:58.542691 7 log.go:172] (0xc002a18dc0) Reply frame received for 3 I0320 21:28:58.542732 7 log.go:172] (0xc002a18dc0) (0xc0017e41e0) Create stream I0320 21:28:58.542748 7 log.go:172] (0xc002a18dc0) (0xc0017e41e0) Stream added, broadcasting: 5 I0320 21:28:58.543807 7 log.go:172] (0xc002a18dc0) Reply frame received for 5 I0320 21:28:59.621976 7 log.go:172] (0xc002a18dc0) Data frame received for 5 I0320 21:28:59.622033 7 log.go:172] (0xc0017e41e0) (5) Data frame handling I0320 21:28:59.622061 7 log.go:172] (0xc002a18dc0) Data frame received for 3 I0320 21:28:59.622076 7 log.go:172] (0xc001792a00) (3) Data frame handling I0320 21:28:59.622103 7 log.go:172] (0xc001792a00) (3) Data frame sent I0320 21:28:59.622249 7 log.go:172] (0xc002a18dc0) Data frame received for 3 I0320 21:28:59.622280 7 log.go:172] (0xc001792a00) (3) Data frame handling I0320 21:28:59.624224 7 log.go:172] (0xc002a18dc0) Data frame received for 1 I0320 21:28:59.624250 7 log.go:172] (0xc001792960) (1) Data frame handling I0320 21:28:59.624263 7 log.go:172] (0xc001792960) (1) Data frame sent I0320 21:28:59.624284 7 log.go:172] (0xc002a18dc0) (0xc001792960) Stream removed, broadcasting: 1 I0320 21:28:59.624308 7 log.go:172] (0xc002a18dc0) Go away received I0320 21:28:59.624407 7 log.go:172] (0xc002a18dc0) (0xc001792960) Stream removed, broadcasting: 1 I0320 21:28:59.624437 7 log.go:172] (0xc002a18dc0) (0xc001792a00) Stream removed, broadcasting: 3 I0320 21:28:59.624460 7 log.go:172] (0xc002a18dc0) (0xc0017e41e0) Stream removed, broadcasting: 5 Mar 20 21:28:59.624: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:28:59.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3364" for this suite. 
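The UDP connectivity check is the nc probe visible in the ExecWithOptions lines above: from the host-network test pod, send a datagram to each netserver pod IP on port 8081 and expect a non-empty reply. Re-issued by hand (the namespace, pod name, container, and pod IP below are specific to this run):

kubectl exec -n pod-network-test-3364 host-test-container-pod -c agnhost -- /bin/sh -c 'echo hostName | nc -w 1 -u 10.244.1.199 8081 | grep -v "^\s*$"'

A non-empty response (the netserver's hostname) is what lets the test report "Found all expected endpoints".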
• [SLOW TEST:22.412 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1059,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:28:59.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:29:03.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8915" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1106,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:29:03.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 20 21:29:11.942: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 20 21:29:11.967: INFO: Pod pod-with-poststart-http-hook still exists Mar 20 21:29:13.967: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 20 21:29:13.971: INFO: Pod pod-with-poststart-http-hook still exists Mar 20 21:29:15.968: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 20 21:29:15.972: INFO: Pod pod-with-poststart-http-hook still exists Mar 20 21:29:17.968: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 20 21:29:17.972: INFO: Pod pod-with-poststart-http-hook still exists Mar 20 21:29:19.967: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 20 21:29:19.971: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:29:19.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-512" for this suite. • [SLOW TEST:16.255 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1113,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:29:19.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:29:20.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1159" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":278,"completed":76,"skipped":1152,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:29:20.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server Mar 20 21:29:20.230: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:29:20.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6016" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":77,"skipped":1169,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:29:20.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Mar 20 21:29:27.313: INFO: 0 pods remaining Mar 20 21:29:27.313: INFO: 0 pods has nil DeletionTimestamp Mar 20 21:29:27.313: INFO: STEP: Gathering metrics W0320 21:29:28.246925 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 20 21:29:28.246: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:29:28.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8265" for this suite. • [SLOW TEST:8.228 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":78,"skipped":1175,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:29:28.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 20 21:29:30.344: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 20 21:29:32.353: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720336570, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720336570, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720336570, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720336570, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 20 21:29:35.388: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:29:35.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9472" for this suite. STEP: Destroying namespace "webhook-9472-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.019 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":79,"skipped":1175,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:29:35.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-7237 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-7237 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7237 Mar 20 21:29:35.703: INFO: Found 0 stateful pods, waiting for 1 Mar 20 21:29:45.711: INFO: 
Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 20 21:29:45.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 20 21:29:45.971: INFO: stderr: "I0320 21:29:45.831913 579 log.go:172] (0xc000a7f080) (0xc000a5a460) Create stream\nI0320 21:29:45.831965 579 log.go:172] (0xc000a7f080) (0xc000a5a460) Stream added, broadcasting: 1\nI0320 21:29:45.836206 579 log.go:172] (0xc000a7f080) Reply frame received for 1\nI0320 21:29:45.836251 579 log.go:172] (0xc000a7f080) (0xc0007106e0) Create stream\nI0320 21:29:45.836264 579 log.go:172] (0xc000a7f080) (0xc0007106e0) Stream added, broadcasting: 3\nI0320 21:29:45.837101 579 log.go:172] (0xc000a7f080) Reply frame received for 3\nI0320 21:29:45.837249 579 log.go:172] (0xc000a7f080) (0xc0005414a0) Create stream\nI0320 21:29:45.837261 579 log.go:172] (0xc000a7f080) (0xc0005414a0) Stream added, broadcasting: 5\nI0320 21:29:45.838154 579 log.go:172] (0xc000a7f080) Reply frame received for 5\nI0320 21:29:45.890455 579 log.go:172] (0xc000a7f080) Data frame received for 5\nI0320 21:29:45.890483 579 log.go:172] (0xc0005414a0) (5) Data frame handling\nI0320 21:29:45.890498 579 log.go:172] (0xc0005414a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0320 21:29:45.964173 579 log.go:172] (0xc000a7f080) Data frame received for 3\nI0320 21:29:45.964226 579 log.go:172] (0xc0007106e0) (3) Data frame handling\nI0320 21:29:45.964252 579 log.go:172] (0xc0007106e0) (3) Data frame sent\nI0320 21:29:45.964341 579 log.go:172] (0xc000a7f080) Data frame received for 5\nI0320 21:29:45.964377 579 log.go:172] (0xc0005414a0) (5) Data frame handling\nI0320 21:29:45.964532 579 log.go:172] (0xc000a7f080) Data frame received for 3\nI0320 21:29:45.964556 579 log.go:172] (0xc0007106e0) (3) Data frame handling\nI0320 21:29:45.966633 579 log.go:172] (0xc000a7f080) Data frame received for 1\nI0320 21:29:45.966664 579 log.go:172] (0xc000a5a460) (1) Data frame handling\nI0320 21:29:45.966686 579 log.go:172] (0xc000a5a460) (1) Data frame sent\nI0320 21:29:45.966712 579 log.go:172] (0xc000a7f080) (0xc000a5a460) Stream removed, broadcasting: 1\nI0320 21:29:45.966874 579 log.go:172] (0xc000a7f080) Go away received\nI0320 21:29:45.967134 579 log.go:172] (0xc000a7f080) (0xc000a5a460) Stream removed, broadcasting: 1\nI0320 21:29:45.967166 579 log.go:172] (0xc000a7f080) (0xc0007106e0) Stream removed, broadcasting: 3\nI0320 21:29:45.967186 579 log.go:172] (0xc000a7f080) (0xc0005414a0) Stream removed, broadcasting: 5\n" Mar 20 21:29:45.971: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 20 21:29:45.971: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 20 21:29:45.996: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 20 21:29:56.001: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 20 21:29:56.001: INFO: Waiting for statefulset status.replicas updated to 0 Mar 20 21:29:56.026: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999957s Mar 20 21:29:57.031: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.983721366s Mar 20 21:29:58.035: INFO: Verifying statefulset ss 
doesn't scale past 1 for another 7.979200075s Mar 20 21:29:59.040: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.974704411s Mar 20 21:30:00.044: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.970247008s Mar 20 21:30:01.048: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.965853542s Mar 20 21:30:02.053: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.961698882s Mar 20 21:30:03.057: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.95678582s Mar 20 21:30:04.061: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.952863345s Mar 20 21:30:05.064: INFO: Verifying statefulset ss doesn't scale past 1 for another 949.096579ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7237 Mar 20 21:30:06.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 20 21:30:06.568: INFO: stderr: "I0320 21:30:06.467417 599 log.go:172] (0xc0003c1130) (0xc0006e7b80) Create stream\nI0320 21:30:06.467467 599 log.go:172] (0xc0003c1130) (0xc0006e7b80) Stream added, broadcasting: 1\nI0320 21:30:06.469918 599 log.go:172] (0xc0003c1130) Reply frame received for 1\nI0320 21:30:06.469958 599 log.go:172] (0xc0003c1130) (0xc0006e7d60) Create stream\nI0320 21:30:06.469969 599 log.go:172] (0xc0003c1130) (0xc0006e7d60) Stream added, broadcasting: 3\nI0320 21:30:06.470871 599 log.go:172] (0xc0003c1130) Reply frame received for 3\nI0320 21:30:06.470920 599 log.go:172] (0xc0003c1130) (0xc0009be000) Create stream\nI0320 21:30:06.470932 599 log.go:172] (0xc0003c1130) (0xc0009be000) Stream added, broadcasting: 5\nI0320 21:30:06.471882 599 log.go:172] (0xc0003c1130) Reply frame received for 5\nI0320 21:30:06.562139 599 log.go:172] (0xc0003c1130) Data frame received for 3\nI0320 21:30:06.562183 599 log.go:172] (0xc0006e7d60) (3) Data frame handling\nI0320 21:30:06.562196 599 log.go:172] (0xc0006e7d60) (3) Data frame sent\nI0320 21:30:06.562208 599 log.go:172] (0xc0003c1130) Data frame received for 3\nI0320 21:30:06.562216 599 log.go:172] (0xc0006e7d60) (3) Data frame handling\nI0320 21:30:06.562254 599 log.go:172] (0xc0003c1130) Data frame received for 5\nI0320 21:30:06.562272 599 log.go:172] (0xc0009be000) (5) Data frame handling\nI0320 21:30:06.562280 599 log.go:172] (0xc0009be000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0320 21:30:06.562625 599 log.go:172] (0xc0003c1130) Data frame received for 5\nI0320 21:30:06.562649 599 log.go:172] (0xc0009be000) (5) Data frame handling\nI0320 21:30:06.563699 599 log.go:172] (0xc0003c1130) Data frame received for 1\nI0320 21:30:06.563737 599 log.go:172] (0xc0006e7b80) (1) Data frame handling\nI0320 21:30:06.563769 599 log.go:172] (0xc0006e7b80) (1) Data frame sent\nI0320 21:30:06.563867 599 log.go:172] (0xc0003c1130) (0xc0006e7b80) Stream removed, broadcasting: 1\nI0320 21:30:06.563928 599 log.go:172] (0xc0003c1130) Go away received\nI0320 21:30:06.564213 599 log.go:172] (0xc0003c1130) (0xc0006e7b80) Stream removed, broadcasting: 1\nI0320 21:30:06.564242 599 log.go:172] (0xc0003c1130) (0xc0006e7d60) Stream removed, broadcasting: 3\nI0320 21:30:06.564251 599 log.go:172] (0xc0003c1130) (0xc0009be000) Stream removed, broadcasting: 5\n" Mar 20 21:30:06.568: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 20 21:30:06.568: INFO: 
stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 20 21:30:06.572: INFO: Found 1 stateful pods, waiting for 3 Mar 20 21:30:16.577: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 20 21:30:16.577: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 20 21:30:16.577: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 20 21:30:16.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 20 21:30:16.799: INFO: stderr: "I0320 21:30:16.712301 620 log.go:172] (0xc0009b0790) (0xc0006adae0) Create stream\nI0320 21:30:16.712353 620 log.go:172] (0xc0009b0790) (0xc0006adae0) Stream added, broadcasting: 1\nI0320 21:30:16.714804 620 log.go:172] (0xc0009b0790) Reply frame received for 1\nI0320 21:30:16.714846 620 log.go:172] (0xc0009b0790) (0xc000918000) Create stream\nI0320 21:30:16.714862 620 log.go:172] (0xc0009b0790) (0xc000918000) Stream added, broadcasting: 3\nI0320 21:30:16.715849 620 log.go:172] (0xc0009b0790) Reply frame received for 3\nI0320 21:30:16.715899 620 log.go:172] (0xc0009b0790) (0xc000230000) Create stream\nI0320 21:30:16.715916 620 log.go:172] (0xc0009b0790) (0xc000230000) Stream added, broadcasting: 5\nI0320 21:30:16.716679 620 log.go:172] (0xc0009b0790) Reply frame received for 5\nI0320 21:30:16.792824 620 log.go:172] (0xc0009b0790) Data frame received for 5\nI0320 21:30:16.792880 620 log.go:172] (0xc000230000) (5) Data frame handling\nI0320 21:30:16.792918 620 log.go:172] (0xc000230000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0320 21:30:16.792957 620 log.go:172] (0xc0009b0790) Data frame received for 5\nI0320 21:30:16.792977 620 log.go:172] (0xc000230000) (5) Data frame handling\nI0320 21:30:16.793018 620 log.go:172] (0xc0009b0790) Data frame received for 3\nI0320 21:30:16.793048 620 log.go:172] (0xc000918000) (3) Data frame handling\nI0320 21:30:16.793075 620 log.go:172] (0xc000918000) (3) Data frame sent\nI0320 21:30:16.793090 620 log.go:172] (0xc0009b0790) Data frame received for 3\nI0320 21:30:16.793102 620 log.go:172] (0xc000918000) (3) Data frame handling\nI0320 21:30:16.794935 620 log.go:172] (0xc0009b0790) Data frame received for 1\nI0320 21:30:16.794959 620 log.go:172] (0xc0006adae0) (1) Data frame handling\nI0320 21:30:16.794974 620 log.go:172] (0xc0006adae0) (1) Data frame sent\nI0320 21:30:16.794993 620 log.go:172] (0xc0009b0790) (0xc0006adae0) Stream removed, broadcasting: 1\nI0320 21:30:16.795051 620 log.go:172] (0xc0009b0790) Go away received\nI0320 21:30:16.795410 620 log.go:172] (0xc0009b0790) (0xc0006adae0) Stream removed, broadcasting: 1\nI0320 21:30:16.795436 620 log.go:172] (0xc0009b0790) (0xc000918000) Stream removed, broadcasting: 3\nI0320 21:30:16.795456 620 log.go:172] (0xc0009b0790) (0xc000230000) Stream removed, broadcasting: 5\n" Mar 20 21:30:16.800: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 20 21:30:16.800: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 20 21:30:16.800: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 20 21:30:17.037: INFO: stderr: "I0320 21:30:16.955673 642 log.go:172] (0xc000a2e000) (0xc000bd6000) Create stream\nI0320 21:30:16.955751 642 log.go:172] (0xc000a2e000) (0xc000bd6000) Stream added, broadcasting: 1\nI0320 21:30:16.958748 642 log.go:172] (0xc000a2e000) Reply frame received for 1\nI0320 21:30:16.958800 642 log.go:172] (0xc000a2e000) (0xc000246000) Create stream\nI0320 21:30:16.958816 642 log.go:172] (0xc000a2e000) (0xc000246000) Stream added, broadcasting: 3\nI0320 21:30:16.959781 642 log.go:172] (0xc000a2e000) Reply frame received for 3\nI0320 21:30:16.959818 642 log.go:172] (0xc000a2e000) (0xc000bd60a0) Create stream\nI0320 21:30:16.959830 642 log.go:172] (0xc000a2e000) (0xc000bd60a0) Stream added, broadcasting: 5\nI0320 21:30:16.960674 642 log.go:172] (0xc000a2e000) Reply frame received for 5\nI0320 21:30:17.004564 642 log.go:172] (0xc000a2e000) Data frame received for 5\nI0320 21:30:17.004602 642 log.go:172] (0xc000bd60a0) (5) Data frame handling\nI0320 21:30:17.004631 642 log.go:172] (0xc000bd60a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0320 21:30:17.029954 642 log.go:172] (0xc000a2e000) Data frame received for 3\nI0320 21:30:17.030062 642 log.go:172] (0xc000246000) (3) Data frame handling\nI0320 21:30:17.030159 642 log.go:172] (0xc000246000) (3) Data frame sent\nI0320 21:30:17.030194 642 log.go:172] (0xc000a2e000) Data frame received for 5\nI0320 21:30:17.030213 642 log.go:172] (0xc000bd60a0) (5) Data frame handling\nI0320 21:30:17.030481 642 log.go:172] (0xc000a2e000) Data frame received for 3\nI0320 21:30:17.030499 642 log.go:172] (0xc000246000) (3) Data frame handling\nI0320 21:30:17.032626 642 log.go:172] (0xc000a2e000) Data frame received for 1\nI0320 21:30:17.032639 642 log.go:172] (0xc000bd6000) (1) Data frame handling\nI0320 21:30:17.032657 642 log.go:172] (0xc000bd6000) (1) Data frame sent\nI0320 21:30:17.032766 642 log.go:172] (0xc000a2e000) (0xc000bd6000) Stream removed, broadcasting: 1\nI0320 21:30:17.032982 642 log.go:172] (0xc000a2e000) Go away received\nI0320 21:30:17.033387 642 log.go:172] (0xc000a2e000) (0xc000bd6000) Stream removed, broadcasting: 1\nI0320 21:30:17.033415 642 log.go:172] (0xc000a2e000) (0xc000246000) Stream removed, broadcasting: 3\nI0320 21:30:17.033435 642 log.go:172] (0xc000a2e000) (0xc000bd60a0) Stream removed, broadcasting: 5\n" Mar 20 21:30:17.038: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 20 21:30:17.038: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 20 21:30:17.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 20 21:30:17.296: INFO: stderr: "I0320 21:30:17.176787 663 log.go:172] (0xc000a2c630) (0xc000ade280) Create stream\nI0320 21:30:17.176863 663 log.go:172] (0xc000a2c630) (0xc000ade280) Stream added, broadcasting: 1\nI0320 21:30:17.180140 663 log.go:172] (0xc000a2c630) Reply frame received for 1\nI0320 21:30:17.180178 663 log.go:172] (0xc000a2c630) (0xc0009de000) Create stream\nI0320 21:30:17.180197 663 log.go:172] (0xc000a2c630) (0xc0009de000) Stream added, broadcasting: 3\nI0320 21:30:17.181798 663 log.go:172] (0xc000a2c630) Reply frame received for 3\nI0320 
21:30:17.181840 663 log.go:172] (0xc000a2c630) (0xc000a48280) Create stream\nI0320 21:30:17.181863 663 log.go:172] (0xc000a2c630) (0xc000a48280) Stream added, broadcasting: 5\nI0320 21:30:17.182825 663 log.go:172] (0xc000a2c630) Reply frame received for 5\nI0320 21:30:17.235659 663 log.go:172] (0xc000a2c630) Data frame received for 5\nI0320 21:30:17.235684 663 log.go:172] (0xc000a48280) (5) Data frame handling\nI0320 21:30:17.235710 663 log.go:172] (0xc000a48280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0320 21:30:17.290551 663 log.go:172] (0xc000a2c630) Data frame received for 3\nI0320 21:30:17.290574 663 log.go:172] (0xc0009de000) (3) Data frame handling\nI0320 21:30:17.290595 663 log.go:172] (0xc0009de000) (3) Data frame sent\nI0320 21:30:17.290604 663 log.go:172] (0xc000a2c630) Data frame received for 3\nI0320 21:30:17.290610 663 log.go:172] (0xc0009de000) (3) Data frame handling\nI0320 21:30:17.290719 663 log.go:172] (0xc000a2c630) Data frame received for 5\nI0320 21:30:17.290743 663 log.go:172] (0xc000a48280) (5) Data frame handling\nI0320 21:30:17.292501 663 log.go:172] (0xc000a2c630) Data frame received for 1\nI0320 21:30:17.292533 663 log.go:172] (0xc000ade280) (1) Data frame handling\nI0320 21:30:17.292565 663 log.go:172] (0xc000ade280) (1) Data frame sent\nI0320 21:30:17.292594 663 log.go:172] (0xc000a2c630) (0xc000ade280) Stream removed, broadcasting: 1\nI0320 21:30:17.292636 663 log.go:172] (0xc000a2c630) Go away received\nI0320 21:30:17.293045 663 log.go:172] (0xc000a2c630) (0xc000ade280) Stream removed, broadcasting: 1\nI0320 21:30:17.293057 663 log.go:172] (0xc000a2c630) (0xc0009de000) Stream removed, broadcasting: 3\nI0320 21:30:17.293063 663 log.go:172] (0xc000a2c630) (0xc000a48280) Stream removed, broadcasting: 5\n" Mar 20 21:30:17.296: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 20 21:30:17.296: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 20 21:30:17.296: INFO: Waiting for statefulset status.replicas updated to 0 Mar 20 21:30:17.302: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Mar 20 21:30:27.309: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 20 21:30:27.309: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 20 21:30:27.309: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 20 21:30:27.341: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999645s Mar 20 21:30:28.346: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.978695262s Mar 20 21:30:29.350: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.973983226s Mar 20 21:30:30.357: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.969749194s Mar 20 21:30:31.363: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.963044583s Mar 20 21:30:32.399: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.9569092s Mar 20 21:30:33.406: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.920747385s Mar 20 21:30:34.411: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.913843437s Mar 20 21:30:35.435: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.908891525s Mar 20 21:30:36.440: INFO: Verifying statefulset ss doesn't scale past 3 for another 884.868731ms 
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-7237 Mar 20 21:30:37.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 20 21:30:37.634: INFO: stderr: "I0320 21:30:37.578065 684 log.go:172] (0xc000628a50) (0xc000622000) Create stream\nI0320 21:30:37.578109 684 log.go:172] (0xc000628a50) (0xc000622000) Stream added, broadcasting: 1\nI0320 21:30:37.580291 684 log.go:172] (0xc000628a50) Reply frame received for 1\nI0320 21:30:37.580349 684 log.go:172] (0xc000628a50) (0xc00065dae0) Create stream\nI0320 21:30:37.580374 684 log.go:172] (0xc000628a50) (0xc00065dae0) Stream added, broadcasting: 3\nI0320 21:30:37.581494 684 log.go:172] (0xc000628a50) Reply frame received for 3\nI0320 21:30:37.581515 684 log.go:172] (0xc000628a50) (0xc000622140) Create stream\nI0320 21:30:37.581521 684 log.go:172] (0xc000628a50) (0xc000622140) Stream added, broadcasting: 5\nI0320 21:30:37.582311 684 log.go:172] (0xc000628a50) Reply frame received for 5\nI0320 21:30:37.628109 684 log.go:172] (0xc000628a50) Data frame received for 5\nI0320 21:30:37.628143 684 log.go:172] (0xc000622140) (5) Data frame handling\nI0320 21:30:37.628168 684 log.go:172] (0xc000622140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0320 21:30:37.628181 684 log.go:172] (0xc000628a50) Data frame received for 5\nI0320 21:30:37.628219 684 log.go:172] (0xc000622140) (5) Data frame handling\nI0320 21:30:37.628242 684 log.go:172] (0xc000628a50) Data frame received for 3\nI0320 21:30:37.628248 684 log.go:172] (0xc00065dae0) (3) Data frame handling\nI0320 21:30:37.628261 684 log.go:172] (0xc00065dae0) (3) Data frame sent\nI0320 21:30:37.628270 684 log.go:172] (0xc000628a50) Data frame received for 3\nI0320 21:30:37.628277 684 log.go:172] (0xc00065dae0) (3) Data frame handling\nI0320 21:30:37.629772 684 log.go:172] (0xc000628a50) Data frame received for 1\nI0320 21:30:37.629794 684 log.go:172] (0xc000622000) (1) Data frame handling\nI0320 21:30:37.629813 684 log.go:172] (0xc000622000) (1) Data frame sent\nI0320 21:30:37.629830 684 log.go:172] (0xc000628a50) (0xc000622000) Stream removed, broadcasting: 1\nI0320 21:30:37.629845 684 log.go:172] (0xc000628a50) Go away received\nI0320 21:30:37.630285 684 log.go:172] (0xc000628a50) (0xc000622000) Stream removed, broadcasting: 1\nI0320 21:30:37.630306 684 log.go:172] (0xc000628a50) (0xc00065dae0) Stream removed, broadcasting: 3\nI0320 21:30:37.630322 684 log.go:172] (0xc000628a50) (0xc000622140) Stream removed, broadcasting: 5\n" Mar 20 21:30:37.634: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 20 21:30:37.634: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 20 21:30:37.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 20 21:30:37.869: INFO: stderr: "I0320 21:30:37.793390 704 log.go:172] (0xc0000f4bb0) (0xc0006d5b80) Create stream\nI0320 21:30:37.793438 704 log.go:172] (0xc0000f4bb0) (0xc0006d5b80) Stream added, broadcasting: 1\nI0320 21:30:37.795934 704 log.go:172] (0xc0000f4bb0) Reply frame received for 1\nI0320 21:30:37.795980 704 log.go:172] (0xc0000f4bb0) (0xc0008d0000) Create stream\nI0320
21:30:37.795995 704 log.go:172] (0xc0000f4bb0) (0xc0008d0000) Stream added, broadcasting: 3\nI0320 21:30:37.797036 704 log.go:172] (0xc0000f4bb0) Reply frame received for 3\nI0320 21:30:37.797081 704 log.go:172] (0xc0000f4bb0) (0xc0006d5d60) Create stream\nI0320 21:30:37.797097 704 log.go:172] (0xc0000f4bb0) (0xc0006d5d60) Stream added, broadcasting: 5\nI0320 21:30:37.798302 704 log.go:172] (0xc0000f4bb0) Reply frame received for 5\nI0320 21:30:37.863851 704 log.go:172] (0xc0000f4bb0) Data frame received for 3\nI0320 21:30:37.863891 704 log.go:172] (0xc0008d0000) (3) Data frame handling\nI0320 21:30:37.863924 704 log.go:172] (0xc0008d0000) (3) Data frame sent\nI0320 21:30:37.863941 704 log.go:172] (0xc0000f4bb0) Data frame received for 3\nI0320 21:30:37.863954 704 log.go:172] (0xc0008d0000) (3) Data frame handling\nI0320 21:30:37.864012 704 log.go:172] (0xc0000f4bb0) Data frame received for 5\nI0320 21:30:37.864026 704 log.go:172] (0xc0006d5d60) (5) Data frame handling\nI0320 21:30:37.864044 704 log.go:172] (0xc0006d5d60) (5) Data frame sent\nI0320 21:30:37.864053 704 log.go:172] (0xc0000f4bb0) Data frame received for 5\nI0320 21:30:37.864062 704 log.go:172] (0xc0006d5d60) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0320 21:30:37.865703 704 log.go:172] (0xc0000f4bb0) Data frame received for 1\nI0320 21:30:37.865735 704 log.go:172] (0xc0006d5b80) (1) Data frame handling\nI0320 21:30:37.865761 704 log.go:172] (0xc0006d5b80) (1) Data frame sent\nI0320 21:30:37.865784 704 log.go:172] (0xc0000f4bb0) (0xc0006d5b80) Stream removed, broadcasting: 1\nI0320 21:30:37.865906 704 log.go:172] (0xc0000f4bb0) Go away received\nI0320 21:30:37.866255 704 log.go:172] (0xc0000f4bb0) (0xc0006d5b80) Stream removed, broadcasting: 1\nI0320 21:30:37.866284 704 log.go:172] (0xc0000f4bb0) (0xc0008d0000) Stream removed, broadcasting: 3\nI0320 21:30:37.866301 704 log.go:172] (0xc0000f4bb0) (0xc0006d5d60) Stream removed, broadcasting: 5\n" Mar 20 21:30:37.869: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 20 21:30:37.869: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 20 21:30:37.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 20 21:30:38.071: INFO: stderr: "I0320 21:30:37.998692 724 log.go:172] (0xc000505130) (0xc00067fea0) Create stream\nI0320 21:30:37.998751 724 log.go:172] (0xc000505130) (0xc00067fea0) Stream added, broadcasting: 1\nI0320 21:30:38.001694 724 log.go:172] (0xc000505130) Reply frame received for 1\nI0320 21:30:38.001748 724 log.go:172] (0xc000505130) (0xc0005da780) Create stream\nI0320 21:30:38.001763 724 log.go:172] (0xc000505130) (0xc0005da780) Stream added, broadcasting: 3\nI0320 21:30:38.002999 724 log.go:172] (0xc000505130) Reply frame received for 3\nI0320 21:30:38.003056 724 log.go:172] (0xc000505130) (0xc00067ff40) Create stream\nI0320 21:30:38.003075 724 log.go:172] (0xc000505130) (0xc00067ff40) Stream added, broadcasting: 5\nI0320 21:30:38.004138 724 log.go:172] (0xc000505130) Reply frame received for 5\nI0320 21:30:38.064516 724 log.go:172] (0xc000505130) Data frame received for 3\nI0320 21:30:38.064566 724 log.go:172] (0xc0005da780) (3) Data frame handling\nI0320 21:30:38.064606 724 log.go:172] (0xc0005da780) (3) Data frame sent\nI0320 21:30:38.064996 724 log.go:172] (0xc000505130) 
Data frame received for 5\nI0320 21:30:38.065032 724 log.go:172] (0xc00067ff40) (5) Data frame handling\nI0320 21:30:38.065056 724 log.go:172] (0xc00067ff40) (5) Data frame sent\nI0320 21:30:38.065080 724 log.go:172] (0xc000505130) Data frame received for 5\nI0320 21:30:38.065102 724 log.go:172] (0xc00067ff40) (5) Data frame handling\nI0320 21:30:38.065256 724 log.go:172] (0xc000505130) Data frame received for 3\nI0320 21:30:38.065279 724 log.go:172] (0xc0005da780) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0320 21:30:38.066696 724 log.go:172] (0xc000505130) Data frame received for 1\nI0320 21:30:38.066729 724 log.go:172] (0xc00067fea0) (1) Data frame handling\nI0320 21:30:38.066770 724 log.go:172] (0xc00067fea0) (1) Data frame sent\nI0320 21:30:38.066820 724 log.go:172] (0xc000505130) (0xc00067fea0) Stream removed, broadcasting: 1\nI0320 21:30:38.066871 724 log.go:172] (0xc000505130) Go away received\nI0320 21:30:38.067218 724 log.go:172] (0xc000505130) (0xc00067fea0) Stream removed, broadcasting: 1\nI0320 21:30:38.067244 724 log.go:172] (0xc000505130) (0xc0005da780) Stream removed, broadcasting: 3\nI0320 21:30:38.067254 724 log.go:172] (0xc000505130) (0xc00067ff40) Stream removed, broadcasting: 5\n" Mar 20 21:30:38.071: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 20 21:30:38.071: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 20 21:30:38.071: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 20 21:31:08.091: INFO: Deleting all statefulset in ns statefulset-7237 Mar 20 21:31:08.092: INFO: Scaling statefulset ss to 0 Mar 20 21:31:08.099: INFO: Waiting for statefulset status.replicas updated to 0 Mar 20 21:31:08.101: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:31:08.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7237" for this suite. 
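------------------------------
The scaling sequence above demonstrates the OrderedReady contract: with the default podManagementPolicy, the StatefulSet controller creates ordinal N+1 only after ordinal N is Running and Ready, and scales down from the highest ordinal first. The test forces unreadiness by moving index.html out of the httpd docroot so the readiness probe fails, then confirms the controller refuses to scale past the unhealthy pod. A minimal shell sketch of observing the same gating by hand, reusing the ss StatefulSet, namespace, and selector from the log:

# Watch ordinal-ordered creation: ss-1 appears only once ss-0 is Ready.
kubectl scale statefulset ss --replicas=3 --namespace=statefulset-7237
kubectl get pods -l 'baz=blah,foo=bar' --namespace=statefulset-7237 -w
# Scale-down removes the highest ordinal (ss-2) first.
kubectl scale statefulset ss --replicas=0 --namespace=statefulset-7237
------------------------------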
• [SLOW TEST:92.581 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":80,"skipped":1204,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:31:08.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:31:08.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2518" for this suite. 
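------------------------------
The QOS test above creates a pod whose container requests and limits match exactly for both memory and cpu, which the API server classifies as Guaranteed in status.qosClass. A minimal sketch with a hypothetical pod name; the image is one already present in this log, and the resource values are assumptions:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1
    resources:
      requests: {cpu: 100m, memory: 64Mi}
      limits: {cpu: 100m, memory: 64Mi}
EOF
kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # prints Guaranteed
------------------------------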
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":81,"skipped":1218,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:31:08.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 20 21:31:08.430: INFO: Waiting up to 5m0s for pod "pod-41a788ec-c081-458c-a0c9-59251ef942b5" in namespace "emptydir-8093" to be "success or failure" Mar 20 21:31:08.435: INFO: Pod "pod-41a788ec-c081-458c-a0c9-59251ef942b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.162552ms Mar 20 21:31:10.477: INFO: Pod "pod-41a788ec-c081-458c-a0c9-59251ef942b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046539699s Mar 20 21:31:12.480: INFO: Pod "pod-41a788ec-c081-458c-a0c9-59251ef942b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050090153s STEP: Saw pod success Mar 20 21:31:12.481: INFO: Pod "pod-41a788ec-c081-458c-a0c9-59251ef942b5" satisfied condition "success or failure" Mar 20 21:31:12.483: INFO: Trying to get logs from node jerma-worker pod pod-41a788ec-c081-458c-a0c9-59251ef942b5 container test-container: STEP: delete the pod Mar 20 21:31:12.599: INFO: Waiting for pod pod-41a788ec-c081-458c-a0c9-59251ef942b5 to disappear Mar 20 21:31:12.621: INFO: Pod pod-41a788ec-c081-458c-a0c9-59251ef942b5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:31:12.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8093" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1266,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:31:12.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 20 21:31:12.683: INFO: Waiting up to 5m0s for pod "pod-3431b158-097d-47b5-9b28-b431ad128e90" in namespace "emptydir-1975" to be "success or failure" Mar 20 21:31:12.693: INFO: Pod "pod-3431b158-097d-47b5-9b28-b431ad128e90": Phase="Pending", Reason="", readiness=false. Elapsed: 9.827588ms Mar 20 21:31:14.697: INFO: Pod "pod-3431b158-097d-47b5-9b28-b431ad128e90": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013240852s Mar 20 21:31:16.710: INFO: Pod "pod-3431b158-097d-47b5-9b28-b431ad128e90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02694838s STEP: Saw pod success Mar 20 21:31:16.710: INFO: Pod "pod-3431b158-097d-47b5-9b28-b431ad128e90" satisfied condition "success or failure" Mar 20 21:31:16.713: INFO: Trying to get logs from node jerma-worker pod pod-3431b158-097d-47b5-9b28-b431ad128e90 container test-container: STEP: delete the pod Mar 20 21:31:16.747: INFO: Waiting for pod pod-3431b158-097d-47b5-9b28-b431ad128e90 to disappear Mar 20 21:31:16.759: INFO: Pod pod-3431b158-097d-47b5-9b28-b431ad128e90 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:31:16.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1975" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1269,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:31:16.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 20 21:31:21.890: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:31:21.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7639" for this suite. • [SLOW TEST:5.245 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":84,"skipped":1290,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:31:22.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Mar 20 21:31:22.088: INFO: >>> kubeConfig: /root/.kube/config Mar 20 21:31:24.543: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:31:35.072: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4899" for this suite. • [SLOW TEST:13.066 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":85,"skipped":1327,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:31:35.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 20 21:31:35.167: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0a037d20-124e-48a2-8dae-9c3753cfc161" in namespace "downward-api-5709" to be "success or failure" Mar 20 21:31:35.173: INFO: Pod "downwardapi-volume-0a037d20-124e-48a2-8dae-9c3753cfc161": Phase="Pending", Reason="", readiness=false. Elapsed: 6.450265ms Mar 20 21:31:37.196: INFO: Pod "downwardapi-volume-0a037d20-124e-48a2-8dae-9c3753cfc161": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029340979s Mar 20 21:31:39.201: INFO: Pod "downwardapi-volume-0a037d20-124e-48a2-8dae-9c3753cfc161": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034113019s STEP: Saw pod success Mar 20 21:31:39.201: INFO: Pod "downwardapi-volume-0a037d20-124e-48a2-8dae-9c3753cfc161" satisfied condition "success or failure" Mar 20 21:31:39.205: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-0a037d20-124e-48a2-8dae-9c3753cfc161 container client-container: STEP: delete the pod Mar 20 21:31:39.265: INFO: Waiting for pod downwardapi-volume-0a037d20-124e-48a2-8dae-9c3753cfc161 to disappear Mar 20 21:31:39.281: INFO: Pod downwardapi-volume-0a037d20-124e-48a2-8dae-9c3753cfc161 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:31:39.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5709" for this suite. 
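------------------------------
The downward API volume test above exercises the defaulting rule its name describes: when a container declares no memory limit, a downwardAPI volume item referencing limits.memory reports the node's allocatable memory instead of failing. A minimal sketch (hypothetical names; busybox is an assumed utility image):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
kubectl logs downwardapi-limit-demo   # no limit set, so the node's allocatable memory (in bytes) is printed
------------------------------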
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1364,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:31:39.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 20 21:31:39.332: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 20 21:31:39.376: INFO: Waiting for terminating namespaces to be deleted... Mar 20 21:31:39.378: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Mar 20 21:31:39.383: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 20 21:31:39.383: INFO: Container kindnet-cni ready: true, restart count 0 Mar 20 21:31:39.383: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 20 21:31:39.383: INFO: Container kube-proxy ready: true, restart count 0 Mar 20 21:31:39.383: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Mar 20 21:31:39.402: INFO: pod-adoption-release from replicaset-7639 started at 2020-03-20 21:31:16 +0000 UTC (1 container statuses recorded) Mar 20 21:31:39.402: INFO: Container pod-adoption-release ready: false, restart count 0 Mar 20 21:31:39.402: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 20 21:31:39.402: INFO: Container kindnet-cni ready: true, restart count 0 Mar 20 21:31:39.402: INFO: pod-adoption-release-n2hdz from replicaset-7639 started at 2020-03-20 21:31:21 +0000 UTC (1 container statuses recorded) Mar 20 21:31:39.402: INFO: Container pod-adoption-release ready: false, restart count 0 Mar 20 21:31:39.402: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 20 21:31:39.402: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 Mar 20 21:31:39.465: INFO: Pod kindnet-c5svj requesting resource cpu=100m on Node jerma-worker Mar 20 21:31:39.465: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 Mar 20 21:31:39.465: INFO: Pod kube-proxy-44mlz requesting resource cpu=0m on Node jerma-worker Mar 20 21:31:39.465: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 Mar 20 21:31:39.465: INFO: Pod pod-adoption-release requesting resource cpu=0m on Node jerma-worker2 Mar 20 21:31:39.465: INFO: Pod pod-adoption-release-n2hdz requesting 
resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. Mar 20 21:31:39.465: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker Mar 20 21:31:39.521: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-20237acf-b191-4a98-ac4c-0d76d52f4f6d.15fe208ad4af3bed], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8279/filler-pod-20237acf-b191-4a98-ac4c-0d76d52f4f6d to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-20237acf-b191-4a98-ac4c-0d76d52f4f6d.15fe208b1b83f3ca], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-20237acf-b191-4a98-ac4c-0d76d52f4f6d.15fe208b47eeb0ce], Reason = [Created], Message = [Created container filler-pod-20237acf-b191-4a98-ac4c-0d76d52f4f6d] STEP: Considering event: Type = [Normal], Name = [filler-pod-20237acf-b191-4a98-ac4c-0d76d52f4f6d.15fe208b6403c516], Reason = [Started], Message = [Started container filler-pod-20237acf-b191-4a98-ac4c-0d76d52f4f6d] STEP: Considering event: Type = [Normal], Name = [filler-pod-be2f91bd-059d-4101-abe6-aec6b07c44fd.15fe208ad69c3a23], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8279/filler-pod-be2f91bd-059d-4101-abe6-aec6b07c44fd to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-be2f91bd-059d-4101-abe6-aec6b07c44fd.15fe208b4a1bd86d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-be2f91bd-059d-4101-abe6-aec6b07c44fd.15fe208b71d5fa33], Reason = [Created], Message = [Created container filler-pod-be2f91bd-059d-4101-abe6-aec6b07c44fd] STEP: Considering event: Type = [Normal], Name = [filler-pod-be2f91bd-059d-4101-abe6-aec6b07c44fd.15fe208b7f96a606], Reason = [Started], Message = [Started container filler-pod-be2f91bd-059d-4101-abe6-aec6b07c44fd] STEP: Considering event: Type = [Warning], Name = [additional-pod.15fe208bc6154db9], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:31:44.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8279" for this suite. 
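------------------------------
The predicate test above sums the CPU already requested on each node, starts a filler pod sized to consume the remaining allocatable CPU (cpu=11130m per node here), and then shows that one more pod stays Pending with "2 Insufficient cpu" (the third node is the tainted control plane, hence "0/3"). Two commands that expose the same arithmetic on a live cluster, using names from the log:

kubectl describe node jerma-worker | grep -A 6 'Allocatable'
kubectl get events --namespace=sched-pred-8279 --field-selector reason=FailedScheduling
------------------------------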
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:5.364 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":87,"skipped":1393,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:31:44.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-7762, will wait for the garbage collector to delete the pods Mar 20 21:31:48.805: INFO: Deleting Job.batch foo took: 5.696902ms Mar 20 21:31:49.105: INFO: Terminating Job.batch foo pods took: 300.247284ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:32:29.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7762" for this suite. 
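------------------------------
Deleting the Job above does not remove its pods synchronously; the test waits roughly forty seconds for the garbage collector to terminate them, which is most of the 44.867s runtime reported below. The equivalent by hand, with the Job name and namespace from the log:

kubectl delete job foo --namespace=job-7762   # cascading delete; the garbage collector reaps the Job's pods
kubectl get pods --namespace=job-7762 -w      # watch them terminate asynchronously
------------------------------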
• [SLOW TEST:44.867 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":88,"skipped":1419,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:32:29.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 20 21:32:30.030: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 20 21:32:32.041: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720336750, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720336750, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720336750, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720336750, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 20 21:32:35.072: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 20 21:32:35.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:32:36.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-53" for this suite. 
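------------------------------
A conversion webhook is what lets the apiserver serve v1 and v2 of the same CRD and return the "non homogeneous" list above, converting stored objects on the fly through the deployed webhook service. The wiring lives in the CRD's spec.conversion stanza; a sketch with assumed values (the service name follows the e2e naming seen in this log, and the path is an assumption, not shown verbatim in it):

# spec.conversion stanza of an apiextensions.k8s.io/v1 CustomResourceDefinition:
#   conversion:
#     strategy: Webhook
#     webhook:
#       conversionReviewVersions: ["v1"]
#       clientConfig:
#         service:
#           namespace: crd-webhook-53
#           name: e2e-test-crd-conversion-webhook
#           path: /crdconvert          # assumed path
------------------------------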
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.053 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":89,"skipped":1439,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:32:36.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Mar 20 21:32:36.614: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. Mar 20 21:32:37.282: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Mar 20 21:32:39.601: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720336757, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720336757, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720336757, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720336757, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 20 21:32:42.256: INFO: Waited 642.650201ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:32:42.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-9180" for this suite. 
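------------------------------
The aggregator test registers an APIService object that routes an entire group/version to the sample apiserver Deployment, so the aggregated API becomes reachable through the main apiserver's ordinary discovery endpoints. Two quick checks, using only built-in endpoints (no assumed names):

kubectl get apiservices        # the registered sample APIService appears alongside the built-in ones
kubectl get --raw /apis        # discovery now lists the aggregated group next to core groups
------------------------------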
• [SLOW TEST:6.217 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":90,"skipped":1446,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:32:42.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Mar 20 21:32:43.050: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Mar 20 21:32:53.377: INFO: >>> kubeConfig: /root/.kube/config Mar 20 21:32:56.271: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:33:06.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2507" for this suite. 
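------------------------------
What "shows up in OpenAPI documentation" here is each served version's structural schema, published into the cluster's aggregated /openapi/v2 document. A minimal sketch of one multi-version CRD of that shape; the group, kind, and spec field are illustrative assumptions.

package openapisketch

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// multiVersionCRD serves the same kind at v1 and v2; each served version's
// schema is published into the aggregated OpenAPI spec.
func multiVersionCRD() *apiextensionsv1.CustomResourceDefinition {
	schema := &apiextensionsv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
			Type: "object",
			Properties: map[string]apiextensionsv1.JSONSchemaProps{
				"spec": {
					Type: "object",
					Properties: map[string]apiextensionsv1.JSONSchemaProps{
						"bars": {Type: "integer"},
					},
				},
			},
		},
	}
	return &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.crd.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "crd.example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
				{Name: "v1", Served: true, Storage: true, Schema: schema},
				{Name: "v2", Served: true, Storage: false, Schema: schema},
			},
		},
	}
}
------------------------------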
• [SLOW TEST:23.954 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":91,"skipped":1456,"failed":0} SSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:33:06.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 20 21:33:06.828: INFO: Waiting up to 5m0s for pod "downward-api-442ba88c-2154-437a-8d9d-7c0811cc4e69" in namespace "downward-api-6202" to be "success or failure" Mar 20 21:33:06.840: INFO: Pod "downward-api-442ba88c-2154-437a-8d9d-7c0811cc4e69": Phase="Pending", Reason="", readiness=false. Elapsed: 11.668754ms Mar 20 21:33:08.844: INFO: Pod "downward-api-442ba88c-2154-437a-8d9d-7c0811cc4e69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015625418s Mar 20 21:33:10.848: INFO: Pod "downward-api-442ba88c-2154-437a-8d9d-7c0811cc4e69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020122425s STEP: Saw pod success Mar 20 21:33:10.848: INFO: Pod "downward-api-442ba88c-2154-437a-8d9d-7c0811cc4e69" satisfied condition "success or failure" Mar 20 21:33:10.851: INFO: Trying to get logs from node jerma-worker2 pod downward-api-442ba88c-2154-437a-8d9d-7c0811cc4e69 container dapi-container: STEP: delete the pod Mar 20 21:33:10.917: INFO: Waiting for pod downward-api-442ba88c-2154-437a-8d9d-7c0811cc4e69 to disappear Mar 20 21:33:10.930: INFO: Pod downward-api-442ba88c-2154-437a-8d9d-7c0811cc4e69 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:33:10.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6202" for this suite. 
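------------------------------
The pod this test creates exposes the node's IP through a downward-API environment variable. A minimal sketch of that shape; the image and command are assumptions, not the test's actual container.

package downwardsketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostIPPod injects the scheduled node's IP into the container environment
// via the downward API fieldRef status.hostIP.
func hostIPPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-host-ip"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo HOST_IP=$HOST_IP"},
				Env: []corev1.EnvVar{{
					Name: "HOST_IP",
					ValueFrom: &corev1.EnvVarSource{
						// status.hostIP resolves once the pod is scheduled,
						// so the value is present at container start.
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
					},
				}},
			}},
		},
	}
}

The "success or failure" polling above then simply waits for the pod to exit 0 and checks the logged value.
------------------------------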
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1460,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:33:10.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-67051f9e-f44a-4413-892e-7c3ee098be3e STEP: Creating a pod to test consume secrets Mar 20 21:33:11.058: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-882f4ae1-3601-47dc-9092-4282257763d4" in namespace "projected-6910" to be "success or failure" Mar 20 21:33:11.062: INFO: Pod "pod-projected-secrets-882f4ae1-3601-47dc-9092-4282257763d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.388676ms Mar 20 21:33:13.066: INFO: Pod "pod-projected-secrets-882f4ae1-3601-47dc-9092-4282257763d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008512692s Mar 20 21:33:15.070: INFO: Pod "pod-projected-secrets-882f4ae1-3601-47dc-9092-4282257763d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012652062s STEP: Saw pod success Mar 20 21:33:15.070: INFO: Pod "pod-projected-secrets-882f4ae1-3601-47dc-9092-4282257763d4" satisfied condition "success or failure" Mar 20 21:33:15.074: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-882f4ae1-3601-47dc-9092-4282257763d4 container projected-secret-volume-test: STEP: delete the pod Mar 20 21:33:15.115: INFO: Waiting for pod pod-projected-secrets-882f4ae1-3601-47dc-9092-4282257763d4 to disappear Mar 20 21:33:15.126: INFO: Pod pod-projected-secrets-882f4ae1-3601-47dc-9092-4282257763d4 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:33:15.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6910" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1488,"failed":0} ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:33:15.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 20 21:33:19.727: INFO: Successfully updated pod "labelsupdate30c53547-b42d-4880-b1c6-2fe075e0720e" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:33:21.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-346" for this suite. • [SLOW TEST:6.635 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1488,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:33:21.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 20 21:33:25.923: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:33:25.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6873" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1500,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:33:25.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 20 21:33:26.053: INFO: Waiting up to 5m0s for pod "pod-750b375e-a10e-46b6-a73c-54825cc3a8a0" in namespace "emptydir-9339" to be "success or failure" Mar 20 21:33:26.068: INFO: Pod "pod-750b375e-a10e-46b6-a73c-54825cc3a8a0": Phase="Pending", Reason="", readiness=false. Elapsed: 15.005339ms Mar 20 21:33:28.072: INFO: Pod "pod-750b375e-a10e-46b6-a73c-54825cc3a8a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018947799s Mar 20 21:33:30.076: INFO: Pod "pod-750b375e-a10e-46b6-a73c-54825cc3a8a0": Phase="Running", Reason="", readiness=true. Elapsed: 4.022967934s Mar 20 21:33:32.081: INFO: Pod "pod-750b375e-a10e-46b6-a73c-54825cc3a8a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027809192s STEP: Saw pod success Mar 20 21:33:32.081: INFO: Pod "pod-750b375e-a10e-46b6-a73c-54825cc3a8a0" satisfied condition "success or failure" Mar 20 21:33:32.084: INFO: Trying to get logs from node jerma-worker2 pod pod-750b375e-a10e-46b6-a73c-54825cc3a8a0 container test-container: STEP: delete the pod Mar 20 21:33:32.106: INFO: Waiting for pod pod-750b375e-a10e-46b6-a73c-54825cc3a8a0 to disappear Mar 20 21:33:32.110: INFO: Pod pod-750b375e-a10e-46b6-a73c-54825cc3a8a0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:33:32.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9339" for this suite. 
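------------------------------
"(non-root,0644,default)" encodes the three knobs this emptyDir case varies: the writing UID, the file mode it checks, and the storage medium. A minimal sketch of a pod exercising that combination; the UID, image, and command are assumptions.

package emptydirsketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirPod writes a 0644 file into a default-medium emptyDir as a
// non-root user and lists it back for verification.
func emptyDirPod() *corev1.Pod {
	nonRoot := int64(1001)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-non-root-0644"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				Command: []string{"sh", "-c",
					"echo hello > /mnt/test/f && chmod 0644 /mnt/test/f && ls -l /mnt/test/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt/test"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				// Medium "" (StorageMediumDefault) uses the node's disk
				// rather than tmpfs (StorageMediumMemory).
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
		},
	}
}
------------------------------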
• [SLOW TEST:6.165 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1514,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:33:32.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 20 21:33:32.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7250' Mar 20 21:33:32.501: INFO: stderr: "" Mar 20 21:33:32.501: INFO: stdout: "replicationcontroller/agnhost-master created\n" Mar 20 21:33:32.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7250' Mar 20 21:33:32.748: INFO: stderr: "" Mar 20 21:33:32.748: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 20 21:33:33.753: INFO: Selector matched 1 pods for map[app:agnhost] Mar 20 21:33:33.753: INFO: Found 0 / 1 Mar 20 21:33:34.752: INFO: Selector matched 1 pods for map[app:agnhost] Mar 20 21:33:34.752: INFO: Found 0 / 1 Mar 20 21:33:35.752: INFO: Selector matched 1 pods for map[app:agnhost] Mar 20 21:33:35.752: INFO: Found 1 / 1 Mar 20 21:33:35.752: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 20 21:33:35.756: INFO: Selector matched 1 pods for map[app:agnhost] Mar 20 21:33:35.756: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Mar 20 21:33:35.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-9q472 --namespace=kubectl-7250' Mar 20 21:33:35.862: INFO: stderr: "" Mar 20 21:33:35.862: INFO: stdout: "Name: agnhost-master-9q472\nNamespace: kubectl-7250\nPriority: 0\nNode: jerma-worker2/172.17.0.8\nStart Time: Fri, 20 Mar 2020 21:33:32 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.233\nIPs:\n IP: 10.244.2.233\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://f6a8efd887c12e8559b8e93b5559cbb917d19447f0eb2a604f6cb853155495fc\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 20 Mar 2020 21:33:34 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-qv8cq (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-qv8cq:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-qv8cq\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-7250/agnhost-master-9q472 to jerma-worker2\n Normal Pulled 2s kubelet, jerma-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 1s kubelet, jerma-worker2 Created container agnhost-master\n Normal Started 1s kubelet, jerma-worker2 Started container agnhost-master\n" Mar 20 21:33:35.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-7250' Mar 20 21:33:35.974: INFO: stderr: "" Mar 20 21:33:35.974: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-7250\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-master-9q472\n" Mar 20 21:33:35.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-7250' Mar 20 21:33:36.073: INFO: stderr: "" Mar 20 21:33:36.073: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-7250\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.100.231.92\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.233:6379\nSession Affinity: None\nEvents: \n" Mar 20 21:33:36.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' Mar 20 21:33:36.186: INFO: stderr: "" Mar 20 21:33:36.186: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n 
beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Fri, 20 Mar 2020 21:33:34 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 20 Mar 2020 21:29:45 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 20 Mar 2020 21:29:45 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 20 Mar 2020 21:29:45 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 20 Mar 2020 21:29:45 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 5d3h\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 5d3h\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5d3h\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 5d3h\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 5d3h\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 5d3h\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5d3h\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 5d3h\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5d3h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Mar 20 21:33:36.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-7250' Mar 20 21:33:36.286: INFO: stderr: "" Mar 20 21:33:36.286: INFO: stdout: "Name: kubectl-7250\nLabels: e2e-framework=kubectl\n e2e-run=4059d5cd-aab2-469a-a920-f1d32f0e9d4f\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo 
LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:33:36.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7250" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":97,"skipped":1530,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:33:36.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 20 21:33:36.392: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-5488 I0320 21:33:36.423367 7 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5488, replica count: 1 I0320 21:33:37.473849 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0320 21:33:38.474066 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0320 21:33:39.474301 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0320 21:33:40.474572 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 20 21:33:40.607: INFO: Created: latency-svc-gf6sc Mar 20 21:33:40.622: INFO: Got endpoints: latency-svc-gf6sc [47.227612ms] Mar 20 21:33:40.683: INFO: Created: latency-svc-62v58 Mar 20 21:33:40.687: INFO: Got endpoints: latency-svc-62v58 [64.505834ms] Mar 20 21:33:40.713: INFO: Created: latency-svc-mt6fm Mar 20 21:33:40.729: INFO: Got endpoints: latency-svc-mt6fm [106.623124ms] Mar 20 21:33:40.749: INFO: Created: latency-svc-lqbdl Mar 20 21:33:40.766: INFO: Got endpoints: latency-svc-lqbdl [143.424833ms] Mar 20 21:33:40.821: INFO: Created: latency-svc-xnc2s Mar 20 21:33:40.824: INFO: Got endpoints: latency-svc-xnc2s [202.220194ms] Mar 20 21:33:40.852: INFO: Created: latency-svc-vmcdr Mar 20 21:33:40.866: INFO: Got endpoints: latency-svc-vmcdr [244.636838ms] Mar 20 21:33:40.882: INFO: Created: latency-svc-pdh8m Mar 20 21:33:40.897: INFO: Got endpoints: latency-svc-pdh8m [274.667148ms] Mar 20 21:33:40.919: INFO: Created: latency-svc-sbm8z Mar 20 21:33:40.946: INFO: Got endpoints: latency-svc-sbm8z [323.790658ms] Mar 20 21:33:40.971: INFO: Created: latency-svc-7cjmg Mar 20 21:33:40.999: INFO: Got endpoints: latency-svc-7cjmg [376.290334ms] Mar 20 21:33:41.019: INFO: Created: latency-svc-lcg8g Mar 20 21:33:41.035: INFO: Got endpoints: latency-svc-lcg8g [413.002785ms] Mar 20 21:33:41.096: INFO: 
Created: latency-svc-sxj2c Mar 20 21:33:41.103: INFO: Got endpoints: latency-svc-sxj2c [481.025032ms] Mar 20 21:33:41.129: INFO: Created: latency-svc-69pch Mar 20 21:33:41.139: INFO: Got endpoints: latency-svc-69pch [517.000853ms] Mar 20 21:33:41.193: INFO: Created: latency-svc-4ddrv Mar 20 21:33:41.228: INFO: Got endpoints: latency-svc-4ddrv [604.992937ms] Mar 20 21:33:41.242: INFO: Created: latency-svc-bg8k8 Mar 20 21:33:41.266: INFO: Got endpoints: latency-svc-bg8k8 [644.48557ms] Mar 20 21:33:41.297: INFO: Created: latency-svc-jrn7h Mar 20 21:33:41.308: INFO: Got endpoints: latency-svc-jrn7h [685.319828ms] Mar 20 21:33:41.326: INFO: Created: latency-svc-6tpwc Mar 20 21:33:41.365: INFO: Got endpoints: latency-svc-6tpwc [743.024715ms] Mar 20 21:33:41.378: INFO: Created: latency-svc-lmwl2 Mar 20 21:33:41.392: INFO: Got endpoints: latency-svc-lmwl2 [705.256856ms] Mar 20 21:33:41.421: INFO: Created: latency-svc-twk2r Mar 20 21:33:41.435: INFO: Got endpoints: latency-svc-twk2r [705.313446ms] Mar 20 21:33:41.457: INFO: Created: latency-svc-thsc4 Mar 20 21:33:41.508: INFO: Got endpoints: latency-svc-thsc4 [742.714955ms] Mar 20 21:33:41.807: INFO: Created: latency-svc-pgbtw Mar 20 21:33:41.835: INFO: Got endpoints: latency-svc-pgbtw [1.010683876s] Mar 20 21:33:41.836: INFO: Created: latency-svc-7q9hh Mar 20 21:33:41.860: INFO: Got endpoints: latency-svc-7q9hh [993.396668ms] Mar 20 21:33:41.890: INFO: Created: latency-svc-kslpj Mar 20 21:33:41.952: INFO: Got endpoints: latency-svc-kslpj [1.054986067s] Mar 20 21:33:41.954: INFO: Created: latency-svc-88b26 Mar 20 21:33:41.963: INFO: Got endpoints: latency-svc-88b26 [1.017178922s] Mar 20 21:33:41.990: INFO: Created: latency-svc-b66bg Mar 20 21:33:42.006: INFO: Got endpoints: latency-svc-b66bg [1.007202702s] Mar 20 21:33:42.032: INFO: Created: latency-svc-js8cv Mar 20 21:33:42.047: INFO: Got endpoints: latency-svc-js8cv [1.011179846s] Mar 20 21:33:42.102: INFO: Created: latency-svc-hpdfm Mar 20 21:33:42.106: INFO: Got endpoints: latency-svc-hpdfm [1.003480431s] Mar 20 21:33:42.136: INFO: Created: latency-svc-8xl7n Mar 20 21:33:42.149: INFO: Got endpoints: latency-svc-8xl7n [1.010205707s] Mar 20 21:33:42.170: INFO: Created: latency-svc-zvxc9 Mar 20 21:33:42.194: INFO: Got endpoints: latency-svc-zvxc9 [966.734562ms] Mar 20 21:33:42.267: INFO: Created: latency-svc-9f5jc Mar 20 21:33:42.275: INFO: Got endpoints: latency-svc-9f5jc [1.008829008s] Mar 20 21:33:42.304: INFO: Created: latency-svc-ss8fv Mar 20 21:33:42.342: INFO: Got endpoints: latency-svc-ss8fv [1.034339894s] Mar 20 21:33:42.410: INFO: Created: latency-svc-ldkgh Mar 20 21:33:42.432: INFO: Got endpoints: latency-svc-ldkgh [1.066647008s] Mar 20 21:33:42.483: INFO: Created: latency-svc-8hd48 Mar 20 21:33:42.515: INFO: Got endpoints: latency-svc-8hd48 [1.123163498s] Mar 20 21:33:42.562: INFO: Created: latency-svc-9v7c8 Mar 20 21:33:42.576: INFO: Got endpoints: latency-svc-9v7c8 [1.141630385s] Mar 20 21:33:42.604: INFO: Created: latency-svc-w5qzn Mar 20 21:33:42.798: INFO: Got endpoints: latency-svc-w5qzn [1.289095014s] Mar 20 21:33:42.953: INFO: Created: latency-svc-jrj5p Mar 20 21:33:43.414: INFO: Created: latency-svc-9w6dz Mar 20 21:33:43.415: INFO: Got endpoints: latency-svc-jrj5p [1.579835173s] Mar 20 21:33:43.503: INFO: Got endpoints: latency-svc-9w6dz [1.642871611s] Mar 20 21:33:43.630: INFO: Created: latency-svc-2d6qd Mar 20 21:33:43.644: INFO: Got endpoints: latency-svc-2d6qd [1.691645729s] Mar 20 21:33:43.666: INFO: Created: latency-svc-bvrjq Mar 20 21:33:43.680: INFO: Got endpoints: 
latency-svc-bvrjq [1.716480386s] Mar 20 21:33:43.700: INFO: Created: latency-svc-tbxwj Mar 20 21:33:43.716: INFO: Got endpoints: latency-svc-tbxwj [1.710256228s] Mar 20 21:33:43.773: INFO: Created: latency-svc-wxf7p Mar 20 21:33:43.783: INFO: Got endpoints: latency-svc-wxf7p [1.735810398s] Mar 20 21:33:43.810: INFO: Created: latency-svc-h785w Mar 20 21:33:43.825: INFO: Got endpoints: latency-svc-h785w [1.718704793s] Mar 20 21:33:43.846: INFO: Created: latency-svc-ppv2c Mar 20 21:33:43.861: INFO: Got endpoints: latency-svc-ppv2c [1.711987811s] Mar 20 21:33:43.916: INFO: Created: latency-svc-r4bqb Mar 20 21:33:43.928: INFO: Got endpoints: latency-svc-r4bqb [1.73373385s] Mar 20 21:33:43.953: INFO: Created: latency-svc-frs5p Mar 20 21:33:43.963: INFO: Got endpoints: latency-svc-frs5p [1.688028213s] Mar 20 21:33:43.984: INFO: Created: latency-svc-7cxk7 Mar 20 21:33:44.001: INFO: Got endpoints: latency-svc-7cxk7 [1.658838194s] Mar 20 21:33:44.055: INFO: Created: latency-svc-nt79q Mar 20 21:33:44.060: INFO: Got endpoints: latency-svc-nt79q [1.628114754s] Mar 20 21:33:44.090: INFO: Created: latency-svc-nqk4c Mar 20 21:33:44.103: INFO: Got endpoints: latency-svc-nqk4c [1.587336415s] Mar 20 21:33:44.127: INFO: Created: latency-svc-qxlsz Mar 20 21:33:44.150: INFO: Got endpoints: latency-svc-qxlsz [1.573946028s] Mar 20 21:33:44.204: INFO: Created: latency-svc-ckbrt Mar 20 21:33:44.207: INFO: Got endpoints: latency-svc-ckbrt [1.409229348s] Mar 20 21:33:44.261: INFO: Created: latency-svc-bpdwb Mar 20 21:33:44.278: INFO: Got endpoints: latency-svc-bpdwb [863.395718ms] Mar 20 21:33:44.391: INFO: Created: latency-svc-xzlw7 Mar 20 21:33:44.404: INFO: Got endpoints: latency-svc-xzlw7 [900.925981ms] Mar 20 21:33:44.471: INFO: Created: latency-svc-wj8mr Mar 20 21:33:44.503: INFO: Got endpoints: latency-svc-wj8mr [859.590243ms] Mar 20 21:33:44.516: INFO: Created: latency-svc-dfq6k Mar 20 21:33:44.540: INFO: Got endpoints: latency-svc-dfq6k [860.178547ms] Mar 20 21:33:44.571: INFO: Created: latency-svc-xpf92 Mar 20 21:33:44.596: INFO: Got endpoints: latency-svc-xpf92 [879.347427ms] Mar 20 21:33:44.647: INFO: Created: latency-svc-vw4l5 Mar 20 21:33:44.650: INFO: Got endpoints: latency-svc-vw4l5 [867.866259ms] Mar 20 21:33:44.678: INFO: Created: latency-svc-4f2nh Mar 20 21:33:44.693: INFO: Got endpoints: latency-svc-4f2nh [868.055209ms] Mar 20 21:33:44.785: INFO: Created: latency-svc-7rhqp Mar 20 21:33:44.789: INFO: Got endpoints: latency-svc-7rhqp [927.828853ms] Mar 20 21:33:44.810: INFO: Created: latency-svc-2prc2 Mar 20 21:33:44.826: INFO: Got endpoints: latency-svc-2prc2 [897.519017ms] Mar 20 21:33:44.846: INFO: Created: latency-svc-m8vbk Mar 20 21:33:44.856: INFO: Got endpoints: latency-svc-m8vbk [892.473725ms] Mar 20 21:33:44.876: INFO: Created: latency-svc-47fk2 Mar 20 21:33:44.904: INFO: Got endpoints: latency-svc-47fk2 [903.003742ms] Mar 20 21:33:44.956: INFO: Created: latency-svc-vrffs Mar 20 21:33:44.971: INFO: Got endpoints: latency-svc-vrffs [910.601071ms] Mar 20 21:33:45.043: INFO: Created: latency-svc-whsdm Mar 20 21:33:45.048: INFO: Got endpoints: latency-svc-whsdm [945.435265ms] Mar 20 21:33:45.069: INFO: Created: latency-svc-lq97q Mar 20 21:33:45.079: INFO: Got endpoints: latency-svc-lq97q [928.56079ms] Mar 20 21:33:45.111: INFO: Created: latency-svc-mtdb2 Mar 20 21:33:45.137: INFO: Got endpoints: latency-svc-mtdb2 [930.351549ms] Mar 20 21:33:45.180: INFO: Created: latency-svc-fswmb Mar 20 21:33:45.193: INFO: Got endpoints: latency-svc-fswmb [915.171527ms] Mar 20 21:33:45.214: INFO: Created: 
latency-svc-sd8n5 Mar 20 21:33:45.230: INFO: Got endpoints: latency-svc-sd8n5 [825.884942ms] Mar 20 21:33:45.248: INFO: Created: latency-svc-4s7fv Mar 20 21:33:45.260: INFO: Got endpoints: latency-svc-4s7fv [756.687396ms] Mar 20 21:33:45.278: INFO: Created: latency-svc-gc8jb Mar 20 21:33:45.311: INFO: Got endpoints: latency-svc-gc8jb [770.861013ms] Mar 20 21:33:45.322: INFO: Created: latency-svc-zsm49 Mar 20 21:33:45.339: INFO: Got endpoints: latency-svc-zsm49 [743.01857ms] Mar 20 21:33:45.358: INFO: Created: latency-svc-247xr Mar 20 21:33:45.375: INFO: Got endpoints: latency-svc-247xr [724.549284ms] Mar 20 21:33:45.395: INFO: Created: latency-svc-hp4lh Mar 20 21:33:45.449: INFO: Got endpoints: latency-svc-hp4lh [755.66268ms] Mar 20 21:33:45.471: INFO: Created: latency-svc-xvpq4 Mar 20 21:33:45.484: INFO: Got endpoints: latency-svc-xvpq4 [694.49334ms] Mar 20 21:33:45.507: INFO: Created: latency-svc-f64zb Mar 20 21:33:45.520: INFO: Got endpoints: latency-svc-f64zb [694.04955ms] Mar 20 21:33:45.538: INFO: Created: latency-svc-x4z8k Mar 20 21:33:45.569: INFO: Got endpoints: latency-svc-x4z8k [712.860551ms] Mar 20 21:33:45.598: INFO: Created: latency-svc-lv4fs Mar 20 21:33:45.632: INFO: Got endpoints: latency-svc-lv4fs [728.067194ms] Mar 20 21:33:45.714: INFO: Created: latency-svc-x7jg4 Mar 20 21:33:45.736: INFO: Got endpoints: latency-svc-x7jg4 [764.698016ms] Mar 20 21:33:45.767: INFO: Created: latency-svc-8sk9k Mar 20 21:33:45.779: INFO: Got endpoints: latency-svc-8sk9k [731.236593ms] Mar 20 21:33:45.802: INFO: Created: latency-svc-xvc7k Mar 20 21:33:45.869: INFO: Got endpoints: latency-svc-xvc7k [789.518799ms] Mar 20 21:33:45.872: INFO: Created: latency-svc-nz9cg Mar 20 21:33:45.888: INFO: Got endpoints: latency-svc-nz9cg [750.348946ms] Mar 20 21:33:45.908: INFO: Created: latency-svc-26jx6 Mar 20 21:33:45.918: INFO: Got endpoints: latency-svc-26jx6 [724.233429ms] Mar 20 21:33:45.947: INFO: Created: latency-svc-cqvk5 Mar 20 21:33:45.960: INFO: Got endpoints: latency-svc-cqvk5 [729.716379ms] Mar 20 21:33:46.012: INFO: Created: latency-svc-6nwmd Mar 20 21:33:46.016: INFO: Got endpoints: latency-svc-6nwmd [755.353778ms] Mar 20 21:33:46.089: INFO: Created: latency-svc-b2gll Mar 20 21:33:46.150: INFO: Got endpoints: latency-svc-b2gll [839.224616ms] Mar 20 21:33:46.199: INFO: Created: latency-svc-zkv22 Mar 20 21:33:46.219: INFO: Got endpoints: latency-svc-zkv22 [879.852024ms] Mar 20 21:33:46.287: INFO: Created: latency-svc-bmvt2 Mar 20 21:33:46.291: INFO: Got endpoints: latency-svc-bmvt2 [915.82722ms] Mar 20 21:33:46.329: INFO: Created: latency-svc-xghtb Mar 20 21:33:46.340: INFO: Got endpoints: latency-svc-xghtb [890.809239ms] Mar 20 21:33:46.365: INFO: Created: latency-svc-zbx52 Mar 20 21:33:46.376: INFO: Got endpoints: latency-svc-zbx52 [892.538945ms] Mar 20 21:33:46.419: INFO: Created: latency-svc-m5shh Mar 20 21:33:46.444: INFO: Got endpoints: latency-svc-m5shh [923.920281ms] Mar 20 21:33:46.444: INFO: Created: latency-svc-pfpzz Mar 20 21:33:46.455: INFO: Got endpoints: latency-svc-pfpzz [886.308137ms] Mar 20 21:33:46.496: INFO: Created: latency-svc-zpqms Mar 20 21:33:46.516: INFO: Got endpoints: latency-svc-zpqms [883.580113ms] Mar 20 21:33:46.557: INFO: Created: latency-svc-754wn Mar 20 21:33:46.564: INFO: Got endpoints: latency-svc-754wn [827.976588ms] Mar 20 21:33:46.595: INFO: Created: latency-svc-fczsj Mar 20 21:33:46.618: INFO: Got endpoints: latency-svc-fczsj [838.692668ms] Mar 20 21:33:46.648: INFO: Created: latency-svc-csgnv Mar 20 21:33:46.719: INFO: Got endpoints: 
latency-svc-csgnv [850.359803ms] Mar 20 21:33:46.721: INFO: Created: latency-svc-mg9lw Mar 20 21:33:46.726: INFO: Got endpoints: latency-svc-mg9lw [838.27619ms] Mar 20 21:33:46.748: INFO: Created: latency-svc-sb4wt Mar 20 21:33:46.763: INFO: Got endpoints: latency-svc-sb4wt [844.993034ms] Mar 20 21:33:46.784: INFO: Created: latency-svc-d5wqg Mar 20 21:33:46.799: INFO: Got endpoints: latency-svc-d5wqg [839.408738ms] Mar 20 21:33:46.816: INFO: Created: latency-svc-wkzfw Mar 20 21:33:46.874: INFO: Got endpoints: latency-svc-wkzfw [858.658471ms] Mar 20 21:33:46.877: INFO: Created: latency-svc-hq664 Mar 20 21:33:46.890: INFO: Got endpoints: latency-svc-hq664 [739.288312ms] Mar 20 21:33:46.910: INFO: Created: latency-svc-pcznv Mar 20 21:33:46.946: INFO: Got endpoints: latency-svc-pcznv [727.648378ms] Mar 20 21:33:47.019: INFO: Created: latency-svc-sd8s8 Mar 20 21:33:47.050: INFO: Got endpoints: latency-svc-sd8s8 [758.825188ms] Mar 20 21:33:47.050: INFO: Created: latency-svc-hf5j6 Mar 20 21:33:47.064: INFO: Got endpoints: latency-svc-hf5j6 [724.249422ms] Mar 20 21:33:47.086: INFO: Created: latency-svc-krwm5 Mar 20 21:33:47.101: INFO: Got endpoints: latency-svc-krwm5 [724.183621ms] Mar 20 21:33:47.162: INFO: Created: latency-svc-pnxqk Mar 20 21:33:47.164: INFO: Got endpoints: latency-svc-pnxqk [720.300037ms] Mar 20 21:33:47.198: INFO: Created: latency-svc-6ctng Mar 20 21:33:47.227: INFO: Got endpoints: latency-svc-6ctng [771.769546ms] Mar 20 21:33:47.248: INFO: Created: latency-svc-gl984 Mar 20 21:33:47.294: INFO: Got endpoints: latency-svc-gl984 [777.919025ms] Mar 20 21:33:47.296: INFO: Created: latency-svc-fbmnl Mar 20 21:33:47.312: INFO: Got endpoints: latency-svc-fbmnl [748.005436ms] Mar 20 21:33:47.332: INFO: Created: latency-svc-28g9m Mar 20 21:33:47.354: INFO: Got endpoints: latency-svc-28g9m [736.063056ms] Mar 20 21:33:47.384: INFO: Created: latency-svc-jb2bm Mar 20 21:33:47.449: INFO: Got endpoints: latency-svc-jb2bm [730.317227ms] Mar 20 21:33:47.451: INFO: Created: latency-svc-6w6tf Mar 20 21:33:47.457: INFO: Got endpoints: latency-svc-6w6tf [731.326086ms] Mar 20 21:33:47.476: INFO: Created: latency-svc-8mgpv Mar 20 21:33:47.487: INFO: Got endpoints: latency-svc-8mgpv [724.286766ms] Mar 20 21:33:47.504: INFO: Created: latency-svc-p4ph8 Mar 20 21:33:47.517: INFO: Got endpoints: latency-svc-p4ph8 [718.12929ms] Mar 20 21:33:47.611: INFO: Created: latency-svc-rhh5n Mar 20 21:33:47.614: INFO: Got endpoints: latency-svc-rhh5n [739.135584ms] Mar 20 21:33:47.638: INFO: Created: latency-svc-46lfz Mar 20 21:33:47.650: INFO: Got endpoints: latency-svc-46lfz [759.845621ms] Mar 20 21:33:47.668: INFO: Created: latency-svc-xr9wj Mar 20 21:33:47.696: INFO: Got endpoints: latency-svc-xr9wj [749.346617ms] Mar 20 21:33:47.767: INFO: Created: latency-svc-wxrx9 Mar 20 21:33:47.789: INFO: Got endpoints: latency-svc-wxrx9 [738.67677ms] Mar 20 21:33:47.819: INFO: Created: latency-svc-wwrlv Mar 20 21:33:47.830: INFO: Got endpoints: latency-svc-wwrlv [766.223767ms] Mar 20 21:33:47.848: INFO: Created: latency-svc-j6bfs Mar 20 21:33:47.861: INFO: Got endpoints: latency-svc-j6bfs [759.927962ms] Mar 20 21:33:47.911: INFO: Created: latency-svc-hjpkj Mar 20 21:33:47.915: INFO: Got endpoints: latency-svc-hjpkj [750.319653ms] Mar 20 21:33:47.936: INFO: Created: latency-svc-hpfws Mar 20 21:33:47.951: INFO: Got endpoints: latency-svc-hpfws [723.855734ms] Mar 20 21:33:47.972: INFO: Created: latency-svc-5kcsf Mar 20 21:33:47.988: INFO: Got endpoints: latency-svc-5kcsf [693.85283ms] Mar 20 21:33:48.004: INFO: Created: 
latency-svc-72g2x Mar 20 21:33:48.048: INFO: Got endpoints: latency-svc-72g2x [735.872427ms] Mar 20 21:33:48.057: INFO: Created: latency-svc-mmdj6 Mar 20 21:33:48.072: INFO: Got endpoints: latency-svc-mmdj6 [717.317854ms] Mar 20 21:33:48.100: INFO: Created: latency-svc-grhz2 Mar 20 21:33:48.108: INFO: Got endpoints: latency-svc-grhz2 [658.757193ms] Mar 20 21:33:48.134: INFO: Created: latency-svc-mjbcr Mar 20 21:33:48.191: INFO: Got endpoints: latency-svc-mjbcr [733.937394ms] Mar 20 21:33:48.232: INFO: Created: latency-svc-6d4qx Mar 20 21:33:48.246: INFO: Got endpoints: latency-svc-6d4qx [759.433487ms] Mar 20 21:33:48.330: INFO: Created: latency-svc-8bghg Mar 20 21:33:48.333: INFO: Got endpoints: latency-svc-8bghg [815.671106ms] Mar 20 21:33:48.404: INFO: Created: latency-svc-ndd5g Mar 20 21:33:48.480: INFO: Got endpoints: latency-svc-ndd5g [865.900333ms] Mar 20 21:33:48.481: INFO: Created: latency-svc-n2rv2 Mar 20 21:33:48.487: INFO: Got endpoints: latency-svc-n2rv2 [837.218177ms] Mar 20 21:33:48.508: INFO: Created: latency-svc-kkbdh Mar 20 21:33:48.555: INFO: Got endpoints: latency-svc-kkbdh [858.694292ms] Mar 20 21:33:48.617: INFO: Created: latency-svc-2cpwg Mar 20 21:33:48.626: INFO: Got endpoints: latency-svc-2cpwg [837.093327ms] Mar 20 21:33:48.646: INFO: Created: latency-svc-9kdx4 Mar 20 21:33:48.662: INFO: Got endpoints: latency-svc-9kdx4 [831.961997ms] Mar 20 21:33:48.682: INFO: Created: latency-svc-5w94k Mar 20 21:33:48.692: INFO: Got endpoints: latency-svc-5w94k [831.611543ms] Mar 20 21:33:48.717: INFO: Created: latency-svc-zz68p Mar 20 21:33:48.785: INFO: Got endpoints: latency-svc-zz68p [869.991652ms] Mar 20 21:33:48.790: INFO: Created: latency-svc-m5bsz Mar 20 21:33:48.801: INFO: Got endpoints: latency-svc-m5bsz [849.7896ms] Mar 20 21:33:48.820: INFO: Created: latency-svc-m5jl6 Mar 20 21:33:48.838: INFO: Got endpoints: latency-svc-m5jl6 [849.8967ms] Mar 20 21:33:48.856: INFO: Created: latency-svc-l2rnc Mar 20 21:33:48.880: INFO: Got endpoints: latency-svc-l2rnc [832.055937ms] Mar 20 21:33:48.920: INFO: Created: latency-svc-mzd46 Mar 20 21:33:48.934: INFO: Got endpoints: latency-svc-mzd46 [861.881641ms] Mar 20 21:33:48.964: INFO: Created: latency-svc-8pdvh Mar 20 21:33:48.976: INFO: Got endpoints: latency-svc-8pdvh [867.382671ms] Mar 20 21:33:48.992: INFO: Created: latency-svc-l9vvx Mar 20 21:33:49.006: INFO: Got endpoints: latency-svc-l9vvx [814.67266ms] Mar 20 21:33:49.060: INFO: Created: latency-svc-8g5gf Mar 20 21:33:49.078: INFO: Got endpoints: latency-svc-8g5gf [831.532543ms] Mar 20 21:33:49.119: INFO: Created: latency-svc-zzpv9 Mar 20 21:33:49.133: INFO: Got endpoints: latency-svc-zzpv9 [799.912859ms] Mar 20 21:33:49.154: INFO: Created: latency-svc-wjczj Mar 20 21:33:49.216: INFO: Got endpoints: latency-svc-wjczj [735.954377ms] Mar 20 21:33:49.218: INFO: Created: latency-svc-4zpvg Mar 20 21:33:49.224: INFO: Got endpoints: latency-svc-4zpvg [737.454268ms] Mar 20 21:33:49.256: INFO: Created: latency-svc-rdnf5 Mar 20 21:33:49.276: INFO: Got endpoints: latency-svc-rdnf5 [721.241463ms] Mar 20 21:33:49.300: INFO: Created: latency-svc-9klss Mar 20 21:33:49.313: INFO: Got endpoints: latency-svc-9klss [687.460119ms] Mar 20 21:33:49.377: INFO: Created: latency-svc-pjnff Mar 20 21:33:49.402: INFO: Got endpoints: latency-svc-pjnff [739.792326ms] Mar 20 21:33:49.405: INFO: Created: latency-svc-zp4sk Mar 20 21:33:49.416: INFO: Got endpoints: latency-svc-zp4sk [724.233547ms] Mar 20 21:33:49.438: INFO: Created: latency-svc-lq8rs Mar 20 21:33:49.452: INFO: Got endpoints: 
latency-svc-lq8rs [667.287469ms] Mar 20 21:33:49.474: INFO: Created: latency-svc-bms64 Mar 20 21:33:49.509: INFO: Got endpoints: latency-svc-bms64 [708.2514ms] Mar 20 21:33:49.520: INFO: Created: latency-svc-dr27d Mar 20 21:33:49.537: INFO: Got endpoints: latency-svc-dr27d [699.168649ms] Mar 20 21:33:49.563: INFO: Created: latency-svc-xq2zg Mar 20 21:33:49.588: INFO: Got endpoints: latency-svc-xq2zg [708.267295ms] Mar 20 21:33:49.665: INFO: Created: latency-svc-djpmd Mar 20 21:33:49.668: INFO: Got endpoints: latency-svc-djpmd [734.265003ms] Mar 20 21:33:49.696: INFO: Created: latency-svc-8wzmk Mar 20 21:33:49.718: INFO: Got endpoints: latency-svc-8wzmk [741.750404ms] Mar 20 21:33:49.749: INFO: Created: latency-svc-jnjgd Mar 20 21:33:49.760: INFO: Got endpoints: latency-svc-jnjgd [753.66022ms] Mar 20 21:33:49.833: INFO: Created: latency-svc-psmgj Mar 20 21:33:49.844: INFO: Got endpoints: latency-svc-psmgj [765.929576ms] Mar 20 21:33:49.871: INFO: Created: latency-svc-slk8z Mar 20 21:33:49.886: INFO: Got endpoints: latency-svc-slk8z [753.411082ms] Mar 20 21:33:49.940: INFO: Created: latency-svc-mlw28 Mar 20 21:33:49.944: INFO: Got endpoints: latency-svc-mlw28 [727.952053ms] Mar 20 21:33:49.964: INFO: Created: latency-svc-6cg8g Mar 20 21:33:49.989: INFO: Got endpoints: latency-svc-6cg8g [764.38154ms] Mar 20 21:33:50.018: INFO: Created: latency-svc-bbvg8 Mar 20 21:33:50.031: INFO: Got endpoints: latency-svc-bbvg8 [754.959317ms] Mar 20 21:33:50.072: INFO: Created: latency-svc-9drql Mar 20 21:33:50.098: INFO: Got endpoints: latency-svc-9drql [784.915119ms] Mar 20 21:33:50.100: INFO: Created: latency-svc-ncs7n Mar 20 21:33:50.128: INFO: Got endpoints: latency-svc-ncs7n [725.624677ms] Mar 20 21:33:50.156: INFO: Created: latency-svc-lqhqm Mar 20 21:33:50.170: INFO: Got endpoints: latency-svc-lqhqm [753.273537ms] Mar 20 21:33:50.216: INFO: Created: latency-svc-v7ld9 Mar 20 21:33:50.218: INFO: Got endpoints: latency-svc-v7ld9 [766.021977ms] Mar 20 21:33:50.254: INFO: Created: latency-svc-6l4k4 Mar 20 21:33:50.266: INFO: Got endpoints: latency-svc-6l4k4 [757.113159ms] Mar 20 21:33:50.284: INFO: Created: latency-svc-cnhbl Mar 20 21:33:50.296: INFO: Got endpoints: latency-svc-cnhbl [759.342079ms] Mar 20 21:33:50.314: INFO: Created: latency-svc-jr5xm Mar 20 21:33:50.365: INFO: Got endpoints: latency-svc-jr5xm [776.577889ms] Mar 20 21:33:50.403: INFO: Created: latency-svc-ktmbg Mar 20 21:33:50.417: INFO: Got endpoints: latency-svc-ktmbg [749.011443ms] Mar 20 21:33:50.446: INFO: Created: latency-svc-7f8vv Mar 20 21:33:50.460: INFO: Got endpoints: latency-svc-7f8vv [741.912223ms] Mar 20 21:33:50.518: INFO: Created: latency-svc-h9s6r Mar 20 21:33:50.558: INFO: Got endpoints: latency-svc-h9s6r [798.34906ms] Mar 20 21:33:50.629: INFO: Created: latency-svc-ljh8q Mar 20 21:33:50.632: INFO: Got endpoints: latency-svc-ljh8q [787.672124ms] Mar 20 21:33:50.656: INFO: Created: latency-svc-w85tm Mar 20 21:33:50.671: INFO: Got endpoints: latency-svc-w85tm [784.279851ms] Mar 20 21:33:50.692: INFO: Created: latency-svc-7ctnw Mar 20 21:33:50.722: INFO: Got endpoints: latency-svc-7ctnw [778.488554ms] Mar 20 21:33:50.773: INFO: Created: latency-svc-8pt2r Mar 20 21:33:50.776: INFO: Got endpoints: latency-svc-8pt2r [786.764414ms] Mar 20 21:33:50.798: INFO: Created: latency-svc-pvmm4 Mar 20 21:33:50.809: INFO: Got endpoints: latency-svc-pvmm4 [778.220372ms] Mar 20 21:33:50.828: INFO: Created: latency-svc-9ll7c Mar 20 21:33:50.840: INFO: Got endpoints: latency-svc-9ll7c [741.196519ms] Mar 20 21:33:50.860: INFO: Created: 
latency-svc-8tkbc Mar 20 21:33:50.916: INFO: Got endpoints: latency-svc-8tkbc [788.014648ms] Mar 20 21:33:50.938: INFO: Created: latency-svc-gv24g Mar 20 21:33:50.954: INFO: Got endpoints: latency-svc-gv24g [784.363494ms] Mar 20 21:33:50.972: INFO: Created: latency-svc-xbz2q Mar 20 21:33:50.984: INFO: Got endpoints: latency-svc-xbz2q [765.843499ms] Mar 20 21:33:51.002: INFO: Created: latency-svc-bffs4 Mar 20 21:33:51.015: INFO: Got endpoints: latency-svc-bffs4 [748.270838ms] Mar 20 21:33:51.072: INFO: Created: latency-svc-ch8kk Mar 20 21:33:51.075: INFO: Got endpoints: latency-svc-ch8kk [778.411373ms] Mar 20 21:33:51.106: INFO: Created: latency-svc-kbvkh Mar 20 21:33:51.123: INFO: Got endpoints: latency-svc-kbvkh [758.178397ms] Mar 20 21:33:51.140: INFO: Created: latency-svc-pzd9j Mar 20 21:33:51.153: INFO: Got endpoints: latency-svc-pzd9j [736.409671ms] Mar 20 21:33:51.222: INFO: Created: latency-svc-2dqsn Mar 20 21:33:51.225: INFO: Got endpoints: latency-svc-2dqsn [765.901759ms] Mar 20 21:33:51.274: INFO: Created: latency-svc-97l7n Mar 20 21:33:51.286: INFO: Got endpoints: latency-svc-97l7n [727.962382ms] Mar 20 21:33:51.304: INFO: Created: latency-svc-hhtnx Mar 20 21:33:51.316: INFO: Got endpoints: latency-svc-hhtnx [684.595103ms] Mar 20 21:33:51.367: INFO: Created: latency-svc-f9mxl Mar 20 21:33:51.371: INFO: Got endpoints: latency-svc-f9mxl [700.28636ms] Mar 20 21:33:51.392: INFO: Created: latency-svc-57j79 Mar 20 21:33:51.407: INFO: Got endpoints: latency-svc-57j79 [684.932757ms] Mar 20 21:33:51.446: INFO: Created: latency-svc-z5vxm Mar 20 21:33:51.462: INFO: Got endpoints: latency-svc-z5vxm [686.424052ms] Mar 20 21:33:51.509: INFO: Created: latency-svc-dq698 Mar 20 21:33:51.515: INFO: Got endpoints: latency-svc-dq698 [706.098874ms] Mar 20 21:33:51.538: INFO: Created: latency-svc-wlrvz Mar 20 21:33:51.566: INFO: Got endpoints: latency-svc-wlrvz [726.41711ms] Mar 20 21:33:51.596: INFO: Created: latency-svc-sshc8 Mar 20 21:33:51.664: INFO: Got endpoints: latency-svc-sshc8 [748.298833ms] Mar 20 21:33:51.712: INFO: Created: latency-svc-pvndn Mar 20 21:33:51.712: INFO: Created: latency-svc-cmd8p Mar 20 21:33:51.727: INFO: Got endpoints: latency-svc-pvndn [743.11055ms] Mar 20 21:33:51.727: INFO: Got endpoints: latency-svc-cmd8p [772.928011ms] Mar 20 21:33:51.809: INFO: Created: latency-svc-9p8xc Mar 20 21:33:51.836: INFO: Got endpoints: latency-svc-9p8xc [821.619331ms] Mar 20 21:33:51.867: INFO: Created: latency-svc-lhxrw Mar 20 21:33:51.884: INFO: Got endpoints: latency-svc-lhxrw [809.052303ms] Mar 20 21:33:51.903: INFO: Created: latency-svc-gnjjw Mar 20 21:33:51.940: INFO: Got endpoints: latency-svc-gnjjw [816.543222ms] Mar 20 21:33:51.945: INFO: Created: latency-svc-fq9qt Mar 20 21:33:51.962: INFO: Got endpoints: latency-svc-fq9qt [808.460257ms] Mar 20 21:33:51.982: INFO: Created: latency-svc-nxrxb Mar 20 21:33:51.992: INFO: Got endpoints: latency-svc-nxrxb [766.556847ms] Mar 20 21:33:52.010: INFO: Created: latency-svc-26vcw Mar 20 21:33:52.023: INFO: Got endpoints: latency-svc-26vcw [736.204656ms] Mar 20 21:33:52.072: INFO: Created: latency-svc-l25lx Mar 20 21:33:52.075: INFO: Got endpoints: latency-svc-l25lx [758.629679ms] Mar 20 21:33:52.101: INFO: Created: latency-svc-dx6rl Mar 20 21:33:52.119: INFO: Got endpoints: latency-svc-dx6rl [747.886236ms] Mar 20 21:33:52.119: INFO: Latencies: [64.505834ms 106.623124ms 143.424833ms 202.220194ms 244.636838ms 274.667148ms 323.790658ms 376.290334ms 413.002785ms 481.025032ms 517.000853ms 604.992937ms 644.48557ms 658.757193ms 667.287469ms 
684.595103ms 684.932757ms 685.319828ms 686.424052ms 687.460119ms 693.85283ms 694.04955ms 694.49334ms 699.168649ms 700.28636ms 705.256856ms 705.313446ms 706.098874ms 708.2514ms 708.267295ms 712.860551ms 717.317854ms 718.12929ms 720.300037ms 721.241463ms 723.855734ms 724.183621ms 724.233429ms 724.233547ms 724.249422ms 724.286766ms 724.549284ms 725.624677ms 726.41711ms 727.648378ms 727.952053ms 727.962382ms 728.067194ms 729.716379ms 730.317227ms 731.236593ms 731.326086ms 733.937394ms 734.265003ms 735.872427ms 735.954377ms 736.063056ms 736.204656ms 736.409671ms 737.454268ms 738.67677ms 739.135584ms 739.288312ms 739.792326ms 741.196519ms 741.750404ms 741.912223ms 742.714955ms 743.01857ms 743.024715ms 743.11055ms 747.886236ms 748.005436ms 748.270838ms 748.298833ms 749.011443ms 749.346617ms 750.319653ms 750.348946ms 753.273537ms 753.411082ms 753.66022ms 754.959317ms 755.353778ms 755.66268ms 756.687396ms 757.113159ms 758.178397ms 758.629679ms 758.825188ms 759.342079ms 759.433487ms 759.845621ms 759.927962ms 764.38154ms 764.698016ms 765.843499ms 765.901759ms 765.929576ms 766.021977ms 766.223767ms 766.556847ms 770.861013ms 771.769546ms 772.928011ms 776.577889ms 777.919025ms 778.220372ms 778.411373ms 778.488554ms 784.279851ms 784.363494ms 784.915119ms 786.764414ms 787.672124ms 788.014648ms 789.518799ms 798.34906ms 799.912859ms 808.460257ms 809.052303ms 814.67266ms 815.671106ms 816.543222ms 821.619331ms 825.884942ms 827.976588ms 831.532543ms 831.611543ms 831.961997ms 832.055937ms 837.093327ms 837.218177ms 838.27619ms 838.692668ms 839.224616ms 839.408738ms 844.993034ms 849.7896ms 849.8967ms 850.359803ms 858.658471ms 858.694292ms 859.590243ms 860.178547ms 861.881641ms 863.395718ms 865.900333ms 867.382671ms 867.866259ms 868.055209ms 869.991652ms 879.347427ms 879.852024ms 883.580113ms 886.308137ms 890.809239ms 892.473725ms 892.538945ms 897.519017ms 900.925981ms 903.003742ms 910.601071ms 915.171527ms 915.82722ms 923.920281ms 927.828853ms 928.56079ms 930.351549ms 945.435265ms 966.734562ms 993.396668ms 1.003480431s 1.007202702s 1.008829008s 1.010205707s 1.010683876s 1.011179846s 1.017178922s 1.034339894s 1.054986067s 1.066647008s 1.123163498s 1.141630385s 1.289095014s 1.409229348s 1.573946028s 1.579835173s 1.587336415s 1.628114754s 1.642871611s 1.658838194s 1.688028213s 1.691645729s 1.710256228s 1.711987811s 1.716480386s 1.718704793s 1.73373385s 1.735810398s] Mar 20 21:33:52.119: INFO: 50 %ile: 766.223767ms Mar 20 21:33:52.119: INFO: 90 %ile: 1.054986067s Mar 20 21:33:52.119: INFO: 99 %ile: 1.73373385s Mar 20 21:33:52.119: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:33:52.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-5488" for this suite. 
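------------------------------
The 50/90/99 %ile figures above summarize the 200 create-to-endpoint latency samples listed before them. Below is a sketch of the nearest-rank percentile arithmetic they imply; the e2e framework's exact rounding may differ, so treat this as an approximation rather than the framework's implementation.

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the nearest-rank p-th percentile of a sorted sample.
func percentile(sorted []time.Duration, p float64) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	idx := int(float64(len(sorted))*p/100.0+0.5) - 1
	if idx < 0 {
		idx = 0
	}
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	// Three of the 200 observed samples, for illustration only.
	samples := []time.Duration{
		64505834 * time.Nanosecond,   // 64.505834ms
		766223767 * time.Nanosecond,  // 766.223767ms
		1733733850 * time.Nanosecond, // 1.73373385s
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []float64{50, 90, 99} {
		fmt.Printf("%v %%ile: %v\n", p, percentile(samples, p))
	}
}
------------------------------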
• [SLOW TEST:15.857 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":98,"skipped":1554,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:33:52.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-d8fbfc64-761e-44c5-aebd-3036676b022d STEP: Creating a pod to test consume configMaps Mar 20 21:33:52.246: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-07c30915-d639-4b2e-a3da-16809e7eb7cd" in namespace "projected-7565" to be "success or failure" Mar 20 21:33:52.274: INFO: Pod "pod-projected-configmaps-07c30915-d639-4b2e-a3da-16809e7eb7cd": Phase="Pending", Reason="", readiness=false. Elapsed: 27.678346ms Mar 20 21:33:54.278: INFO: Pod "pod-projected-configmaps-07c30915-d639-4b2e-a3da-16809e7eb7cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032101024s Mar 20 21:33:56.282: INFO: Pod "pod-projected-configmaps-07c30915-d639-4b2e-a3da-16809e7eb7cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036141158s STEP: Saw pod success Mar 20 21:33:56.283: INFO: Pod "pod-projected-configmaps-07c30915-d639-4b2e-a3da-16809e7eb7cd" satisfied condition "success or failure" Mar 20 21:33:56.286: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-07c30915-d639-4b2e-a3da-16809e7eb7cd container projected-configmap-volume-test: STEP: delete the pod Mar 20 21:33:56.337: INFO: Waiting for pod pod-projected-configmaps-07c30915-d639-4b2e-a3da-16809e7eb7cd to disappear Mar 20 21:33:56.389: INFO: Pod pod-projected-configmaps-07c30915-d639-4b2e-a3da-16809e7eb7cd no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:33:56.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7565" for this suite. 
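------------------------------
In contrast to the per-item Mode used in the earlier projected-secret case, this test sets the volume-wide DefaultMode. A minimal sketch; the mode value is an assumption.

package projectedcmsketch

import corev1 "k8s.io/api/core/v1"

// projectedConfigMapVolume projects a whole configMap with one DefaultMode
// applied to every file it produces.
func projectedConfigMapVolume(cmName string) corev1.Volume {
	defaultMode := int32(0400) // applies to all projected files unless an item overrides it
	return corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &defaultMode,
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
					},
				}},
			},
		},
	}
}
------------------------------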
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1555,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:33:56.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 20 21:33:56.964: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 20 21:33:59.108: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720336837, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720336837, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720336837, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720336836, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 20 21:34:01.126: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720336837, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720336837, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720336837, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720336836, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 20 21:34:04.167: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Mar 20 21:34:05.167: INFO: Waiting for amount of service:e2e-test-webhook 
endpoints to be 1 Mar 20 21:34:06.167: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Mar 20 21:34:07.167: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Mar 20 21:34:08.167: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Mar 20 21:34:09.167: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Mar 20 21:34:10.167: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Mar 20 21:34:11.167: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:34:11.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1459" for this suite. STEP: Destroying namespace "webhook-1459-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.163 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":100,"skipped":1555,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:34:11.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-d909b6cd-0a92-4ecf-8205-5a4336124b06 STEP: Creating a pod to test consume configMaps Mar 20 21:34:11.805: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0c160177-4c1c-48f5-a29b-d73c5410d1a0" in namespace "projected-8657" to be "success or failure" Mar 20 21:34:11.841: INFO: Pod "pod-projected-configmaps-0c160177-4c1c-48f5-a29b-d73c5410d1a0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 36.157347ms Mar 20 21:34:14.018: INFO: Pod "pod-projected-configmaps-0c160177-4c1c-48f5-a29b-d73c5410d1a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213440919s Mar 20 21:34:16.080: INFO: Pod "pod-projected-configmaps-0c160177-4c1c-48f5-a29b-d73c5410d1a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.274917853s STEP: Saw pod success Mar 20 21:34:16.080: INFO: Pod "pod-projected-configmaps-0c160177-4c1c-48f5-a29b-d73c5410d1a0" satisfied condition "success or failure" Mar 20 21:34:16.083: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-0c160177-4c1c-48f5-a29b-d73c5410d1a0 container projected-configmap-volume-test: STEP: delete the pod Mar 20 21:34:16.108: INFO: Waiting for pod pod-projected-configmaps-0c160177-4c1c-48f5-a29b-d73c5410d1a0 to disappear Mar 20 21:34:16.113: INFO: Pod pod-projected-configmaps-0c160177-4c1c-48f5-a29b-d73c5410d1a0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:34:16.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8657" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1556,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:34:16.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-a7fbf836-a923-4e58-8aff-0e7009d67e60 STEP: Creating a pod to test consume secrets Mar 20 21:34:16.205: INFO: Waiting up to 5m0s for pod "pod-secrets-cf749c8e-e550-4174-beb4-8f2faa9ba3bb" in namespace "secrets-3876" to be "success or failure" Mar 20 21:34:16.223: INFO: Pod "pod-secrets-cf749c8e-e550-4174-beb4-8f2faa9ba3bb": Phase="Pending", Reason="", readiness=false. Elapsed: 18.101202ms Mar 20 21:34:18.227: INFO: Pod "pod-secrets-cf749c8e-e550-4174-beb4-8f2faa9ba3bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021938578s Mar 20 21:34:20.231: INFO: Pod "pod-secrets-cf749c8e-e550-4174-beb4-8f2faa9ba3bb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025891014s STEP: Saw pod success Mar 20 21:34:20.231: INFO: Pod "pod-secrets-cf749c8e-e550-4174-beb4-8f2faa9ba3bb" satisfied condition "success or failure" Mar 20 21:34:20.234: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-cf749c8e-e550-4174-beb4-8f2faa9ba3bb container secret-volume-test: STEP: delete the pod Mar 20 21:34:20.267: INFO: Waiting for pod pod-secrets-cf749c8e-e550-4174-beb4-8f2faa9ba3bb to disappear Mar 20 21:34:20.275: INFO: Pod pod-secrets-cf749c8e-e550-4174-beb4-8f2faa9ba3bb no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:34:20.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3876" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1567,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:34:20.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 20 21:34:20.325: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Mar 20 21:34:22.384: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:34:23.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9076" for this suite. 
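The ReplicationController test creates a quota that admits only two pods, asks the controller for three, and checks that a failure condition is surfaced on the RC status and cleared again after scaling down. A sketch of the same flow with kubectl; the names (quota-demo, condition-demo) and the pause image tag are illustrative:

kubectl create namespace quota-demo
kubectl create quota condition-demo --hard=pods=2 -n quota-demo
cat <<'EOF' | kubectl apply -n quota-demo -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-demo
spec:
  replicas: 3
  selector:
    app: quota-demo
  template:
    metadata:
      labels:
        app: quota-demo
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
EOF
# The third pod is rejected by the quota, so a ReplicaFailure condition appears:
kubectl get rc condition-demo -n quota-demo \
  -o jsonpath='{.status.conditions[?(@.type=="ReplicaFailure")].message}'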
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":103,"skipped":1604,"failed":0} SS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:34:23.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-8268 STEP: creating replication controller nodeport-test in namespace services-8268 I0320 21:34:23.569672 7 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-8268, replica count: 2 I0320 21:34:26.620064 7 runners.go:189] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0320 21:34:29.620339 7 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 20 21:34:29.620: INFO: Creating new exec pod Mar 20 21:34:34.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8268 execpod4dgv7 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Mar 20 21:34:34.860: INFO: stderr: "I0320 21:34:34.794951 896 log.go:172] (0xc00020d080) (0xc00075fb80) Create stream\nI0320 21:34:34.795034 896 log.go:172] (0xc00020d080) (0xc00075fb80) Stream added, broadcasting: 1\nI0320 21:34:34.798277 896 log.go:172] (0xc00020d080) Reply frame received for 1\nI0320 21:34:34.798336 896 log.go:172] (0xc00020d080) (0xc0009c4000) Create stream\nI0320 21:34:34.798350 896 log.go:172] (0xc00020d080) (0xc0009c4000) Stream added, broadcasting: 3\nI0320 21:34:34.799370 896 log.go:172] (0xc00020d080) Reply frame received for 3\nI0320 21:34:34.799403 896 log.go:172] (0xc00020d080) (0xc00075fd60) Create stream\nI0320 21:34:34.799411 896 log.go:172] (0xc00020d080) (0xc00075fd60) Stream added, broadcasting: 5\nI0320 21:34:34.800367 896 log.go:172] (0xc00020d080) Reply frame received for 5\nI0320 21:34:34.853849 896 log.go:172] (0xc00020d080) Data frame received for 5\nI0320 21:34:34.853892 896 log.go:172] (0xc00075fd60) (5) Data frame handling\nI0320 21:34:34.853919 896 log.go:172] (0xc00075fd60) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0320 21:34:34.854243 896 log.go:172] (0xc00020d080) Data frame received for 5\nI0320 21:34:34.854272 896 log.go:172] (0xc00075fd60) (5) Data frame handling\nI0320 21:34:34.854289 896 log.go:172] (0xc00075fd60) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0320 21:34:34.854710 896 log.go:172] (0xc00020d080) Data frame received for 3\nI0320 21:34:34.854739 896 log.go:172] (0xc0009c4000) (3) Data frame handling\nI0320 21:34:34.854760 896 log.go:172] 
(0xc00020d080) Data frame received for 5\nI0320 21:34:34.854773 896 log.go:172] (0xc00075fd60) (5) Data frame handling\nI0320 21:34:34.856745 896 log.go:172] (0xc00020d080) Data frame received for 1\nI0320 21:34:34.856761 896 log.go:172] (0xc00075fb80) (1) Data frame handling\nI0320 21:34:34.856775 896 log.go:172] (0xc00075fb80) (1) Data frame sent\nI0320 21:34:34.856790 896 log.go:172] (0xc00020d080) (0xc00075fb80) Stream removed, broadcasting: 1\nI0320 21:34:34.857078 896 log.go:172] (0xc00020d080) Go away received\nI0320 21:34:34.857261 896 log.go:172] (0xc00020d080) (0xc00075fb80) Stream removed, broadcasting: 1\nI0320 21:34:34.857281 896 log.go:172] (0xc00020d080) (0xc0009c4000) Stream removed, broadcasting: 3\nI0320 21:34:34.857292 896 log.go:172] (0xc00020d080) (0xc00075fd60) Stream removed, broadcasting: 5\n" Mar 20 21:34:34.861: INFO: stdout: "" Mar 20 21:34:34.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8268 execpod4dgv7 -- /bin/sh -x -c nc -zv -t -w 2 10.103.163.81 80' Mar 20 21:34:35.063: INFO: stderr: "I0320 21:34:34.990582 918 log.go:172] (0xc000a22370) (0xc00094a1e0) Create stream\nI0320 21:34:34.991467 918 log.go:172] (0xc000a22370) (0xc00094a1e0) Stream added, broadcasting: 1\nI0320 21:34:34.994658 918 log.go:172] (0xc000a22370) Reply frame received for 1\nI0320 21:34:34.994726 918 log.go:172] (0xc000a22370) (0xc0005ec780) Create stream\nI0320 21:34:34.994748 918 log.go:172] (0xc000a22370) (0xc0005ec780) Stream added, broadcasting: 3\nI0320 21:34:34.995759 918 log.go:172] (0xc000a22370) Reply frame received for 3\nI0320 21:34:34.995818 918 log.go:172] (0xc000a22370) (0xc000315540) Create stream\nI0320 21:34:34.995844 918 log.go:172] (0xc000a22370) (0xc000315540) Stream added, broadcasting: 5\nI0320 21:34:34.996979 918 log.go:172] (0xc000a22370) Reply frame received for 5\nI0320 21:34:35.056182 918 log.go:172] (0xc000a22370) Data frame received for 5\nI0320 21:34:35.056231 918 log.go:172] (0xc000a22370) Data frame received for 3\nI0320 21:34:35.056277 918 log.go:172] (0xc0005ec780) (3) Data frame handling\nI0320 21:34:35.056320 918 log.go:172] (0xc000315540) (5) Data frame handling\nI0320 21:34:35.056352 918 log.go:172] (0xc000315540) (5) Data frame sent\nI0320 21:34:35.056372 918 log.go:172] (0xc000a22370) Data frame received for 5\nI0320 21:34:35.056389 918 log.go:172] (0xc000315540) (5) Data frame handling\n+ nc -zv -t -w 2 10.103.163.81 80\nConnection to 10.103.163.81 80 port [tcp/http] succeeded!\nI0320 21:34:35.058041 918 log.go:172] (0xc000a22370) Data frame received for 1\nI0320 21:34:35.058077 918 log.go:172] (0xc00094a1e0) (1) Data frame handling\nI0320 21:34:35.058111 918 log.go:172] (0xc00094a1e0) (1) Data frame sent\nI0320 21:34:35.058136 918 log.go:172] (0xc000a22370) (0xc00094a1e0) Stream removed, broadcasting: 1\nI0320 21:34:35.058167 918 log.go:172] (0xc000a22370) Go away received\nI0320 21:34:35.058583 918 log.go:172] (0xc000a22370) (0xc00094a1e0) Stream removed, broadcasting: 1\nI0320 21:34:35.058613 918 log.go:172] (0xc000a22370) (0xc0005ec780) Stream removed, broadcasting: 3\nI0320 21:34:35.058632 918 log.go:172] (0xc000a22370) (0xc000315540) Stream removed, broadcasting: 5\n" Mar 20 21:34:35.063: INFO: stdout: "" Mar 20 21:34:35.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8268 execpod4dgv7 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 30437' Mar 20 21:34:35.258: INFO: stderr: "I0320 21:34:35.187386 940 log.go:172] (0xc000a88000) 
(0xc0006bc8c0) Create stream\nI0320 21:34:35.187463 940 log.go:172] (0xc000a88000) (0xc0006bc8c0) Stream added, broadcasting: 1\nI0320 21:34:35.192319 940 log.go:172] (0xc000a88000) Reply frame received for 1\nI0320 21:34:35.192415 940 log.go:172] (0xc000a88000) (0xc00053b680) Create stream\nI0320 21:34:35.192460 940 log.go:172] (0xc000a88000) (0xc00053b680) Stream added, broadcasting: 3\nI0320 21:34:35.194045 940 log.go:172] (0xc000a88000) Reply frame received for 3\nI0320 21:34:35.194087 940 log.go:172] (0xc000a88000) (0xc000725e00) Create stream\nI0320 21:34:35.194097 940 log.go:172] (0xc000a88000) (0xc000725e00) Stream added, broadcasting: 5\nI0320 21:34:35.195278 940 log.go:172] (0xc000a88000) Reply frame received for 5\nI0320 21:34:35.253017 940 log.go:172] (0xc000a88000) Data frame received for 3\nI0320 21:34:35.253046 940 log.go:172] (0xc00053b680) (3) Data frame handling\nI0320 21:34:35.253080 940 log.go:172] (0xc000a88000) Data frame received for 5\nI0320 21:34:35.253107 940 log.go:172] (0xc000725e00) (5) Data frame handling\nI0320 21:34:35.253258 940 log.go:172] (0xc000725e00) (5) Data frame sent\nI0320 21:34:35.253268 940 log.go:172] (0xc000a88000) Data frame received for 5\nI0320 21:34:35.253273 940 log.go:172] (0xc000725e00) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 30437\nConnection to 172.17.0.10 30437 port [tcp/30437] succeeded!\nI0320 21:34:35.254534 940 log.go:172] (0xc000a88000) Data frame received for 1\nI0320 21:34:35.254552 940 log.go:172] (0xc0006bc8c0) (1) Data frame handling\nI0320 21:34:35.254567 940 log.go:172] (0xc0006bc8c0) (1) Data frame sent\nI0320 21:34:35.254576 940 log.go:172] (0xc000a88000) (0xc0006bc8c0) Stream removed, broadcasting: 1\nI0320 21:34:35.254836 940 log.go:172] (0xc000a88000) (0xc0006bc8c0) Stream removed, broadcasting: 1\nI0320 21:34:35.254849 940 log.go:172] (0xc000a88000) (0xc00053b680) Stream removed, broadcasting: 3\nI0320 21:34:35.254855 940 log.go:172] (0xc000a88000) (0xc000725e00) Stream removed, broadcasting: 5\n" Mar 20 21:34:35.258: INFO: stdout: "" Mar 20 21:34:35.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8268 execpod4dgv7 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 30437' Mar 20 21:34:35.476: INFO: stderr: "I0320 21:34:35.394648 962 log.go:172] (0xc0005006e0) (0xc000a22000) Create stream\nI0320 21:34:35.394729 962 log.go:172] (0xc0005006e0) (0xc000a22000) Stream added, broadcasting: 1\nI0320 21:34:35.397890 962 log.go:172] (0xc0005006e0) Reply frame received for 1\nI0320 21:34:35.397927 962 log.go:172] (0xc0005006e0) (0xc0006bfa40) Create stream\nI0320 21:34:35.397939 962 log.go:172] (0xc0005006e0) (0xc0006bfa40) Stream added, broadcasting: 3\nI0320 21:34:35.399057 962 log.go:172] (0xc0005006e0) Reply frame received for 3\nI0320 21:34:35.399114 962 log.go:172] (0xc0005006e0) (0xc000a220a0) Create stream\nI0320 21:34:35.399128 962 log.go:172] (0xc0005006e0) (0xc000a220a0) Stream added, broadcasting: 5\nI0320 21:34:35.400066 962 log.go:172] (0xc0005006e0) Reply frame received for 5\nI0320 21:34:35.468721 962 log.go:172] (0xc0005006e0) Data frame received for 5\nI0320 21:34:35.468741 962 log.go:172] (0xc000a220a0) (5) Data frame handling\nI0320 21:34:35.468749 962 log.go:172] (0xc000a220a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.8 30437\nConnection to 172.17.0.8 30437 port [tcp/30437] succeeded!\nI0320 21:34:35.468876 962 log.go:172] (0xc0005006e0) Data frame received for 5\nI0320 21:34:35.468893 962 log.go:172] (0xc000a220a0) (5) Data frame 
handling\nI0320 21:34:35.468938 962 log.go:172] (0xc0005006e0) Data frame received for 3\nI0320 21:34:35.468967 962 log.go:172] (0xc0006bfa40) (3) Data frame handling\nI0320 21:34:35.470607 962 log.go:172] (0xc0005006e0) Data frame received for 1\nI0320 21:34:35.470639 962 log.go:172] (0xc000a22000) (1) Data frame handling\nI0320 21:34:35.470672 962 log.go:172] (0xc000a22000) (1) Data frame sent\nI0320 21:34:35.470693 962 log.go:172] (0xc0005006e0) (0xc000a22000) Stream removed, broadcasting: 1\nI0320 21:34:35.470804 962 log.go:172] (0xc0005006e0) Go away received\nI0320 21:34:35.471134 962 log.go:172] (0xc0005006e0) (0xc000a22000) Stream removed, broadcasting: 1\nI0320 21:34:35.471160 962 log.go:172] (0xc0005006e0) (0xc0006bfa40) Stream removed, broadcasting: 3\nI0320 21:34:35.471194 962 log.go:172] (0xc0005006e0) (0xc000a220a0) Stream removed, broadcasting: 5\n" Mar 20 21:34:35.476: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:34:35.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8268" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.084 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":104,"skipped":1606,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:34:35.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 20 21:34:35.554: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-c94c9e80-79d3-48c2-bbab-90885432242b" in namespace "security-context-test-8794" to be "success or failure" Mar 20 21:34:35.560: INFO: Pod "alpine-nnp-false-c94c9e80-79d3-48c2-bbab-90885432242b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127896ms Mar 20 21:34:37.564: INFO: Pod "alpine-nnp-false-c94c9e80-79d3-48c2-bbab-90885432242b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010592358s Mar 20 21:34:39.569: INFO: Pod "alpine-nnp-false-c94c9e80-79d3-48c2-bbab-90885432242b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.014901984s Mar 20 21:34:39.569: INFO: Pod "alpine-nnp-false-c94c9e80-79d3-48c2-bbab-90885432242b" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:34:39.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8794" for this suite. •{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":1623,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:34:39.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command Mar 20 21:34:39.640: INFO: Waiting up to 5m0s for pod "var-expansion-6cd951bb-2b54-498e-a886-7bce73d6fbcc" in namespace "var-expansion-1688" to be "success or failure" Mar 20 21:34:39.644: INFO: Pod "var-expansion-6cd951bb-2b54-498e-a886-7bce73d6fbcc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.580732ms Mar 20 21:34:41.743: INFO: Pod "var-expansion-6cd951bb-2b54-498e-a886-7bce73d6fbcc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102944101s Mar 20 21:34:43.768: INFO: Pod "var-expansion-6cd951bb-2b54-498e-a886-7bce73d6fbcc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.127226432s STEP: Saw pod success Mar 20 21:34:43.768: INFO: Pod "var-expansion-6cd951bb-2b54-498e-a886-7bce73d6fbcc" satisfied condition "success or failure" Mar 20 21:34:43.771: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-6cd951bb-2b54-498e-a886-7bce73d6fbcc container dapi-container: STEP: delete the pod Mar 20 21:34:43.804: INFO: Waiting for pod var-expansion-6cd951bb-2b54-498e-a886-7bce73d6fbcc to disappear Mar 20 21:34:43.815: INFO: Pod var-expansion-6cd951bb-2b54-498e-a886-7bce73d6fbcc no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:34:43.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1688" for this suite. 
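The Variable Expansion test runs a container whose command references an environment variable with the $(VAR) syntax and verifies the expanded value in the container output. A minimal manifest sketch (names and values are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "test-value"
    # $(MESSAGE) is expanded by the kubelet before the shell ever runs
    command: ["sh", "-c", "echo $(MESSAGE)"]
EOF
kubectl logs var-expansion-demo   # prints: test-value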
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1624,"failed":0} SSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:34:43.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-2183 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-2183 STEP: Deleting pre-stop pod Mar 20 21:34:57.152: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:34:57.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-2183" for this suite. 
• [SLOW TEST:13.394 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":107,"skipped":1630,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:34:57.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 20 21:34:57.265: INFO: Creating ReplicaSet my-hostname-basic-6a844469-2617-43d0-8ece-d77ffac57e4a Mar 20 21:34:57.285: INFO: Pod name my-hostname-basic-6a844469-2617-43d0-8ece-d77ffac57e4a: Found 0 pods out of 1 Mar 20 21:35:02.289: INFO: Pod name my-hostname-basic-6a844469-2617-43d0-8ece-d77ffac57e4a: Found 1 pods out of 1 Mar 20 21:35:02.289: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-6a844469-2617-43d0-8ece-d77ffac57e4a" is running Mar 20 21:35:02.292: INFO: Pod "my-hostname-basic-6a844469-2617-43d0-8ece-d77ffac57e4a-hvg4d" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-20 21:34:57 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-20 21:34:59 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-20 21:34:59 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-20 21:34:57 +0000 UTC Reason: Message:}]) Mar 20 21:35:02.292: INFO: Trying to dial the pod Mar 20 21:35:07.303: INFO: Controller my-hostname-basic-6a844469-2617-43d0-8ece-d77ffac57e4a: Got expected result from replica 1 [my-hostname-basic-6a844469-2617-43d0-8ece-d77ffac57e4a-hvg4d]: "my-hostname-basic-6a844469-2617-43d0-8ece-d77ffac57e4a-hvg4d", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:35:07.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-461" for this suite. 
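The ReplicaSet test runs one replica of a "serve hostname" image and dials the pod, expecting it to answer with its own pod name, which is what the "Got expected result from replica 1" line above confirms. A minimal sketch; the image tag and names are illustrative, and the test itself uses generated names:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-hostname-basic
  template:
    metadata:
      labels:
        app: my-hostname-basic
    spec:
      containers:
      - name: serve-hostname
        image: k8s.gcr.io/serve_hostname:1.1   # answers HTTP with the pod name
        ports:
        - containerPort: 9376
EOF
kubectl wait --for=condition=Ready pod -l app=my-hostname-basic
# The reply should equal the pod's own name:
kubectl run curl --rm -it --restart=Never --image=busybox -- \
  wget -qO- "http://$(kubectl get pod -l app=my-hostname-basic -o jsonpath='{.items[0].status.podIP}'):9376"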
• [SLOW TEST:10.093 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":108,"skipped":1648,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:35:07.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 20 21:35:07.390: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Mar 20 21:35:07.396: INFO: Number of nodes with available pods: 0 Mar 20 21:35:07.396: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Mar 20 21:35:07.496: INFO: Number of nodes with available pods: 0 Mar 20 21:35:07.496: INFO: Node jerma-worker2 is running more than one daemon pod Mar 20 21:35:08.501: INFO: Number of nodes with available pods: 0 Mar 20 21:35:08.501: INFO: Node jerma-worker2 is running more than one daemon pod Mar 20 21:35:09.500: INFO: Number of nodes with available pods: 0 Mar 20 21:35:09.500: INFO: Node jerma-worker2 is running more than one daemon pod Mar 20 21:35:10.500: INFO: Number of nodes with available pods: 0 Mar 20 21:35:10.500: INFO: Node jerma-worker2 is running more than one daemon pod Mar 20 21:35:11.500: INFO: Number of nodes with available pods: 1 Mar 20 21:35:11.500: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Mar 20 21:35:11.531: INFO: Number of nodes with available pods: 1 Mar 20 21:35:11.531: INFO: Number of running nodes: 0, number of available pods: 1 Mar 20 21:35:12.541: INFO: Number of nodes with available pods: 0 Mar 20 21:35:12.541: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Mar 20 21:35:12.601: INFO: Number of nodes with available pods: 0 Mar 20 21:35:12.601: INFO: Node jerma-worker2 is running more than one daemon pod Mar 20 21:35:13.606: INFO: Number of nodes with available pods: 0 Mar 20 21:35:13.606: INFO: Node jerma-worker2 is running more than one daemon pod Mar 20 21:35:14.606: INFO: Number of nodes with available pods: 0 Mar 20 21:35:14.606: INFO: Node jerma-worker2 is running more than one daemon pod Mar 20 21:35:15.606: INFO: Number of nodes with available pods: 0 Mar 20 21:35:15.606: INFO: Node jerma-worker2 is running more than one 
daemon pod Mar 20 21:35:16.606: INFO: Number of nodes with available pods: 0 Mar 20 21:35:16.606: INFO: Node jerma-worker2 is running more than one daemon pod Mar 20 21:35:17.606: INFO: Number of nodes with available pods: 0 Mar 20 21:35:17.606: INFO: Node jerma-worker2 is running more than one daemon pod Mar 20 21:35:18.606: INFO: Number of nodes with available pods: 0 Mar 20 21:35:18.606: INFO: Node jerma-worker2 is running more than one daemon pod Mar 20 21:35:19.606: INFO: Number of nodes with available pods: 0 Mar 20 21:35:19.606: INFO: Node jerma-worker2 is running more than one daemon pod Mar 20 21:35:20.606: INFO: Number of nodes with available pods: 0 Mar 20 21:35:20.606: INFO: Node jerma-worker2 is running more than one daemon pod Mar 20 21:35:21.606: INFO: Number of nodes with available pods: 0 Mar 20 21:35:21.606: INFO: Node jerma-worker2 is running more than one daemon pod Mar 20 21:35:22.606: INFO: Number of nodes with available pods: 1 Mar 20 21:35:22.606: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4558, will wait for the garbage collector to delete the pods Mar 20 21:35:22.672: INFO: Deleting DaemonSet.extensions daemon-set took: 6.937594ms Mar 20 21:35:22.772: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.199549ms Mar 20 21:35:25.875: INFO: Number of nodes with available pods: 0 Mar 20 21:35:25.875: INFO: Number of running nodes: 0, number of available pods: 0 Mar 20 21:35:25.878: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4558/daemonsets","resourceVersion":"1385312"},"items":null} Mar 20 21:35:25.881: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4558/pods","resourceVersion":"1385312"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:35:25.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4558" for this suite. 
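The DaemonSet test drives scheduling purely through node labels: it creates a DaemonSet with a nodeSelector, then flips a node label and watches the daemon pod appear and disappear, which is the loop of "Number of nodes with available pods" lines above. A sketch of the same flow with kubectl; the label key/value (color=blue/green) is illustrative, the node name is taken from the run:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-demo
spec:
  selector:
    matchLabels:
      app: daemon-demo
  template:
    metadata:
      labels:
        app: daemon-demo
    spec:
      nodeSelector:
        color: blue          # only nodes labeled color=blue get a daemon pod
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1
EOF
kubectl label node jerma-worker2 color=blue                # daemon pod launches
kubectl label node jerma-worker2 color=green --overwrite   # daemon pod is removed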
• [SLOW TEST:18.626 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":109,"skipped":1658,"failed":0} S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:35:25.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 20 21:35:25.997: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 20 21:35:26.035: INFO: Waiting for terminating namespaces to be deleted... Mar 20 21:35:26.047: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Mar 20 21:35:26.063: INFO: tester from prestop-2183 started at 2020-03-20 21:34:48 +0000 UTC (1 container statuses recorded) Mar 20 21:35:26.063: INFO: Container tester ready: false, restart count 0 Mar 20 21:35:26.063: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 20 21:35:26.063: INFO: Container kindnet-cni ready: true, restart count 0 Mar 20 21:35:26.063: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 20 21:35:26.063: INFO: Container kube-proxy ready: true, restart count 0 Mar 20 21:35:26.063: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Mar 20 21:35:26.068: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 20 21:35:26.068: INFO: Container kindnet-cni ready: true, restart count 0 Mar 20 21:35:26.068: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 20 21:35:26.068: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-b422b963-3458-45ae-8196-908e3c24b5c5 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-b422b963-3458-45ae-8196-908e3c24b5c5 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-b422b963-3458-45ae-8196-908e3c24b5c5 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:40:34.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6252" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:308.357 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":110,"skipped":1659,"failed":0} SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:40:34.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-nrnv STEP: Creating a pod to test atomic-volume-subpath Mar 20 21:40:34.394: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-nrnv" in namespace "subpath-5486" to be "success or failure" Mar 20 21:40:34.397: INFO: Pod "pod-subpath-test-secret-nrnv": Phase="Pending", Reason="", readiness=false. Elapsed: 3.315672ms Mar 20 21:40:36.400: INFO: Pod "pod-subpath-test-secret-nrnv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006237441s Mar 20 21:40:38.405: INFO: Pod "pod-subpath-test-secret-nrnv": Phase="Running", Reason="", readiness=true. Elapsed: 4.01080797s Mar 20 21:40:40.409: INFO: Pod "pod-subpath-test-secret-nrnv": Phase="Running", Reason="", readiness=true. Elapsed: 6.014373195s Mar 20 21:40:42.412: INFO: Pod "pod-subpath-test-secret-nrnv": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.017705529s Mar 20 21:40:44.416: INFO: Pod "pod-subpath-test-secret-nrnv": Phase="Running", Reason="", readiness=true. Elapsed: 10.021960845s Mar 20 21:40:46.419: INFO: Pod "pod-subpath-test-secret-nrnv": Phase="Running", Reason="", readiness=true. Elapsed: 12.02503743s Mar 20 21:40:48.424: INFO: Pod "pod-subpath-test-secret-nrnv": Phase="Running", Reason="", readiness=true. Elapsed: 14.029462291s Mar 20 21:40:50.428: INFO: Pod "pod-subpath-test-secret-nrnv": Phase="Running", Reason="", readiness=true. Elapsed: 16.033544118s Mar 20 21:40:52.432: INFO: Pod "pod-subpath-test-secret-nrnv": Phase="Running", Reason="", readiness=true. Elapsed: 18.037691675s Mar 20 21:40:54.436: INFO: Pod "pod-subpath-test-secret-nrnv": Phase="Running", Reason="", readiness=true. Elapsed: 20.041777427s Mar 20 21:40:56.440: INFO: Pod "pod-subpath-test-secret-nrnv": Phase="Running", Reason="", readiness=true. Elapsed: 22.045928431s Mar 20 21:40:58.444: INFO: Pod "pod-subpath-test-secret-nrnv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.0499695s STEP: Saw pod success Mar 20 21:40:58.444: INFO: Pod "pod-subpath-test-secret-nrnv" satisfied condition "success or failure" Mar 20 21:40:58.448: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-secret-nrnv container test-container-subpath-secret-nrnv: STEP: delete the pod Mar 20 21:40:58.528: INFO: Waiting for pod pod-subpath-test-secret-nrnv to disappear Mar 20 21:40:58.535: INFO: Pod pod-subpath-test-secret-nrnv no longer exists STEP: Deleting pod pod-subpath-test-secret-nrnv Mar 20 21:40:58.535: INFO: Deleting pod "pod-subpath-test-secret-nrnv" in namespace "subpath-5486" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:40:58.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5486" for this suite. 
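The Subpath test mounts a single key of a secret via subPath and has the container read it while the test runs. A minimal manifest sketch with illustrative names; note that, as a known Kubernetes behavior, subPath mounts do not receive later updates to the secret:

kubectl create secret generic example-secret --from-literal=secret-key=secret-value
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "cat /mnt/secret-key"]
    volumeMounts:
    - name: sec
      mountPath: /mnt/secret-key
      subPath: secret-key     # mount one key of the secret as a single file
  volumes:
  - name: sec
    secret:
      secretName: example-secret
EOF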
• [SLOW TEST:24.252 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":111,"skipped":1666,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:40:58.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 20 21:40:58.658: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6138 /api/v1/namespaces/watch-6138/configmaps/e2e-watch-test-label-changed 183a8265-8ab1-4b0b-b564-8fc89380de07 1386373 0 2020-03-20 21:40:58 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 20 21:40:58.658: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6138 /api/v1/namespaces/watch-6138/configmaps/e2e-watch-test-label-changed 183a8265-8ab1-4b0b-b564-8fc89380de07 1386374 0 2020-03-20 21:40:58 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 20 21:40:58.658: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6138 /api/v1/namespaces/watch-6138/configmaps/e2e-watch-test-label-changed 183a8265-8ab1-4b0b-b564-8fc89380de07 1386375 0 2020-03-20 21:40:58 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Mar 20 21:41:08.694: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6138 /api/v1/namespaces/watch-6138/configmaps/e2e-watch-test-label-changed 183a8265-8ab1-4b0b-b564-8fc89380de07 1386417 0 2020-03-20 21:40:58 +0000 UTC 
map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 20 21:41:08.694: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6138 /api/v1/namespaces/watch-6138/configmaps/e2e-watch-test-label-changed 183a8265-8ab1-4b0b-b564-8fc89380de07 1386418 0 2020-03-20 21:40:58 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Mar 20 21:41:08.694: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6138 /api/v1/namespaces/watch-6138/configmaps/e2e-watch-test-label-changed 183a8265-8ab1-4b0b-b564-8fc89380de07 1386419 0 2020-03-20 21:40:58 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:41:08.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6138" for this suite. • [SLOW TEST:10.176 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":112,"skipped":1686,"failed":0} SS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:41:08.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 20 21:41:08.769: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:41:12.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3420" for this suite. 
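kubectl hides the transport, but the Pods test above retrieves container logs over a websocket connection to the same log subresource the CLI uses. A sketch of hitting that subresource directly through kubectl proxy (plain HTTP here; the websocket upgrade is just another client option), with illustrative pod and namespace names:

kubectl proxy --port=8001 &
curl "http://127.0.0.1:8001/api/v1/namespaces/default/pods/example-pod/log?follow=false"
# equivalent CLI form:
kubectl logs example-pod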
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":113,"skipped":1688,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:41:12.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 20 21:41:12.954: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:41:18.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1203" for this suite. • [SLOW TEST:6.097 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":114,"skipped":1693,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:41:18.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 20 21:41:19.381: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set 
Mar 20 21:41:21.390: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720337279, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720337279, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720337279, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720337279, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 20 21:41:24.450: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 20 21:41:24.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:41:25.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-6045" for this suite. 
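The conversion test just completed works by pointing the CRD's spec.conversion at the webhook service deployed above; the apiserver then calls the webhook whenever a custom resource is read or written at a version other than its storage version, which is how "v2 custom resource should be converted" is exercised. A sketch of that stanza using apiextensions/v1 types; the path literal and helper name are placeholders, while the service coordinates are taken from this run's log:

package sketch

import apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"

// newConversion builds a spec.conversion stanza that routes conversion
// requests to the in-cluster webhook service. caCert is the PEM bundle
// for the CA that signed the webhook's serving certificate.
func newConversion(caCert []byte) *apiextensionsv1.CustomResourceConversion {
	path := "/crdconvert" // placeholder request path
	return &apiextensionsv1.CustomResourceConversion{
		Strategy: apiextensionsv1.WebhookConverter,
		Webhook: &apiextensionsv1.WebhookConversion{
			ClientConfig: &apiextensionsv1.WebhookClientConfig{
				Service: &apiextensionsv1.ServiceReference{
					Namespace: "crd-webhook-6045", // namespace from this run
					Name:      "e2e-test-crd-conversion-webhook",
					Path:      &path,
				},
				CABundle: caCert,
			},
			ConversionReviewVersions: []string{"v1", "v1beta1"},
		},
	}
}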
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:6.757 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":115,"skipped":1701,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:41:25.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5911.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-5911.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5911.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-5911.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5911.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5911.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-5911.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5911.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-5911.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5911.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 20 21:41:32.098: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:32.102: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:32.104: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:32.108: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:32.116: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:32.120: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:32.123: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5911.svc.cluster.local from pod 
dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:32.126: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:32.131: INFO: Lookups using dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5911.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5911.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local jessie_udp@dns-test-service-2.dns-5911.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5911.svc.cluster.local] Mar 20 21:41:37.136: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:37.139: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:37.143: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:37.146: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:37.157: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:37.161: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:37.164: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:37.168: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:37.174: INFO: Lookups using dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-5911.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5911.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local jessie_udp@dns-test-service-2.dns-5911.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5911.svc.cluster.local] Mar 20 21:41:42.136: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:42.151: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:42.153: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:42.156: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:42.164: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:42.167: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:42.169: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:42.172: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:42.179: INFO: Lookups using dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5911.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5911.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local jessie_udp@dns-test-service-2.dns-5911.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5911.svc.cluster.local] Mar 20 21:41:47.182: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:47.186: INFO: Unable to read 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:47.197: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:47.200: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:47.209: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:47.211: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:47.214: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:47.217: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:47.223: INFO: Lookups using dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5911.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5911.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local jessie_udp@dns-test-service-2.dns-5911.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5911.svc.cluster.local] Mar 20 21:41:52.136: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:52.143: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:52.163: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:52.166: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested 
resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:52.199: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:52.202: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:52.205: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:52.208: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:52.215: INFO: Lookups using dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5911.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5911.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local jessie_udp@dns-test-service-2.dns-5911.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5911.svc.cluster.local] Mar 20 21:41:57.136: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:57.139: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:57.142: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:57.144: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:57.152: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:57.155: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:57.157: INFO: Unable to read 
jessie_udp@dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:57.160: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5911.svc.cluster.local from pod dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a: the server could not find the requested resource (get pods dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a) Mar 20 21:41:57.164: INFO: Lookups using dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5911.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5911.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5911.svc.cluster.local jessie_udp@dns-test-service-2.dns-5911.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5911.svc.cluster.local] Mar 20 21:42:02.198: INFO: DNS probes using dns-5911/dns-test-33cfffa2-6f03-4603-8ef3-ae6d0aa7bd3a succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:42:02.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5911" for this suite. • [SLOW TEST:37.010 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":116,"skipped":1718,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:42:02.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-f6f38a94-df8f-4a5d-9e30-7fa45dd86582 STEP: Creating secret with name s-test-opt-upd-2059ec9b-52b7-4e26-b9fd-27b4270cf14a STEP: Creating the pod STEP: Deleting secret s-test-opt-del-f6f38a94-df8f-4a5d-9e30-7fa45dd86582 STEP: Updating secret s-test-opt-upd-2059ec9b-52b7-4e26-b9fd-27b4270cf14a STEP: Creating secret with name s-test-opt-create-d267de32-c7dd-4c24-b737-c1253b10c704 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:43:31.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"projected-7086" for this suite. • [SLOW TEST:88.606 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":117,"skipped":1727,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:43:31.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 20 21:43:31.395: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"8b44d548-8318-41b0-ac88-1500f704aec2", Controller:(*bool)(0xc000763ee2), BlockOwnerDeletion:(*bool)(0xc000763ee3)}} Mar 20 21:43:31.434: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"b0c1f11f-90e6-4a43-8319-4e0283242c07", Controller:(*bool)(0xc004c7d1da), BlockOwnerDeletion:(*bool)(0xc004c7d1db)}} Mar 20 21:43:31.449: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"50085ee8-9b35-44e8-85c8-5f95f2789042", Controller:(*bool)(0xc0037181a2), BlockOwnerDeletion:(*bool)(0xc0037181a3)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:43:36.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3561" for this suite. 
• [SLOW TEST:5.297 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":118,"skipped":1729,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:43:36.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:43:41.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2625" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":119,"skipped":1755,"failed":0} SSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:43:41.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args Mar 20 21:43:41.375: INFO: Waiting up to 5m0s for pod "var-expansion-1494b4d0-cba2-4258-816c-d984029dc715" in namespace "var-expansion-5268" to be "success or failure" Mar 20 21:43:41.383: INFO: Pod "var-expansion-1494b4d0-cba2-4258-816c-d984029dc715": Phase="Pending", Reason="", readiness=false. Elapsed: 7.606182ms Mar 20 21:43:43.410: INFO: Pod "var-expansion-1494b4d0-cba2-4258-816c-d984029dc715": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035153363s Mar 20 21:43:45.414: INFO: Pod "var-expansion-1494b4d0-cba2-4258-816c-d984029dc715": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.038910581s STEP: Saw pod success Mar 20 21:43:45.414: INFO: Pod "var-expansion-1494b4d0-cba2-4258-816c-d984029dc715" satisfied condition "success or failure" Mar 20 21:43:45.416: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-1494b4d0-cba2-4258-816c-d984029dc715 container dapi-container: STEP: delete the pod Mar 20 21:43:45.432: INFO: Waiting for pod var-expansion-1494b4d0-cba2-4258-816c-d984029dc715 to disappear Mar 20 21:43:45.437: INFO: Pod var-expansion-1494b4d0-cba2-4258-816c-d984029dc715 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:43:45.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5268" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":1761,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:43:45.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-00232009-d146-461e-b875-a9ce0b5dac55 STEP: Creating a pod to test consume configMaps Mar 20 21:43:45.541: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-537f2675-ca6a-4541-9fb8-172878e3d5ff" in namespace "projected-9940" to be "success or failure" Mar 20 21:43:45.545: INFO: Pod "pod-projected-configmaps-537f2675-ca6a-4541-9fb8-172878e3d5ff": Phase="Pending", Reason="", readiness=false. Elapsed: 3.63415ms Mar 20 21:43:47.563: INFO: Pod "pod-projected-configmaps-537f2675-ca6a-4541-9fb8-172878e3d5ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022072224s Mar 20 21:43:49.567: INFO: Pod "pod-projected-configmaps-537f2675-ca6a-4541-9fb8-172878e3d5ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025778056s STEP: Saw pod success Mar 20 21:43:49.567: INFO: Pod "pod-projected-configmaps-537f2675-ca6a-4541-9fb8-172878e3d5ff" satisfied condition "success or failure" Mar 20 21:43:49.570: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-537f2675-ca6a-4541-9fb8-172878e3d5ff container projected-configmap-volume-test: STEP: delete the pod Mar 20 21:43:49.624: INFO: Waiting for pod pod-projected-configmaps-537f2675-ca6a-4541-9fb8-172878e3d5ff to disappear Mar 20 21:43:49.629: INFO: Pod pod-projected-configmaps-537f2675-ca6a-4541-9fb8-172878e3d5ff no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:43:49.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9940" for this suite. 
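The "with mappings" variant just completed projects individual ConfigMap keys to chosen paths instead of mounting every key under its own name. A sketch of the volume source under test, assuming core/v1 types; the key and path literals are placeholders, not necessarily the test's actual values:

package sketch

import corev1 "k8s.io/api/core/v1"

// projectedConfigMapVolume remaps a single ConfigMap key to a chosen
// file path beneath the mount point.
func projectedConfigMapVolume(configMapName string) corev1.Volume {
	return corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",         // key inside the ConfigMap
							Path: "path/to/data-2", // file created under the mount
						}},
					},
				}},
			},
		},
	}
}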
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":1762,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:43:49.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:43:53.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-6284" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":122,"skipped":1781,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:43:53.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-38e78198-d5c7-4d8b-926b-75c06458697c STEP: Creating a pod to test consume configMaps Mar 20 21:43:53.914: INFO: Waiting up to 5m0s for pod "pod-configmaps-86dc25b8-a0d1-452f-8b53-6d1572e6e352" in namespace "configmap-3239" to be "success or failure" Mar 20 21:43:53.918: INFO: Pod "pod-configmaps-86dc25b8-a0d1-452f-8b53-6d1572e6e352": Phase="Pending", Reason="", readiness=false. Elapsed: 3.699076ms Mar 20 21:43:55.921: INFO: Pod "pod-configmaps-86dc25b8-a0d1-452f-8b53-6d1572e6e352": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007368163s Mar 20 21:43:57.925: INFO: Pod "pod-configmaps-86dc25b8-a0d1-452f-8b53-6d1572e6e352": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.0115885s STEP: Saw pod success Mar 20 21:43:57.926: INFO: Pod "pod-configmaps-86dc25b8-a0d1-452f-8b53-6d1572e6e352" satisfied condition "success or failure" Mar 20 21:43:57.929: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-86dc25b8-a0d1-452f-8b53-6d1572e6e352 container configmap-volume-test: STEP: delete the pod Mar 20 21:43:57.962: INFO: Waiting for pod pod-configmaps-86dc25b8-a0d1-452f-8b53-6d1572e6e352 to disappear Mar 20 21:43:57.978: INFO: Pod pod-configmaps-86dc25b8-a0d1-452f-8b53-6d1572e6e352 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:43:57.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3239" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":1782,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:43:57.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 20 21:43:58.058: INFO: Waiting up to 5m0s for pod "pod-10b4b8a8-a80d-4e52-b045-4858a79e7ed0" in namespace "emptydir-6604" to be "success or failure" Mar 20 21:43:58.068: INFO: Pod "pod-10b4b8a8-a80d-4e52-b045-4858a79e7ed0": Phase="Pending", Reason="", readiness=false. Elapsed: 9.402613ms Mar 20 21:44:00.084: INFO: Pod "pod-10b4b8a8-a80d-4e52-b045-4858a79e7ed0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025767768s Mar 20 21:44:02.088: INFO: Pod "pod-10b4b8a8-a80d-4e52-b045-4858a79e7ed0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030112367s STEP: Saw pod success Mar 20 21:44:02.088: INFO: Pod "pod-10b4b8a8-a80d-4e52-b045-4858a79e7ed0" satisfied condition "success or failure" Mar 20 21:44:02.092: INFO: Trying to get logs from node jerma-worker pod pod-10b4b8a8-a80d-4e52-b045-4858a79e7ed0 container test-container: STEP: delete the pod Mar 20 21:44:02.124: INFO: Waiting for pod pod-10b4b8a8-a80d-4e52-b045-4858a79e7ed0 to disappear Mar 20 21:44:02.138: INFO: Pod pod-10b4b8a8-a80d-4e52-b045-4858a79e7ed0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:44:02.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6604" for this suite. 
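The (root,0644,tmpfs) case just completed backs the emptyDir with memory, so the test container writes its 0644 file to tmpfs rather than node disk. A sketch of the volume, assuming core/v1 types:

package sketch

import corev1 "k8s.io/api/core/v1"

// tmpfsVolume is a memory-backed emptyDir; the kubelet mounts a tmpfs
// for it instead of allocating node-local disk.
var tmpfsVolume = corev1.Volume{
	Name: "test-volume",
	VolumeSource: corev1.VolumeSource{
		EmptyDir: &corev1.EmptyDirVolumeSource{
			Medium: corev1.StorageMediumMemory,
		},
	},
}

Omitting Medium (or leaving it as the empty-string default) falls back to node-local storage.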
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":1811,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:44:02.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-7w7z STEP: Creating a pod to test atomic-volume-subpath Mar 20 21:44:02.242: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-7w7z" in namespace "subpath-5875" to be "success or failure" Mar 20 21:44:02.246: INFO: Pod "pod-subpath-test-projected-7w7z": Phase="Pending", Reason="", readiness=false. Elapsed: 3.627605ms Mar 20 21:44:04.250: INFO: Pod "pod-subpath-test-projected-7w7z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007787983s Mar 20 21:44:06.254: INFO: Pod "pod-subpath-test-projected-7w7z": Phase="Running", Reason="", readiness=true. Elapsed: 4.01217999s Mar 20 21:44:08.272: INFO: Pod "pod-subpath-test-projected-7w7z": Phase="Running", Reason="", readiness=true. Elapsed: 6.029748718s Mar 20 21:44:10.276: INFO: Pod "pod-subpath-test-projected-7w7z": Phase="Running", Reason="", readiness=true. Elapsed: 8.033686425s Mar 20 21:44:12.279: INFO: Pod "pod-subpath-test-projected-7w7z": Phase="Running", Reason="", readiness=true. Elapsed: 10.037244707s Mar 20 21:44:14.283: INFO: Pod "pod-subpath-test-projected-7w7z": Phase="Running", Reason="", readiness=true. Elapsed: 12.041179972s Mar 20 21:44:16.286: INFO: Pod "pod-subpath-test-projected-7w7z": Phase="Running", Reason="", readiness=true. Elapsed: 14.043993162s Mar 20 21:44:18.290: INFO: Pod "pod-subpath-test-projected-7w7z": Phase="Running", Reason="", readiness=true. Elapsed: 16.047980966s Mar 20 21:44:20.295: INFO: Pod "pod-subpath-test-projected-7w7z": Phase="Running", Reason="", readiness=true. Elapsed: 18.052515926s Mar 20 21:44:22.299: INFO: Pod "pod-subpath-test-projected-7w7z": Phase="Running", Reason="", readiness=true. Elapsed: 20.05664806s Mar 20 21:44:24.302: INFO: Pod "pod-subpath-test-projected-7w7z": Phase="Running", Reason="", readiness=true. Elapsed: 22.060031666s Mar 20 21:44:26.306: INFO: Pod "pod-subpath-test-projected-7w7z": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.06346466s STEP: Saw pod success Mar 20 21:44:26.306: INFO: Pod "pod-subpath-test-projected-7w7z" satisfied condition "success or failure" Mar 20 21:44:26.309: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-projected-7w7z container test-container-subpath-projected-7w7z: STEP: delete the pod Mar 20 21:44:26.339: INFO: Waiting for pod pod-subpath-test-projected-7w7z to disappear Mar 20 21:44:26.349: INFO: Pod pod-subpath-test-projected-7w7z no longer exists STEP: Deleting pod pod-subpath-test-projected-7w7z Mar 20 21:44:26.349: INFO: Deleting pod "pod-subpath-test-projected-7w7z" in namespace "subpath-5875" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:44:26.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5875" for this suite. • [SLOW TEST:24.207 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":125,"skipped":1824,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:44:26.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-4253 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4253 to expose endpoints map[] Mar 20 21:44:26.502: INFO: Get endpoints failed (23.474173ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Mar 20 21:44:27.506: INFO: successfully validated that service multi-endpoint-test in namespace services-4253 exposes endpoints map[] (1.027233045s elapsed) STEP: Creating pod pod1 in namespace services-4253 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4253 to expose endpoints map[pod1:[100]] Mar 20 21:44:30.546: INFO: successfully validated that service multi-endpoint-test in namespace services-4253 exposes endpoints map[pod1:[100]] (3.032711558s elapsed) STEP: Creating pod pod2 in namespace services-4253 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4253 to expose endpoints map[pod1:[100] pod2:[101]] Mar 20 21:44:34.630: INFO: successfully validated that service multi-endpoint-test in namespace services-4253 exposes 
endpoints map[pod1:[100] pod2:[101]] (4.078343518s elapsed) STEP: Deleting pod pod1 in namespace services-4253 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4253 to expose endpoints map[pod2:[101]] Mar 20 21:44:35.670: INFO: successfully validated that service multi-endpoint-test in namespace services-4253 exposes endpoints map[pod2:[101]] (1.035941982s elapsed) STEP: Deleting pod pod2 in namespace services-4253 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4253 to expose endpoints map[] Mar 20 21:44:36.682: INFO: successfully validated that service multi-endpoint-test in namespace services-4253 exposes endpoints map[] (1.006587846s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:44:36.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4253" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:10.373 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":126,"skipped":1846,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:44:36.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:45:36.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5982" for this suite. 
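The probe test above turns on the key distinction between readiness and liveness: a failing readiness probe keeps the pod out of Service endpoints but never restarts the container, hence "never be ready and never restart". A sketch of such a probe, assuming core/v1 types; the field names follow current k8s.io/api, where releases before v1.22 spell ProbeHandler as Handler:

package sketch

import corev1 "k8s.io/api/core/v1"

// failingReadinessProbe can never succeed, so the pod stays Running
// but never becomes Ready; assign it to a container's ReadinessProbe.
func failingReadinessProbe() *corev1.Probe {
	return &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
		},
		InitialDelaySeconds: 5,
		PeriodSeconds:       5,
	}
}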
• [SLOW TEST:60.153 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":1929,"failed":0} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:45:36.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 20 21:45:36.981: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:45:36.986: INFO: Number of nodes with available pods: 0 Mar 20 21:45:36.986: INFO: Node jerma-worker is running more than one daemon pod Mar 20 21:45:37.991: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:45:37.995: INFO: Number of nodes with available pods: 0 Mar 20 21:45:37.995: INFO: Node jerma-worker is running more than one daemon pod Mar 20 21:45:38.991: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:45:38.995: INFO: Number of nodes with available pods: 0 Mar 20 21:45:38.995: INFO: Node jerma-worker is running more than one daemon pod Mar 20 21:45:39.991: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:45:39.994: INFO: Number of nodes with available pods: 1 Mar 20 21:45:39.994: INFO: Node jerma-worker2 is running more than one daemon pod Mar 20 21:45:40.989: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:45:40.992: INFO: Number of nodes with available pods: 2 Mar 20 21:45:40.992: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
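The repeated "can't tolerate node jerma-control-plane" lines above are expected, not an error: the control-plane node carries the node-role.kubernetes.io/master:NoSchedule taint and the test DaemonSet's pod template declares no matching toleration, so that node is simply excluded from the rollout accounting. A toleration that would admit it looks roughly like this (core/v1 sketch):

package sketch

import corev1 "k8s.io/api/core/v1"

// masterToleration, if appended to the DaemonSet's pod template
// tolerations, would let its pods schedule onto the tainted
// control-plane node as well.
var masterToleration = corev1.Toleration{
	Key:      "node-role.kubernetes.io/master",
	Operator: corev1.TolerationOpExists,
	Effect:   corev1.TaintEffectNoSchedule,
}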
Mar 20 21:45:41.033: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:45:41.036: INFO: Number of nodes with available pods: 1 Mar 20 21:45:41.036: INFO: Node jerma-worker2 is running more than one daemon pod Mar 20 21:45:42.045: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:45:42.047: INFO: Number of nodes with available pods: 1 Mar 20 21:45:42.047: INFO: Node jerma-worker2 is running more than one daemon pod Mar 20 21:45:43.040: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:45:43.044: INFO: Number of nodes with available pods: 1 Mar 20 21:45:43.044: INFO: Node jerma-worker2 is running more than one daemon pod Mar 20 21:45:44.041: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:45:44.044: INFO: Number of nodes with available pods: 1 Mar 20 21:45:44.044: INFO: Node jerma-worker2 is running more than one daemon pod Mar 20 21:45:45.040: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:45:45.043: INFO: Number of nodes with available pods: 1 Mar 20 21:45:45.043: INFO: Node jerma-worker2 is running more than one daemon pod Mar 20 21:45:46.041: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:45:46.045: INFO: Number of nodes with available pods: 1 Mar 20 21:45:46.045: INFO: Node jerma-worker2 is running more than one daemon pod Mar 20 21:45:47.041: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:45:47.044: INFO: Number of nodes with available pods: 1 Mar 20 21:45:47.044: INFO: Node jerma-worker2 is running more than one daemon pod Mar 20 21:45:48.041: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:45:48.044: INFO: Number of nodes with available pods: 1 Mar 20 21:45:48.044: INFO: Node jerma-worker2 is running more than one daemon pod Mar 20 21:45:49.041: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:45:49.044: INFO: Number of nodes with available pods: 1 Mar 20 21:45:49.044: INFO: Node jerma-worker2 is running more than one daemon pod Mar 20 21:45:50.052: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:45:50.055: INFO: Number of nodes with available pods: 1 Mar 20 21:45:50.055: INFO: Node jerma-worker2 is running more than one daemon pod Mar 20 21:45:51.041: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master 
Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:45:51.045: INFO: Number of nodes with available pods: 1 Mar 20 21:45:51.045: INFO: Node jerma-worker2 is running more than one daemon pod Mar 20 21:45:52.041: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:45:52.044: INFO: Number of nodes with available pods: 2 Mar 20 21:45:52.044: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7214, will wait for the garbage collector to delete the pods Mar 20 21:45:52.107: INFO: Deleting DaemonSet.extensions daemon-set took: 6.057008ms Mar 20 21:45:52.507: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.312778ms Mar 20 21:45:59.311: INFO: Number of nodes with available pods: 0 Mar 20 21:45:59.311: INFO: Number of running nodes: 0, number of available pods: 0 Mar 20 21:45:59.314: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7214/daemonsets","resourceVersion":"1387890"},"items":null} Mar 20 21:45:59.317: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7214/pods","resourceVersion":"1387890"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:45:59.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7214" for this suite. 
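The repeated "DaemonSet pods can't tolerate node jerma-control-plane" lines above come from the scheduling check this test relies on: the DaemonSet's pod carries no toleration for the node-role.kubernetes.io/master:NoSchedule taint, so the control-plane node is excluded and only the two workers count toward "Number of running nodes". A minimal Go sketch of the exact-match case of that rule follows; Taint and Toleration here are simplified stand-ins for the k8s.io/api/core/v1 types (the real matcher also handles operator Exists and empty keys), and the example toleration is illustrative.
package main

import "fmt"

// Taint and Toleration are simplified stand-ins for the
// k8s.io/api/core/v1 types, carrying just the fields the log prints.
type Taint struct{ Key, Value, Effect string }
type Toleration struct{ Key, Value, Effect string }

// tolerates implements the exact-match (operator "Equal") case:
// key and value must match, and the effect must be empty or equal.
func tolerates(tol Toleration, t Taint) bool {
	return tol.Key == t.Key && tol.Value == t.Value &&
		(tol.Effect == "" || tol.Effect == t.Effect)
}

func main() {
	master := Taint{Key: "node-role.kubernetes.io/master", Effect: "NoSchedule"}
	// The e2e DaemonSet pod has no toleration for the master taint,
	// so jerma-control-plane is skipped, exactly as logged above.
	pod := []Toleration{{Key: "node.kubernetes.io/not-ready", Effect: "NoExecute"}}
	ok := false
	for _, tol := range pod {
		if tolerates(tol, master) {
			ok = true
			break
		}
	}
	fmt.Println("schedulable on jerma-control-plane:", ok) // false
}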
• [SLOW TEST:22.450 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":128,"skipped":1933,"failed":0} [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:45:59.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 20 21:45:59.409: INFO: Pod name rollover-pod: Found 0 pods out of 1 Mar 20 21:46:04.423: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 20 21:46:04.423: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Mar 20 21:46:06.432: INFO: Creating deployment "test-rollover-deployment" Mar 20 21:46:06.448: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Mar 20 21:46:08.455: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 20 21:46:08.461: INFO: Ensure that both replica sets have 1 created replica Mar 20 21:46:08.478: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Mar 20 21:46:08.484: INFO: Updating deployment test-rollover-deployment Mar 20 21:46:08.484: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Mar 20 21:46:10.525: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 20 21:46:10.531: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 20 21:46:10.537: INFO: all replica sets need to contain the pod-template-hash label Mar 20 21:46:10.537: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720337566, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720337566, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720337568, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720337566, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 20 21:46:12.545: INFO: all replica sets need to contain the 
pod-template-hash label Mar 20 21:46:12.545: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720337566, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720337566, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720337571, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720337566, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 20 21:46:14.544: INFO: all replica sets need to contain the pod-template-hash label Mar 20 21:46:14.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720337566, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720337566, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720337571, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720337566, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 20 21:46:16.545: INFO: all replica sets need to contain the pod-template-hash label Mar 20 21:46:16.546: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720337566, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720337566, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720337571, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720337566, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 20 21:46:18.567: INFO: all replica sets need to contain the pod-template-hash label Mar 20 21:46:18.568: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720337566, loc:(*time.Location)(0x7d83a80)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720337566, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720337571, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720337566, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 20 21:46:20.556: INFO: all replica sets need to contain the pod-template-hash label Mar 20 21:46:20.556: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720337566, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720337566, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720337571, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720337566, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 20 21:46:22.545: INFO: Mar 20 21:46:22.545: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 20 21:46:22.554: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-1511 /apis/apps/v1/namespaces/deployment-1511/deployments/test-rollover-deployment 083eb9f6-4584-40c8-9e61-6edad7bf86cd 1388054 2 2020-03-20 21:46:06 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00387ad58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-20 21:46:06 +0000 UTC,LastTransitionTime:2020-03-20 21:46:06 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-03-20 21:46:21 +0000 UTC,LastTransitionTime:2020-03-20 21:46:06 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 20 21:46:22.558: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-1511 /apis/apps/v1/namespaces/deployment-1511/replicasets/test-rollover-deployment-574d6dfbff cd321311-3373-403b-8926-16590553ca5d 1388043 2 2020-03-20 21:46:08 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 083eb9f6-4584-40c8-9e61-6edad7bf86cd 0xc00387b1b7 0xc00387b1b8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00387b228 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 20 21:46:22.558: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 20 21:46:22.558: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-1511 /apis/apps/v1/namespaces/deployment-1511/replicasets/test-rollover-controller a1639f10-5a46-4365-b50b-5718a363f774 1388053 2 2020-03-20 21:45:59 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 083eb9f6-4584-40c8-9e61-6edad7bf86cd 0xc00387b0e7 0xc00387b0e8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] 
[] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00387b148 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 20 21:46:22.558: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-1511 /apis/apps/v1/namespaces/deployment-1511/replicasets/test-rollover-deployment-f6c94f66c 2d5795d1-5b61-4aae-891a-c6f0361dc3c1 1387996 2 2020-03-20 21:46:06 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 083eb9f6-4584-40c8-9e61-6edad7bf86cd 0xc00387b290 0xc00387b291}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00387b308 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 20 21:46:22.562: INFO: Pod "test-rollover-deployment-574d6dfbff-jpdwx" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-jpdwx test-rollover-deployment-574d6dfbff- deployment-1511 /api/v1/namespaces/deployment-1511/pods/test-rollover-deployment-574d6dfbff-jpdwx 53118ce5-cec8-4d1d-8962-9672083b799f 1388011 0 2020-03-20 21:46:08 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff cd321311-3373-403b-8926-16590553ca5d 0xc003a26227 0xc003a26228}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-28m65,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-28m65,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-28m65,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:46:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:46:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:46:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 21:46:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.6,StartTime:2020-03-20 21:46:08 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-20 21:46:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://e5fac707c5bb8d6aa01caf4145ed39ebbb472d27ec24118c1686b9f957e385e0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:46:22.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1511" for this suite. • [SLOW TEST:23.234 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":129,"skipped":1933,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:46:22.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments Mar 20 21:46:22.665: INFO: Waiting up to 5m0s for pod "client-containers-7f880df7-3a98-4e3c-a8ea-a72f87b2bbaf" in namespace "containers-7707" to be "success or failure" Mar 20 21:46:22.668: INFO: Pod "client-containers-7f880df7-3a98-4e3c-a8ea-a72f87b2bbaf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.452143ms Mar 20 21:46:24.672: INFO: Pod "client-containers-7f880df7-3a98-4e3c-a8ea-a72f87b2bbaf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007063561s Mar 20 21:46:26.676: INFO: Pod "client-containers-7f880df7-3a98-4e3c-a8ea-a72f87b2bbaf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011254076s STEP: Saw pod success Mar 20 21:46:26.676: INFO: Pod "client-containers-7f880df7-3a98-4e3c-a8ea-a72f87b2bbaf" satisfied condition "success or failure" Mar 20 21:46:26.680: INFO: Trying to get logs from node jerma-worker2 pod client-containers-7f880df7-3a98-4e3c-a8ea-a72f87b2bbaf container test-container: STEP: delete the pod Mar 20 21:46:26.713: INFO: Waiting for pod client-containers-7f880df7-3a98-4e3c-a8ea-a72f87b2bbaf to disappear Mar 20 21:46:26.717: INFO: Pod client-containers-7f880df7-3a98-4e3c-a8ea-a72f87b2bbaf no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:46:26.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7707" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":130,"skipped":1943,"failed":0} SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:46:26.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-2ls7 STEP: Creating a pod to test atomic-volume-subpath Mar 20 21:46:26.791: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-2ls7" in namespace "subpath-4104" to be "success or failure" Mar 20 21:46:26.837: INFO: Pod "pod-subpath-test-configmap-2ls7": Phase="Pending", Reason="", readiness=false. Elapsed: 45.971157ms Mar 20 21:46:28.842: INFO: Pod "pod-subpath-test-configmap-2ls7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050110839s Mar 20 21:46:30.846: INFO: Pod "pod-subpath-test-configmap-2ls7": Phase="Running", Reason="", readiness=true. Elapsed: 4.054529708s Mar 20 21:46:32.850: INFO: Pod "pod-subpath-test-configmap-2ls7": Phase="Running", Reason="", readiness=true. Elapsed: 6.058458328s Mar 20 21:46:34.854: INFO: Pod "pod-subpath-test-configmap-2ls7": Phase="Running", Reason="", readiness=true. Elapsed: 8.062836906s Mar 20 21:46:36.858: INFO: Pod "pod-subpath-test-configmap-2ls7": Phase="Running", Reason="", readiness=true. Elapsed: 10.066339118s Mar 20 21:46:38.862: INFO: Pod "pod-subpath-test-configmap-2ls7": Phase="Running", Reason="", readiness=true. Elapsed: 12.070357856s Mar 20 21:46:40.866: INFO: Pod "pod-subpath-test-configmap-2ls7": Phase="Running", Reason="", readiness=true. Elapsed: 14.074496492s Mar 20 21:46:42.870: INFO: Pod "pod-subpath-test-configmap-2ls7": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.078705237s Mar 20 21:46:44.874: INFO: Pod "pod-subpath-test-configmap-2ls7": Phase="Running", Reason="", readiness=true. Elapsed: 18.082669829s Mar 20 21:46:46.878: INFO: Pod "pod-subpath-test-configmap-2ls7": Phase="Running", Reason="", readiness=true. Elapsed: 20.086815495s Mar 20 21:46:48.882: INFO: Pod "pod-subpath-test-configmap-2ls7": Phase="Running", Reason="", readiness=true. Elapsed: 22.09089985s Mar 20 21:46:51.024: INFO: Pod "pod-subpath-test-configmap-2ls7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.232374387s STEP: Saw pod success Mar 20 21:46:51.024: INFO: Pod "pod-subpath-test-configmap-2ls7" satisfied condition "success or failure" Mar 20 21:46:51.027: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-2ls7 container test-container-subpath-configmap-2ls7: STEP: delete the pod Mar 20 21:46:51.089: INFO: Waiting for pod pod-subpath-test-configmap-2ls7 to disappear Mar 20 21:46:51.105: INFO: Pod pod-subpath-test-configmap-2ls7 no longer exists STEP: Deleting pod pod-subpath-test-configmap-2ls7 Mar 20 21:46:51.105: INFO: Deleting pod "pod-subpath-test-configmap-2ls7" in namespace "subpath-4104" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:46:51.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4104" for this suite. • [SLOW TEST:24.441 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":131,"skipped":1949,"failed":0} SSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:46:51.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 20 21:46:51.310: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:46:58.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9005" for this suite. 
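The init-container spec above exercises pod startup ordering: every entry in spec.initContainers runs one at a time, each to completion, before any app container starts, and with RestartPolicy Never a failed init container fails the pod outright instead of retrying. A sketch of the shape of such a pod using the k8s.io/api types; the pod name, container names, and busybox image are illustrative, not the ones the suite creates.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Init containers run sequentially to completion; the app
	// container starts only after all of them succeed. With
	// RestartPolicy Never, one init failure moves the whole pod
	// to Failed rather than restarting it.
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			InitContainers: []v1.Container{
				{Name: "init1", Image: "busybox:1.29", Command: []string{"true"}},
				{Name: "init2", Image: "busybox:1.29", Command: []string{"true"}},
			},
			Containers: []v1.Container{
				{Name: "run1", Image: "busybox:1.29", Command: []string{"top"}},
			},
		},
	}
	fmt.Printf("%d init containers gate container %q\n",
		len(pod.Spec.InitContainers), pod.Spec.Containers[0].Name)
}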
• [SLOW TEST:6.957 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":132,"skipped":1953,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:46:58.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3387.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3387.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3387.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3387.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3387.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3387.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3387.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3387.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3387.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3387.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3387.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 108.218.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.218.108_udp@PTR;check="$$(dig +tcp +noall +answer +search 108.218.98.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.98.218.108_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3387.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3387.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3387.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3387.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3387.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3387.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3387.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3387.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3387.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3387.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3387.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 108.218.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.218.108_udp@PTR;check="$$(dig +tcp +noall +answer +search 108.218.98.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.98.218.108_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 20 21:47:04.420: INFO: Unable to read wheezy_udp@dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:04.423: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:04.426: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:04.429: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:04.451: INFO: Unable to read jessie_udp@dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:04.454: INFO: Unable to read jessie_tcp@dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:04.457: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:04.461: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:04.479: INFO: Lookups using dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e failed for: [wheezy_udp@dns-test-service.dns-3387.svc.cluster.local wheezy_tcp@dns-test-service.dns-3387.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local jessie_udp@dns-test-service.dns-3387.svc.cluster.local jessie_tcp@dns-test-service.dns-3387.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local] Mar 20 21:47:09.484: INFO: Unable to read wheezy_udp@dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:09.488: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods 
dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:09.491: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:09.494: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:09.561: INFO: Unable to read jessie_udp@dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:09.564: INFO: Unable to read jessie_tcp@dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:09.573: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:09.598: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:09.617: INFO: Lookups using dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e failed for: [wheezy_udp@dns-test-service.dns-3387.svc.cluster.local wheezy_tcp@dns-test-service.dns-3387.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local jessie_udp@dns-test-service.dns-3387.svc.cluster.local jessie_tcp@dns-test-service.dns-3387.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local] Mar 20 21:47:14.484: INFO: Unable to read wheezy_udp@dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:14.488: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:14.491: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:14.494: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:14.514: INFO: Unable to read jessie_udp@dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the 
server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:14.517: INFO: Unable to read jessie_tcp@dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:14.520: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:14.523: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:14.542: INFO: Lookups using dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e failed for: [wheezy_udp@dns-test-service.dns-3387.svc.cluster.local wheezy_tcp@dns-test-service.dns-3387.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local jessie_udp@dns-test-service.dns-3387.svc.cluster.local jessie_tcp@dns-test-service.dns-3387.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local] Mar 20 21:47:19.484: INFO: Unable to read wheezy_udp@dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:19.488: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:19.492: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:19.495: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:19.519: INFO: Unable to read jessie_udp@dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:19.522: INFO: Unable to read jessie_tcp@dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:19.524: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:19.526: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local from pod 
dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:19.543: INFO: Lookups using dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e failed for: [wheezy_udp@dns-test-service.dns-3387.svc.cluster.local wheezy_tcp@dns-test-service.dns-3387.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local jessie_udp@dns-test-service.dns-3387.svc.cluster.local jessie_tcp@dns-test-service.dns-3387.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local] Mar 20 21:47:24.484: INFO: Unable to read wheezy_udp@dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:24.488: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:24.491: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:24.494: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:24.516: INFO: Unable to read jessie_udp@dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:24.519: INFO: Unable to read jessie_tcp@dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:24.523: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:24.526: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:24.546: INFO: Lookups using dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e failed for: [wheezy_udp@dns-test-service.dns-3387.svc.cluster.local wheezy_tcp@dns-test-service.dns-3387.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local jessie_udp@dns-test-service.dns-3387.svc.cluster.local jessie_tcp@dns-test-service.dns-3387.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local] Mar 20 
21:47:29.484: INFO: Unable to read wheezy_udp@dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:29.487: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:29.491: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:29.494: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:29.515: INFO: Unable to read jessie_udp@dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:29.517: INFO: Unable to read jessie_tcp@dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:29.520: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:29.523: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local from pod dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e: the server could not find the requested resource (get pods dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e) Mar 20 21:47:29.540: INFO: Lookups using dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e failed for: [wheezy_udp@dns-test-service.dns-3387.svc.cluster.local wheezy_tcp@dns-test-service.dns-3387.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local jessie_udp@dns-test-service.dns-3387.svc.cluster.local jessie_tcp@dns-test-service.dns-3387.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3387.svc.cluster.local] Mar 20 21:47:34.548: INFO: DNS probes using dns-3387/dns-test-65f3640d-b7e7-4944-b5f0-f563ccfd432e succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:47:34.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3387" for this suite. 
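Each dig loop above writes an OK marker file per record type as soon as a lookup returns a non-empty answer, and the probe succeeds once every expected marker is present; the early "Unable to read" failures simply mean some markers had not appeared yet. The same A, SRV, and PTR checks can be sketched with Go's standard resolver; this only resolves from inside the cluster, and the service name, namespace dns-3387, and ClusterIP 10.98.218.108 are reused from this run for illustration.
package main

import (
	"fmt"
	"net"
)

func main() {
	// A record for the service's cluster-internal name.
	if addrs, err := net.LookupHost("dns-test-service.dns-3387.svc.cluster.local"); err == nil {
		fmt.Println("A:", addrs)
	}
	// SRV record for the named port "http" over TCP, i.e. the
	// _http._tcp.dns-test-service... name probed above.
	if _, srvs, err := net.LookupSRV("http", "tcp", "dns-test-service.dns-3387.svc.cluster.local"); err == nil {
		for _, s := range srvs {
			fmt.Printf("SRV: %s:%d\n", s.Target, s.Port)
		}
	}
	// PTR record for the ClusterIP, matching the in-addr.arpa query.
	if names, err := net.LookupAddr("10.98.218.108"); err == nil {
		fmt.Println("PTR:", names)
	}
}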
• [SLOW TEST:36.974 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":133,"skipped":1979,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:47:35.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Mar 20 21:47:35.166: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 20 21:47:40.179: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:47:40.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9364" for this suite. 
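The "release" this spec verifies is label-driven: once a pod's labels stop matching the ReplicationController's selector (name=pod-release here), the controller orphans the pod by dropping its controller ownerReference and creates a replacement to restore the replica count. A hedged client-go sketch of that label flip, assuming the context-free Patch signature of client-go v0.17 to match this cluster's v1.17 API server; the pod name pod-release-xxxxx is a placeholder for the generated pod, not a name from this run.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig the suite loads at the start of every spec.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Overwrite the selected-on label; the RC then releases the pod
	// and spins up a new one to get back to the desired count.
	patch := []byte(`{"metadata":{"labels":{"name":"released"}}}`)
	_, err = cs.CoreV1().Pods("replication-controller-9364").
		Patch("pod-release-xxxxx", types.StrategicMergePatchType, patch)
	if err != nil {
		fmt.Println("patch failed:", err)
	}
}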
• [SLOW TEST:5.170 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":134,"skipped":1996,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:47:40.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 20 21:47:40.412: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 20 21:47:42.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3029 create -f -' Mar 20 21:47:45.376: INFO: stderr: "" Mar 20 21:47:45.376: INFO: stdout: "e2e-test-crd-publish-openapi-9058-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 20 21:47:45.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3029 delete e2e-test-crd-publish-openapi-9058-crds test-cr' Mar 20 21:47:45.668: INFO: stderr: "" Mar 20 21:47:45.668: INFO: stdout: "e2e-test-crd-publish-openapi-9058-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Mar 20 21:47:45.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3029 apply -f -' Mar 20 21:47:46.319: INFO: stderr: "" Mar 20 21:47:46.319: INFO: stdout: "e2e-test-crd-publish-openapi-9058-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 20 21:47:46.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3029 delete e2e-test-crd-publish-openapi-9058-crds test-cr' Mar 20 21:47:46.822: INFO: stderr: "" Mar 20 21:47:46.822: INFO: stdout: "e2e-test-crd-publish-openapi-9058-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 20 21:47:46.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9058-crds' Mar 20 21:47:47.078: INFO: stderr: "" Mar 20 21:47:47.078: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9058-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:47:48.941: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3029" for this suite. • [SLOW TEST:8.678 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":135,"skipped":2011,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:47:48.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-v8lgz in namespace proxy-981 I0320 21:47:49.082110 7 runners.go:189] Created replication controller with name: proxy-service-v8lgz, namespace: proxy-981, replica count: 1 I0320 21:47:50.132545 7 runners.go:189] proxy-service-v8lgz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0320 21:47:51.132817 7 runners.go:189] proxy-service-v8lgz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0320 21:47:52.133009 7 runners.go:189] proxy-service-v8lgz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0320 21:47:53.133225 7 runners.go:189] proxy-service-v8lgz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0320 21:47:54.133435 7 runners.go:189] proxy-service-v8lgz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0320 21:47:55.133735 7 runners.go:189] proxy-service-v8lgz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0320 21:47:56.133983 7 runners.go:189] proxy-service-v8lgz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0320 21:47:57.134197 7 runners.go:189] proxy-service-v8lgz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0320 21:47:58.134481 7 runners.go:189] proxy-service-v8lgz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0320 21:47:59.134703 7 runners.go:189] proxy-service-v8lgz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0320 21:48:00.134920 7 
runners.go:189] proxy-service-v8lgz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0320 21:48:01.135198 7 runners.go:189] proxy-service-v8lgz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0320 21:48:02.135420 7 runners.go:189] proxy-service-v8lgz Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 20 21:48:02.138: INFO: setup took 13.12106486s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 20 21:48:02.144: INFO: (0) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:162/proxy/: bar (200; 5.619959ms) Mar 20 21:48:02.150: INFO: (0) /api/v1/namespaces/proxy-981/pods/https:proxy-service-v8lgz-jc74b:460/proxy/: tls baz (200; 10.617205ms) Mar 20 21:48:02.152: INFO: (0) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:1080/proxy/: testtest (200; 18.956841ms) Mar 20 21:48:02.158: INFO: (0) /api/v1/namespaces/proxy-981/pods/http:proxy-service-v8lgz-jc74b:160/proxy/: foo (200; 19.151965ms) Mar 20 21:48:02.158: INFO: (0) /api/v1/namespaces/proxy-981/pods/http:proxy-service-v8lgz-jc74b:1080/proxy/: t... (200; 19.357188ms) Mar 20 21:48:02.159: INFO: (0) /api/v1/namespaces/proxy-981/services/http:proxy-service-v8lgz:portname1/proxy/: foo (200; 19.697942ms) Mar 20 21:48:02.160: INFO: (0) /api/v1/namespaces/proxy-981/pods/http:proxy-service-v8lgz-jc74b:162/proxy/: bar (200; 20.90893ms) Mar 20 21:48:02.160: INFO: (0) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:160/proxy/: foo (200; 20.96867ms) Mar 20 21:48:02.160: INFO: (0) /api/v1/namespaces/proxy-981/services/http:proxy-service-v8lgz:portname2/proxy/: bar (200; 21.241175ms) Mar 20 21:48:02.162: INFO: (0) /api/v1/namespaces/proxy-981/services/proxy-service-v8lgz:portname2/proxy/: bar (200; 22.825054ms) Mar 20 21:48:02.162: INFO: (0) /api/v1/namespaces/proxy-981/services/proxy-service-v8lgz:portname1/proxy/: foo (200; 23.021888ms) Mar 20 21:48:02.162: INFO: (0) /api/v1/namespaces/proxy-981/services/https:proxy-service-v8lgz:tlsportname1/proxy/: tls baz (200; 23.163457ms) Mar 20 21:48:02.169: INFO: (0) /api/v1/namespaces/proxy-981/pods/https:proxy-service-v8lgz-jc74b:443/proxy/: t... 
(200; 5.212542ms) Mar 20 21:48:02.177: INFO: (1) /api/v1/namespaces/proxy-981/pods/https:proxy-service-v8lgz-jc74b:462/proxy/: tls qux (200; 6.63115ms) Mar 20 21:48:02.177: INFO: (1) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:162/proxy/: bar (200; 6.488373ms) Mar 20 21:48:02.177: INFO: (1) /api/v1/namespaces/proxy-981/pods/http:proxy-service-v8lgz-jc74b:160/proxy/: foo (200; 6.531426ms) Mar 20 21:48:02.177: INFO: (1) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b/proxy/: test (200; 6.618318ms) Mar 20 21:48:02.177: INFO: (1) /api/v1/namespaces/proxy-981/pods/https:proxy-service-v8lgz-jc74b:460/proxy/: tls baz (200; 6.874094ms) Mar 20 21:48:02.178: INFO: (1) /api/v1/namespaces/proxy-981/services/proxy-service-v8lgz:portname2/proxy/: bar (200; 7.591806ms) Mar 20 21:48:02.178: INFO: (1) /api/v1/namespaces/proxy-981/pods/https:proxy-service-v8lgz-jc74b:443/proxy/: testtest (200; 2.994586ms) Mar 20 21:48:02.182: INFO: (2) /api/v1/namespaces/proxy-981/pods/http:proxy-service-v8lgz-jc74b:160/proxy/: foo (200; 3.005584ms) Mar 20 21:48:02.182: INFO: (2) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:162/proxy/: bar (200; 2.997253ms) Mar 20 21:48:02.183: INFO: (2) /api/v1/namespaces/proxy-981/pods/https:proxy-service-v8lgz-jc74b:462/proxy/: tls qux (200; 3.213033ms) Mar 20 21:48:02.183: INFO: (2) /api/v1/namespaces/proxy-981/services/http:proxy-service-v8lgz:portname1/proxy/: foo (200; 3.626332ms) Mar 20 21:48:02.183: INFO: (2) /api/v1/namespaces/proxy-981/pods/http:proxy-service-v8lgz-jc74b:162/proxy/: bar (200; 3.584571ms) Mar 20 21:48:02.183: INFO: (2) /api/v1/namespaces/proxy-981/services/https:proxy-service-v8lgz:tlsportname1/proxy/: tls baz (200; 3.701056ms) Mar 20 21:48:02.183: INFO: (2) /api/v1/namespaces/proxy-981/pods/http:proxy-service-v8lgz-jc74b:1080/proxy/: t... (200; 3.707189ms) Mar 20 21:48:02.183: INFO: (2) /api/v1/namespaces/proxy-981/services/proxy-service-v8lgz:portname2/proxy/: bar (200; 3.857336ms) Mar 20 21:48:02.183: INFO: (2) /api/v1/namespaces/proxy-981/pods/https:proxy-service-v8lgz-jc74b:460/proxy/: tls baz (200; 4.107993ms) Mar 20 21:48:02.183: INFO: (2) /api/v1/namespaces/proxy-981/services/https:proxy-service-v8lgz:tlsportname2/proxy/: tls qux (200; 4.094008ms) Mar 20 21:48:02.184: INFO: (2) /api/v1/namespaces/proxy-981/services/proxy-service-v8lgz:portname1/proxy/: foo (200; 4.179566ms) Mar 20 21:48:02.184: INFO: (2) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:160/proxy/: foo (200; 4.364828ms) Mar 20 21:48:02.184: INFO: (2) /api/v1/namespaces/proxy-981/pods/https:proxy-service-v8lgz-jc74b:443/proxy/: testtest (200; 2.097488ms) Mar 20 21:48:02.189: INFO: (3) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:162/proxy/: bar (200; 5.182133ms) Mar 20 21:48:02.189: INFO: (3) /api/v1/namespaces/proxy-981/pods/http:proxy-service-v8lgz-jc74b:160/proxy/: foo (200; 5.622156ms) Mar 20 21:48:02.189: INFO: (3) /api/v1/namespaces/proxy-981/pods/https:proxy-service-v8lgz-jc74b:443/proxy/: t... 
(200; 5.647418ms) Mar 20 21:48:02.190: INFO: (3) /api/v1/namespaces/proxy-981/services/https:proxy-service-v8lgz:tlsportname1/proxy/: tls baz (200; 5.658737ms) Mar 20 21:48:02.190: INFO: (3) /api/v1/namespaces/proxy-981/services/http:proxy-service-v8lgz:portname1/proxy/: foo (200; 5.711173ms) Mar 20 21:48:02.190: INFO: (3) /api/v1/namespaces/proxy-981/services/proxy-service-v8lgz:portname2/proxy/: bar (200; 5.680876ms) Mar 20 21:48:02.190: INFO: (3) /api/v1/namespaces/proxy-981/services/proxy-service-v8lgz:portname1/proxy/: foo (200; 5.705123ms) Mar 20 21:48:02.190: INFO: (3) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:160/proxy/: foo (200; 5.78602ms) Mar 20 21:48:02.190: INFO: (3) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:1080/proxy/: testtest (200; 3.22255ms) Mar 20 21:48:02.193: INFO: (4) /api/v1/namespaces/proxy-981/pods/http:proxy-service-v8lgz-jc74b:1080/proxy/: t... (200; 3.186068ms) Mar 20 21:48:02.194: INFO: (4) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:1080/proxy/: testtest (200; 7.07758ms) Mar 20 21:48:02.203: INFO: (5) /api/v1/namespaces/proxy-981/pods/https:proxy-service-v8lgz-jc74b:460/proxy/: tls baz (200; 7.1627ms) Mar 20 21:48:02.205: INFO: (5) /api/v1/namespaces/proxy-981/pods/https:proxy-service-v8lgz-jc74b:443/proxy/: t... (200; 9.604754ms) Mar 20 21:48:02.225: INFO: (5) /api/v1/namespaces/proxy-981/services/http:proxy-service-v8lgz:portname2/proxy/: bar (200; 29.20682ms) Mar 20 21:48:02.225: INFO: (5) /api/v1/namespaces/proxy-981/services/proxy-service-v8lgz:portname1/proxy/: foo (200; 29.338751ms) Mar 20 21:48:02.225: INFO: (5) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:162/proxy/: bar (200; 29.291723ms) Mar 20 21:48:02.225: INFO: (5) /api/v1/namespaces/proxy-981/pods/http:proxy-service-v8lgz-jc74b:162/proxy/: bar (200; 29.497834ms) Mar 20 21:48:02.225: INFO: (5) /api/v1/namespaces/proxy-981/services/https:proxy-service-v8lgz:tlsportname1/proxy/: tls baz (200; 29.489873ms) Mar 20 21:48:02.225: INFO: (5) /api/v1/namespaces/proxy-981/pods/http:proxy-service-v8lgz-jc74b:160/proxy/: foo (200; 29.77717ms) Mar 20 21:48:02.225: INFO: (5) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:160/proxy/: foo (200; 29.769859ms) Mar 20 21:48:02.225: INFO: (5) /api/v1/namespaces/proxy-981/services/proxy-service-v8lgz:portname2/proxy/: bar (200; 29.930736ms) Mar 20 21:48:02.225: INFO: (5) /api/v1/namespaces/proxy-981/services/https:proxy-service-v8lgz:tlsportname2/proxy/: tls qux (200; 29.896913ms) Mar 20 21:48:02.225: INFO: (5) /api/v1/namespaces/proxy-981/pods/https:proxy-service-v8lgz-jc74b:462/proxy/: tls qux (200; 29.949689ms) Mar 20 21:48:02.227: INFO: (5) /api/v1/namespaces/proxy-981/services/http:proxy-service-v8lgz:portname1/proxy/: foo (200; 31.89425ms) Mar 20 21:48:02.227: INFO: (5) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:1080/proxy/: testtesttest (200; 18.464938ms) Mar 20 21:48:02.246: INFO: (6) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:162/proxy/: bar (200; 18.521056ms) Mar 20 21:48:02.246: INFO: (6) /api/v1/namespaces/proxy-981/pods/https:proxy-service-v8lgz-jc74b:460/proxy/: tls baz (200; 18.544164ms) Mar 20 21:48:02.246: INFO: (6) /api/v1/namespaces/proxy-981/pods/https:proxy-service-v8lgz-jc74b:462/proxy/: tls qux (200; 18.530794ms) Mar 20 21:48:02.246: INFO: (6) /api/v1/namespaces/proxy-981/pods/http:proxy-service-v8lgz-jc74b:160/proxy/: foo (200; 18.516701ms) Mar 20 21:48:02.246: INFO: (6) 
/api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:160/proxy/: foo (200; 18.60158ms) Mar 20 21:48:02.246: INFO: (6) /api/v1/namespaces/proxy-981/pods/http:proxy-service-v8lgz-jc74b:1080/proxy/: t... (200; 18.571909ms) Mar 20 21:48:02.247: INFO: (6) /api/v1/namespaces/proxy-981/services/http:proxy-service-v8lgz:portname1/proxy/: foo (200; 19.576198ms) Mar 20 21:48:02.247: INFO: (6) /api/v1/namespaces/proxy-981/services/http:proxy-service-v8lgz:portname2/proxy/: bar (200; 19.675251ms) Mar 20 21:48:02.247: INFO: (6) /api/v1/namespaces/proxy-981/services/proxy-service-v8lgz:portname2/proxy/: bar (200; 19.682157ms) Mar 20 21:48:02.247: INFO: (6) /api/v1/namespaces/proxy-981/services/https:proxy-service-v8lgz:tlsportname2/proxy/: tls qux (200; 19.752989ms) Mar 20 21:48:02.247: INFO: (6) /api/v1/namespaces/proxy-981/services/proxy-service-v8lgz:portname1/proxy/: foo (200; 19.714209ms) Mar 20 21:48:02.247: INFO: (6) /api/v1/namespaces/proxy-981/services/https:proxy-service-v8lgz:tlsportname1/proxy/: tls baz (200; 19.72266ms) Mar 20 21:48:02.252: INFO: (7) /api/v1/namespaces/proxy-981/pods/http:proxy-service-v8lgz-jc74b:160/proxy/: foo (200; 4.538749ms) Mar 20 21:48:02.252: INFO: (7) /api/v1/namespaces/proxy-981/pods/http:proxy-service-v8lgz-jc74b:1080/proxy/: t... (200; 4.605274ms) Mar 20 21:48:02.252: INFO: (7) /api/v1/namespaces/proxy-981/pods/http:proxy-service-v8lgz-jc74b:162/proxy/: bar (200; 4.676391ms) Mar 20 21:48:02.252: INFO: (7) /api/v1/namespaces/proxy-981/services/http:proxy-service-v8lgz:portname2/proxy/: bar (200; 4.780929ms) Mar 20 21:48:02.252: INFO: (7) /api/v1/namespaces/proxy-981/pods/https:proxy-service-v8lgz-jc74b:460/proxy/: tls baz (200; 4.835459ms) Mar 20 21:48:02.252: INFO: (7) /api/v1/namespaces/proxy-981/pods/https:proxy-service-v8lgz-jc74b:443/proxy/: test (200; 5.082123ms) Mar 20 21:48:02.253: INFO: (7) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:162/proxy/: bar (200; 5.189372ms) Mar 20 21:48:02.253: INFO: (7) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:160/proxy/: foo (200; 5.166576ms) Mar 20 21:48:02.254: INFO: (7) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:1080/proxy/: testt... (200; 2.647236ms) Mar 20 21:48:02.257: INFO: (8) /api/v1/namespaces/proxy-981/pods/http:proxy-service-v8lgz-jc74b:162/proxy/: bar (200; 2.778876ms) Mar 20 21:48:02.257: INFO: (8) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:1080/proxy/: testtest (200; 5.084697ms) Mar 20 21:48:02.259: INFO: (8) /api/v1/namespaces/proxy-981/pods/https:proxy-service-v8lgz-jc74b:462/proxy/: tls qux (200; 5.158462ms) Mar 20 21:48:02.259: INFO: (8) /api/v1/namespaces/proxy-981/services/https:proxy-service-v8lgz:tlsportname2/proxy/: tls qux (200; 5.124959ms) Mar 20 21:48:02.259: INFO: (8) /api/v1/namespaces/proxy-981/services/proxy-service-v8lgz:portname1/proxy/: foo (200; 5.121547ms) Mar 20 21:48:02.259: INFO: (8) /api/v1/namespaces/proxy-981/services/http:proxy-service-v8lgz:portname2/proxy/: bar (200; 5.319653ms) Mar 20 21:48:02.259: INFO: (8) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:160/proxy/: foo (200; 5.411157ms) Mar 20 21:48:02.260: INFO: (8) /api/v1/namespaces/proxy-981/pods/http:proxy-service-v8lgz-jc74b:160/proxy/: foo (200; 5.596205ms) Mar 20 21:48:02.260: INFO: (8) /api/v1/namespaces/proxy-981/pods/https:proxy-service-v8lgz-jc74b:443/proxy/: t... 
(200; 3.704887ms) Mar 20 21:48:02.264: INFO: (9) /api/v1/namespaces/proxy-981/pods/http:proxy-service-v8lgz-jc74b:160/proxy/: foo (200; 3.8282ms) Mar 20 21:48:02.264: INFO: (9) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b/proxy/: test (200; 3.970167ms) Mar 20 21:48:02.264: INFO: (9) /api/v1/namespaces/proxy-981/pods/http:proxy-service-v8lgz-jc74b:162/proxy/: bar (200; 3.935458ms) Mar 20 21:48:02.264: INFO: (9) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:1080/proxy/: testtesttest (200; 5.134946ms) Mar 20 21:48:02.272: INFO: (10) /api/v1/namespaces/proxy-981/pods/http:proxy-service-v8lgz-jc74b:1080/proxy/: t... (200; 5.337797ms) Mar 20 21:48:02.272: INFO: (10) /api/v1/namespaces/proxy-981/services/http:proxy-service-v8lgz:portname2/proxy/: bar (200; 5.40644ms) Mar 20 21:48:02.272: INFO: (10) /api/v1/namespaces/proxy-981/services/https:proxy-service-v8lgz:tlsportname2/proxy/: tls qux (200; 5.425726ms) Mar 20 21:48:02.272: INFO: (10) /api/v1/namespaces/proxy-981/services/https:proxy-service-v8lgz:tlsportname1/proxy/: tls baz (200; 5.437652ms) Mar 20 21:48:02.272: INFO: (10) /api/v1/namespaces/proxy-981/pods/http:proxy-service-v8lgz-jc74b:162/proxy/: bar (200; 5.480049ms) Mar 20 21:48:02.272: INFO: (10) /api/v1/namespaces/proxy-981/services/proxy-service-v8lgz:portname2/proxy/: bar (200; 6.089053ms) Mar 20 21:48:02.273: INFO: (10) /api/v1/namespaces/proxy-981/services/proxy-service-v8lgz:portname1/proxy/: foo (200; 6.523705ms) Mar 20 21:48:02.273: INFO: (10) /api/v1/namespaces/proxy-981/services/http:proxy-service-v8lgz:portname1/proxy/: foo (200; 6.505996ms) Mar 20 21:48:02.276: INFO: (11) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:160/proxy/: foo (200; 3.070976ms) Mar 20 21:48:02.276: INFO: (11) /api/v1/namespaces/proxy-981/pods/https:proxy-service-v8lgz-jc74b:460/proxy/: tls baz (200; 3.18126ms) Mar 20 21:48:02.277: INFO: (11) /api/v1/namespaces/proxy-981/pods/http:proxy-service-v8lgz-jc74b:162/proxy/: bar (200; 4.176919ms) Mar 20 21:48:02.277: INFO: (11) /api/v1/namespaces/proxy-981/services/proxy-service-v8lgz:portname1/proxy/: foo (200; 4.297832ms) Mar 20 21:48:02.277: INFO: (11) /api/v1/namespaces/proxy-981/services/proxy-service-v8lgz:portname2/proxy/: bar (200; 4.241484ms) Mar 20 21:48:02.277: INFO: (11) /api/v1/namespaces/proxy-981/services/http:proxy-service-v8lgz:portname2/proxy/: bar (200; 4.262322ms) Mar 20 21:48:02.277: INFO: (11) /api/v1/namespaces/proxy-981/pods/https:proxy-service-v8lgz-jc74b:443/proxy/: test (200; 4.231465ms) Mar 20 21:48:02.277: INFO: (11) /api/v1/namespaces/proxy-981/pods/http:proxy-service-v8lgz-jc74b:1080/proxy/: t... (200; 4.204289ms) Mar 20 21:48:02.277: INFO: (11) /api/v1/namespaces/proxy-981/pods/http:proxy-service-v8lgz-jc74b:160/proxy/: foo (200; 4.235244ms) Mar 20 21:48:02.277: INFO: (11) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:1080/proxy/: testtesttest (200; 4.082292ms) Mar 20 21:48:02.283: INFO: (12) /api/v1/namespaces/proxy-981/pods/http:proxy-service-v8lgz-jc74b:1080/proxy/: t... 
(200; 4.057173ms) Mar 20 21:48:02.283: INFO: (12) /api/v1/namespaces/proxy-981/services/proxy-service-v8lgz:portname1/proxy/: foo (200; 4.471836ms) Mar 20 21:48:02.283: INFO: (12) /api/v1/namespaces/proxy-981/services/https:proxy-service-v8lgz:tlsportname1/proxy/: tls baz (200; 4.523942ms) Mar 20 21:48:02.283: INFO: (12) /api/v1/namespaces/proxy-981/services/https:proxy-service-v8lgz:tlsportname2/proxy/: tls qux (200; 4.594669ms) Mar 20 21:48:02.283: INFO: (12) /api/v1/namespaces/proxy-981/services/http:proxy-service-v8lgz:portname1/proxy/: foo (200; 4.599468ms) Mar 20 21:48:02.287: INFO: (13) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b/proxy/: test (200; 3.295477ms) Mar 20 21:48:02.287: INFO: (13) /api/v1/namespaces/proxy-981/pods/http:proxy-service-v8lgz-jc74b:162/proxy/: bar (200; 3.756509ms) Mar 20 21:48:02.287: INFO: (13) /api/v1/namespaces/proxy-981/pods/https:proxy-service-v8lgz-jc74b:460/proxy/: tls baz (200; 3.802915ms) Mar 20 21:48:02.287: INFO: (13) /api/v1/namespaces/proxy-981/pods/https:proxy-service-v8lgz-jc74b:443/proxy/: t... (200; 3.813998ms) Mar 20 21:48:02.287: INFO: (13) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:1080/proxy/: testtest (200; 4.32011ms) Mar 20 21:48:02.292: INFO: (14) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:162/proxy/: bar (200; 4.367399ms) Mar 20 21:48:02.292: INFO: (14) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:160/proxy/: foo (200; 4.392719ms) Mar 20 21:48:02.292: INFO: (14) /api/v1/namespaces/proxy-981/pods/http:proxy-service-v8lgz-jc74b:1080/proxy/: t... (200; 4.482423ms) Mar 20 21:48:02.292: INFO: (14) /api/v1/namespaces/proxy-981/pods/http:proxy-service-v8lgz-jc74b:160/proxy/: foo (200; 4.465637ms) Mar 20 21:48:02.292: INFO: (14) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:1080/proxy/: testt... (200; 2.971031ms) Mar 20 21:48:02.296: INFO: (15) /api/v1/namespaces/proxy-981/pods/https:proxy-service-v8lgz-jc74b:460/proxy/: tls baz (200; 2.964039ms) Mar 20 21:48:02.296: INFO: (15) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:1080/proxy/: testtest (200; 3.728566ms) Mar 20 21:48:02.297: INFO: (15) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:162/proxy/: bar (200; 3.807152ms) Mar 20 21:48:02.297: INFO: (15) /api/v1/namespaces/proxy-981/pods/https:proxy-service-v8lgz-jc74b:443/proxy/: test (200; 3.897343ms) Mar 20 21:48:02.302: INFO: (16) /api/v1/namespaces/proxy-981/pods/https:proxy-service-v8lgz-jc74b:443/proxy/: testt... (200; 4.648956ms) Mar 20 21:48:02.303: INFO: (16) /api/v1/namespaces/proxy-981/services/proxy-service-v8lgz:portname1/proxy/: foo (200; 5.18573ms) Mar 20 21:48:02.303: INFO: (16) /api/v1/namespaces/proxy-981/services/proxy-service-v8lgz:portname2/proxy/: bar (200; 5.169837ms) Mar 20 21:48:02.303: INFO: (16) /api/v1/namespaces/proxy-981/services/http:proxy-service-v8lgz:portname1/proxy/: foo (200; 5.188314ms) Mar 20 21:48:02.303: INFO: (16) /api/v1/namespaces/proxy-981/services/http:proxy-service-v8lgz:portname2/proxy/: bar (200; 5.194027ms) Mar 20 21:48:02.312: INFO: (17) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:1080/proxy/: testt... 
(200; 8.66404ms) Mar 20 21:48:02.313: INFO: (17) /api/v1/namespaces/proxy-981/pods/https:proxy-service-v8lgz-jc74b:460/proxy/: tls baz (200; 9.179045ms) Mar 20 21:48:02.313: INFO: (17) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:160/proxy/: foo (200; 9.376676ms) Mar 20 21:48:02.313: INFO: (17) /api/v1/namespaces/proxy-981/pods/https:proxy-service-v8lgz-jc74b:462/proxy/: tls qux (200; 9.498823ms) Mar 20 21:48:02.313: INFO: (17) /api/v1/namespaces/proxy-981/services/proxy-service-v8lgz:portname2/proxy/: bar (200; 9.475699ms) Mar 20 21:48:02.313: INFO: (17) /api/v1/namespaces/proxy-981/services/http:proxy-service-v8lgz:portname2/proxy/: bar (200; 9.49301ms) Mar 20 21:48:02.313: INFO: (17) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:162/proxy/: bar (200; 9.477506ms) Mar 20 21:48:02.313: INFO: (17) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b/proxy/: test (200; 9.513038ms) Mar 20 21:48:02.313: INFO: (17) /api/v1/namespaces/proxy-981/services/https:proxy-service-v8lgz:tlsportname2/proxy/: tls qux (200; 9.569469ms) Mar 20 21:48:02.313: INFO: (17) /api/v1/namespaces/proxy-981/pods/http:proxy-service-v8lgz-jc74b:162/proxy/: bar (200; 9.719801ms) Mar 20 21:48:02.313: INFO: (17) /api/v1/namespaces/proxy-981/services/https:proxy-service-v8lgz:tlsportname1/proxy/: tls baz (200; 9.816255ms) Mar 20 21:48:02.313: INFO: (17) /api/v1/namespaces/proxy-981/services/http:proxy-service-v8lgz:portname1/proxy/: foo (200; 9.780919ms) Mar 20 21:48:02.316: INFO: (18) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:1080/proxy/: testtest (200; 2.429325ms) Mar 20 21:48:02.316: INFO: (18) /api/v1/namespaces/proxy-981/pods/https:proxy-service-v8lgz-jc74b:462/proxy/: tls qux (200; 2.499225ms) Mar 20 21:48:02.316: INFO: (18) /api/v1/namespaces/proxy-981/pods/http:proxy-service-v8lgz-jc74b:1080/proxy/: t... (200; 2.597807ms) Mar 20 21:48:02.316: INFO: (18) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:162/proxy/: bar (200; 2.680375ms) Mar 20 21:48:02.318: INFO: (18) /api/v1/namespaces/proxy-981/pods/http:proxy-service-v8lgz-jc74b:160/proxy/: foo (200; 4.315028ms) Mar 20 21:48:02.318: INFO: (18) /api/v1/namespaces/proxy-981/services/https:proxy-service-v8lgz:tlsportname2/proxy/: tls qux (200; 4.62813ms) Mar 20 21:48:02.318: INFO: (18) /api/v1/namespaces/proxy-981/pods/http:proxy-service-v8lgz-jc74b:162/proxy/: bar (200; 4.569144ms) Mar 20 21:48:02.318: INFO: (18) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:160/proxy/: foo (200; 4.58041ms) Mar 20 21:48:02.318: INFO: (18) /api/v1/namespaces/proxy-981/pods/https:proxy-service-v8lgz-jc74b:443/proxy/: t... 
(200; 3.778518ms) Mar 20 21:48:02.323: INFO: (19) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b/proxy/: test (200; 3.891245ms) Mar 20 21:48:02.323: INFO: (19) /api/v1/namespaces/proxy-981/pods/http:proxy-service-v8lgz-jc74b:160/proxy/: foo (200; 4.488493ms) Mar 20 21:48:02.323: INFO: (19) /api/v1/namespaces/proxy-981/pods/http:proxy-service-v8lgz-jc74b:162/proxy/: bar (200; 4.60342ms) Mar 20 21:48:02.323: INFO: (19) /api/v1/namespaces/proxy-981/services/proxy-service-v8lgz:portname2/proxy/: bar (200; 4.833405ms) Mar 20 21:48:02.324: INFO: (19) /api/v1/namespaces/proxy-981/pods/https:proxy-service-v8lgz-jc74b:460/proxy/: tls baz (200; 4.801454ms) Mar 20 21:48:02.324: INFO: (19) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:160/proxy/: foo (200; 4.703481ms) Mar 20 21:48:02.324: INFO: (19) /api/v1/namespaces/proxy-981/pods/proxy-service-v8lgz-jc74b:1080/proxy/: test>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 20 21:48:09.651: INFO: Waiting up to 5m0s for pod "pod-2a8fbd75-d7aa-485e-8c9f-91a61b1ba35a" in namespace "emptydir-4682" to be "success or failure" Mar 20 21:48:09.655: INFO: Pod "pod-2a8fbd75-d7aa-485e-8c9f-91a61b1ba35a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008522ms Mar 20 21:48:11.659: INFO: Pod "pod-2a8fbd75-d7aa-485e-8c9f-91a61b1ba35a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008305675s Mar 20 21:48:13.664: INFO: Pod "pod-2a8fbd75-d7aa-485e-8c9f-91a61b1ba35a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012574471s STEP: Saw pod success Mar 20 21:48:13.664: INFO: Pod "pod-2a8fbd75-d7aa-485e-8c9f-91a61b1ba35a" satisfied condition "success or failure" Mar 20 21:48:13.667: INFO: Trying to get logs from node jerma-worker2 pod pod-2a8fbd75-d7aa-485e-8c9f-91a61b1ba35a container test-container: STEP: delete the pod Mar 20 21:48:13.698: INFO: Waiting for pod pod-2a8fbd75-d7aa-485e-8c9f-91a61b1ba35a to disappear Mar 20 21:48:13.712: INFO: Pod pod-2a8fbd75-d7aa-485e-8c9f-91a61b1ba35a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:48:13.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4682" for this suite. 
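The emptydir case above reduces to: mount an emptyDir volume on the default medium (the node's disk, as opposed to medium: Memory), run as a non-root UID, write a file with mode 0666 and read the mode back. A rough hand-run equivalent, with a hypothetical pod name and busybox standing in for the test's mounttest image:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001            # the non-root variant of the test
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}               # no medium field = default medium (node filesystem)
EOF
kubectl logs emptydir-0666-demo   # once Succeeded, expect -rw-rw-rw- for /test-volume/f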
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":137,"skipped":2044,"failed":0} SS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:48:13.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:48:45.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2925" for this suite. 
• [SLOW TEST:31.557 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":138,"skipped":2046,"failed":0} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:48:45.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-7d94 STEP: Creating a pod to test atomic-volume-subpath Mar 20 21:48:45.343: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-7d94" in namespace "subpath-9969" to be "success or failure" Mar 20 21:48:45.389: INFO: Pod "pod-subpath-test-downwardapi-7d94": Phase="Pending", Reason="", readiness=false. Elapsed: 46.351724ms Mar 20 21:48:47.393: INFO: Pod "pod-subpath-test-downwardapi-7d94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049903571s Mar 20 21:48:49.397: INFO: Pod "pod-subpath-test-downwardapi-7d94": Phase="Running", Reason="", readiness=true. Elapsed: 4.053749971s Mar 20 21:48:51.400: INFO: Pod "pod-subpath-test-downwardapi-7d94": Phase="Running", Reason="", readiness=true. Elapsed: 6.057127178s Mar 20 21:48:53.404: INFO: Pod "pod-subpath-test-downwardapi-7d94": Phase="Running", Reason="", readiness=true. Elapsed: 8.061068243s Mar 20 21:48:55.408: INFO: Pod "pod-subpath-test-downwardapi-7d94": Phase="Running", Reason="", readiness=true. Elapsed: 10.06474737s Mar 20 21:48:57.412: INFO: Pod "pod-subpath-test-downwardapi-7d94": Phase="Running", Reason="", readiness=true. Elapsed: 12.068893438s Mar 20 21:48:59.418: INFO: Pod "pod-subpath-test-downwardapi-7d94": Phase="Running", Reason="", readiness=true. Elapsed: 14.074863994s Mar 20 21:49:01.422: INFO: Pod "pod-subpath-test-downwardapi-7d94": Phase="Running", Reason="", readiness=true. Elapsed: 16.078877864s Mar 20 21:49:03.426: INFO: Pod "pod-subpath-test-downwardapi-7d94": Phase="Running", Reason="", readiness=true. Elapsed: 18.08346325s Mar 20 21:49:05.431: INFO: Pod "pod-subpath-test-downwardapi-7d94": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.087525166s Mar 20 21:49:07.434: INFO: Pod "pod-subpath-test-downwardapi-7d94": Phase="Running", Reason="", readiness=true. Elapsed: 22.091420189s Mar 20 21:49:09.438: INFO: Pod "pod-subpath-test-downwardapi-7d94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.095205225s STEP: Saw pod success Mar 20 21:49:09.438: INFO: Pod "pod-subpath-test-downwardapi-7d94" satisfied condition "success or failure" Mar 20 21:49:09.441: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-downwardapi-7d94 container test-container-subpath-downwardapi-7d94: STEP: delete the pod Mar 20 21:49:09.477: INFO: Waiting for pod pod-subpath-test-downwardapi-7d94 to disappear Mar 20 21:49:09.482: INFO: Pod pod-subpath-test-downwardapi-7d94 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-7d94 Mar 20 21:49:09.482: INFO: Deleting pod "pod-subpath-test-downwardapi-7d94" in namespace "subpath-9969" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:49:09.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9969" for this suite. • [SLOW TEST:24.216 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":139,"skipped":2049,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:49:09.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:49:23.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8794" for this suite. 
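The Job above ("tasks sometimes fail and are locally restarted") hinges on restartPolicy: OnFailure: the kubelet restarts the failed container inside the same pod instead of the Job controller creating a replacement pod. A deterministic way to reproduce the fail-then-succeed-in-place pattern is a marker file on an emptyDir volume, which survives container restarts within a pod; all names here are hypothetical:

cat <<'EOF' | kubectl create -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: fail-once-locally
spec:
  completions: 2
  parallelism: 2
  template:
    spec:
      restartPolicy: OnFailure        # restart in place; do not replace the pod
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "if [ ! -f /data/tried ]; then touch /data/tried; exit 1; fi; exit 0"]
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        emptyDir: {}                  # persists across local container restarts
EOF
kubectl wait --for=condition=complete job/fail-once-locally --timeout=120s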
• [SLOW TEST:14.135 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":140,"skipped":2076,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:49:23.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 20 21:49:28.232: INFO: Successfully updated pod "pod-update-44218a6e-d2da-4eb9-b5eb-31af2cb43107" STEP: verifying the updated pod is in kubernetes Mar 20 21:49:28.240: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:49:28.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-966" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2102,"failed":0} ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:49:28.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod Mar 20 21:49:32.315: INFO: Pod pod-hostip-722448bd-d432-4fde-9da4-5c26eb89eb00 has hostIP: 172.17.0.8 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:49:32.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2343" for this suite. 
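The host-IP assertion above is just a read of status.hostIP, which the kubelet fills in once the pod is bound to a node (172.17.0.8 here is one of the kind worker nodes). Checked by hand, with a hypothetical pod name and an arbitrary long-running image:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: hostip-demo
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1   # any image that stays running works
EOF
kubectl wait --for=condition=Ready pod/hostip-demo --timeout=60s
kubectl get pod hostip-demo -o jsonpath='{.status.hostIP}{"\n"}'
kubectl get pod hostip-demo -o wide   # cross-check against the node's address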
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2102,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:49:32.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-eec19b3e-2933-44c4-9f58-e11262dae54e STEP: Creating a pod to test consume configMaps Mar 20 21:49:32.470: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ea51e2b5-e612-4925-a7fc-72b85e0ba23f" in namespace "projected-5427" to be "success or failure" Mar 20 21:49:32.478: INFO: Pod "pod-projected-configmaps-ea51e2b5-e612-4925-a7fc-72b85e0ba23f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.831488ms Mar 20 21:49:34.481: INFO: Pod "pod-projected-configmaps-ea51e2b5-e612-4925-a7fc-72b85e0ba23f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011501517s Mar 20 21:49:36.516: INFO: Pod "pod-projected-configmaps-ea51e2b5-e612-4925-a7fc-72b85e0ba23f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046075665s STEP: Saw pod success Mar 20 21:49:36.516: INFO: Pod "pod-projected-configmaps-ea51e2b5-e612-4925-a7fc-72b85e0ba23f" satisfied condition "success or failure" Mar 20 21:49:36.519: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-ea51e2b5-e612-4925-a7fc-72b85e0ba23f container projected-configmap-volume-test: STEP: delete the pod Mar 20 21:49:36.541: INFO: Waiting for pod pod-projected-configmaps-ea51e2b5-e612-4925-a7fc-72b85e0ba23f to disappear Mar 20 21:49:36.596: INFO: Pod pod-projected-configmaps-ea51e2b5-e612-4925-a7fc-72b85e0ba23f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:49:36.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5427" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2111,"failed":0} SS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:49:36.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-8488 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8488 to expose endpoints map[] Mar 20 21:49:36.717: INFO: Get endpoints failed (24.25748ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Mar 20 21:49:37.720: INFO: successfully validated that service endpoint-test2 in namespace services-8488 exposes endpoints map[] (1.027933217s elapsed) STEP: Creating pod pod1 in namespace services-8488 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8488 to expose endpoints map[pod1:[80]] Mar 20 21:49:41.885: INFO: successfully validated that service endpoint-test2 in namespace services-8488 exposes endpoints map[pod1:[80]] (4.158415252s elapsed) STEP: Creating pod pod2 in namespace services-8488 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8488 to expose endpoints map[pod1:[80] pod2:[80]] Mar 20 21:49:44.929: INFO: successfully validated that service endpoint-test2 in namespace services-8488 exposes endpoints map[pod1:[80] pod2:[80]] (3.039459047s elapsed) STEP: Deleting pod pod1 in namespace services-8488 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8488 to expose endpoints map[pod2:[80]] Mar 20 21:49:44.997: INFO: successfully validated that service endpoint-test2 in namespace services-8488 exposes endpoints map[pod2:[80]] (63.026074ms elapsed) STEP: Deleting pod pod2 in namespace services-8488 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8488 to expose endpoints map[] Mar 20 21:49:46.021: INFO: successfully validated that service endpoint-test2 in namespace services-8488 exposes endpoints map[] (1.020977582s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:49:46.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8488" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:9.549 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":144,"skipped":2113,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:49:46.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Mar 20 21:49:46.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4761' Mar 20 21:49:46.610: INFO: stderr: "" Mar 20 21:49:46.610: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 20 21:49:46.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4761' Mar 20 21:49:46.756: INFO: stderr: "" Mar 20 21:49:46.756: INFO: stdout: "update-demo-nautilus-w2wfl update-demo-nautilus-zlkqg " Mar 20 21:49:46.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w2wfl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4761' Mar 20 21:49:46.911: INFO: stderr: "" Mar 20 21:49:46.911: INFO: stdout: "" Mar 20 21:49:46.911: INFO: update-demo-nautilus-w2wfl is created but not running Mar 20 21:49:51.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4761' Mar 20 21:49:52.020: INFO: stderr: "" Mar 20 21:49:52.020: INFO: stdout: "update-demo-nautilus-w2wfl update-demo-nautilus-zlkqg " Mar 20 21:49:52.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w2wfl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4761' Mar 20 21:49:52.119: INFO: stderr: "" Mar 20 21:49:52.119: INFO: stdout: "true" Mar 20 21:49:52.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w2wfl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4761' Mar 20 21:49:52.213: INFO: stderr: "" Mar 20 21:49:52.213: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 20 21:49:52.213: INFO: validating pod update-demo-nautilus-w2wfl Mar 20 21:49:52.217: INFO: got data: { "image": "nautilus.jpg" } Mar 20 21:49:52.217: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 20 21:49:52.217: INFO: update-demo-nautilus-w2wfl is verified up and running Mar 20 21:49:52.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zlkqg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4761' Mar 20 21:49:52.301: INFO: stderr: "" Mar 20 21:49:52.301: INFO: stdout: "true" Mar 20 21:49:52.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zlkqg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4761' Mar 20 21:49:52.391: INFO: stderr: "" Mar 20 21:49:52.391: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 20 21:49:52.391: INFO: validating pod update-demo-nautilus-zlkqg Mar 20 21:49:52.395: INFO: got data: { "image": "nautilus.jpg" } Mar 20 21:49:52.395: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 20 21:49:52.395: INFO: update-demo-nautilus-zlkqg is verified up and running STEP: scaling down the replication controller Mar 20 21:49:52.398: INFO: scanned /root for discovery docs: Mar 20 21:49:52.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-4761' Mar 20 21:49:53.557: INFO: stderr: "" Mar 20 21:49:53.557: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 20 21:49:53.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4761' Mar 20 21:49:53.657: INFO: stderr: "" Mar 20 21:49:53.657: INFO: stdout: "update-demo-nautilus-w2wfl update-demo-nautilus-zlkqg " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 20 21:49:58.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4761' Mar 20 21:49:58.763: INFO: stderr: "" Mar 20 21:49:58.763: INFO: stdout: "update-demo-nautilus-w2wfl update-demo-nautilus-zlkqg " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 20 21:50:03.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4761' Mar 20 21:50:03.855: INFO: stderr: "" Mar 20 21:50:03.855: INFO: stdout: "update-demo-nautilus-zlkqg " Mar 20 21:50:03.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zlkqg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4761' Mar 20 21:50:03.941: INFO: stderr: "" Mar 20 21:50:03.941: INFO: stdout: "true" Mar 20 21:50:03.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zlkqg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4761' Mar 20 21:50:04.026: INFO: stderr: "" Mar 20 21:50:04.026: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 20 21:50:04.026: INFO: validating pod update-demo-nautilus-zlkqg Mar 20 21:50:04.029: INFO: got data: { "image": "nautilus.jpg" } Mar 20 21:50:04.029: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 20 21:50:04.029: INFO: update-demo-nautilus-zlkqg is verified up and running STEP: scaling up the replication controller Mar 20 21:50:04.032: INFO: scanned /root for discovery docs: Mar 20 21:50:04.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-4761' Mar 20 21:50:05.152: INFO: stderr: "" Mar 20 21:50:05.152: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 20 21:50:05.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4761' Mar 20 21:50:05.241: INFO: stderr: "" Mar 20 21:50:05.241: INFO: stdout: "update-demo-nautilus-dhrbb update-demo-nautilus-zlkqg " Mar 20 21:50:05.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dhrbb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4761' Mar 20 21:50:05.329: INFO: stderr: "" Mar 20 21:50:05.329: INFO: stdout: "" Mar 20 21:50:05.329: INFO: update-demo-nautilus-dhrbb is created but not running Mar 20 21:50:10.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4761' Mar 20 21:50:10.441: INFO: stderr: "" Mar 20 21:50:10.441: INFO: stdout: "update-demo-nautilus-dhrbb update-demo-nautilus-zlkqg " Mar 20 21:50:10.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dhrbb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4761' Mar 20 21:50:10.537: INFO: stderr: "" Mar 20 21:50:10.537: INFO: stdout: "true" Mar 20 21:50:10.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dhrbb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4761' Mar 20 21:50:10.626: INFO: stderr: "" Mar 20 21:50:10.626: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 20 21:50:10.626: INFO: validating pod update-demo-nautilus-dhrbb Mar 20 21:50:10.630: INFO: got data: { "image": "nautilus.jpg" } Mar 20 21:50:10.630: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 20 21:50:10.630: INFO: update-demo-nautilus-dhrbb is verified up and running Mar 20 21:50:10.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zlkqg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4761' Mar 20 21:50:10.719: INFO: stderr: "" Mar 20 21:50:10.719: INFO: stdout: "true" Mar 20 21:50:10.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zlkqg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4761' Mar 20 21:50:10.807: INFO: stderr: "" Mar 20 21:50:10.807: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 20 21:50:10.807: INFO: validating pod update-demo-nautilus-zlkqg Mar 20 21:50:10.810: INFO: got data: { "image": "nautilus.jpg" } Mar 20 21:50:10.810: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 20 21:50:10.810: INFO: update-demo-nautilus-zlkqg is verified up and running STEP: using delete to clean up resources Mar 20 21:50:10.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4761' Mar 20 21:50:10.913: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 20 21:50:10.913: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 20 21:50:10.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4761' Mar 20 21:50:11.015: INFO: stderr: "No resources found in kubectl-4761 namespace.\n" Mar 20 21:50:11.015: INFO: stdout: "" Mar 20 21:50:11.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4761 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 20 21:50:11.125: INFO: stderr: "" Mar 20 21:50:11.125: INFO: stdout: "update-demo-nautilus-dhrbb\nupdate-demo-nautilus-zlkqg\n" Mar 20 21:50:11.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4761' Mar 20 21:50:11.730: INFO: stderr: "No resources found in kubectl-4761 namespace.\n" Mar 20 21:50:11.730: INFO: stdout: "" Mar 20 21:50:11.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4761 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 20 21:50:11.822: INFO: stderr: "" Mar 20 21:50:11.822: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:50:11.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4761" for this suite. • [SLOW TEST:25.775 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:328 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":145,"skipped":2132,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:50:11.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0320 21:50:52.225698 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
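The orphaning behaviour exercised by this spec comes from the delete options sent with the rc deletion (a propagationPolicy of Orphan), not from anything in the pods themselves. A minimal way to reproduce it by hand is sketched below; the rc name and pod label are illustrative, not the framework's, and on kubectl v1.17 the orphaning behaviour is spelled --cascade=false (newer releases use --cascade=orphan):

  # Delete only the rc object; its pods are orphaned rather than garbage-collected
  kubectl delete rc test-rc --cascade=false

  # The pods survive the deletion, with their ownerReference to the rc removed
  kubectl get pods -l name=test-rc \
    -o jsonpath='{range .items[*]}{.metadata.name} {.metadata.ownerReferences}{"\n"}{end}'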
Mar 20 21:50:52.225: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:50:52.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-435" for this suite. • [SLOW TEST:40.304 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":146,"skipped":2162,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:50:52.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1596 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 20 21:50:52.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5135' Mar 20 21:50:52.455: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 20 21:50:52.455: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1602 Mar 20 21:50:54.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-5135' Mar 20 21:50:54.592: INFO: stderr: "" Mar 20 21:50:54.592: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:50:54.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5135" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":147,"skipped":2171,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:50:54.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 20 21:50:54.674: INFO: Waiting up to 5m0s for pod "pod-603dfd45-b18a-48c4-aea4-e882246e9b45" in namespace "emptydir-312" to be "success or failure" Mar 20 21:50:54.679: INFO: Pod "pod-603dfd45-b18a-48c4-aea4-e882246e9b45": Phase="Pending", Reason="", readiness=false. Elapsed: 4.989876ms Mar 20 21:50:56.684: INFO: Pod "pod-603dfd45-b18a-48c4-aea4-e882246e9b45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009655553s Mar 20 21:50:58.733: INFO: Pod "pod-603dfd45-b18a-48c4-aea4-e882246e9b45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058514258s STEP: Saw pod success Mar 20 21:50:58.733: INFO: Pod "pod-603dfd45-b18a-48c4-aea4-e882246e9b45" satisfied condition "success or failure" Mar 20 21:50:58.768: INFO: Trying to get logs from node jerma-worker pod pod-603dfd45-b18a-48c4-aea4-e882246e9b45 container test-container: STEP: delete the pod Mar 20 21:50:58.959: INFO: Waiting for pod pod-603dfd45-b18a-48c4-aea4-e882246e9b45 to disappear Mar 20 21:50:58.965: INFO: Pod pod-603dfd45-b18a-48c4-aea4-e882246e9b45 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:50:58.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-312" for this suite. 
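The (non-root,0666,tmpfs) variant above boils down to a memory-backed emptyDir, a non-root security context, and a file created with mode 0666. A rough stand-alone equivalent, as a sketch (image, user id, and paths are illustrative, not the ones the framework uses):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-tmpfs-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001              # non-root, as in the test variant
    containers:
    - name: test-container
      image: busybox:1.29
      command: ["sh", "-c", "touch /mnt/f && chmod 0666 /mnt/f && ls -l /mnt/f"]
      volumeMounts:
      - name: scratch
        mountPath: /mnt
    volumes:
    - name: scratch
      emptyDir:
        medium: Memory             # tmpfs rather than node disk
  EOF
  kubectl logs emptydir-tmpfs-demo   # should show -rw-rw-rw-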
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2181,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:50:59.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-1046/secret-test-1d844f85-1121-42c6-90b2-fc6a23603f6b STEP: Creating a pod to test consume secrets Mar 20 21:50:59.446: INFO: Waiting up to 5m0s for pod "pod-configmaps-250c101d-19f9-44e4-85db-72410c78d6c2" in namespace "secrets-1046" to be "success or failure" Mar 20 21:50:59.450: INFO: Pod "pod-configmaps-250c101d-19f9-44e4-85db-72410c78d6c2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.690277ms Mar 20 21:51:01.475: INFO: Pod "pod-configmaps-250c101d-19f9-44e4-85db-72410c78d6c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028413366s Mar 20 21:51:03.479: INFO: Pod "pod-configmaps-250c101d-19f9-44e4-85db-72410c78d6c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032501542s Mar 20 21:51:05.482: INFO: Pod "pod-configmaps-250c101d-19f9-44e4-85db-72410c78d6c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.035365734s STEP: Saw pod success Mar 20 21:51:05.482: INFO: Pod "pod-configmaps-250c101d-19f9-44e4-85db-72410c78d6c2" satisfied condition "success or failure" Mar 20 21:51:05.484: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-250c101d-19f9-44e4-85db-72410c78d6c2 container env-test: STEP: delete the pod Mar 20 21:51:05.519: INFO: Waiting for pod pod-configmaps-250c101d-19f9-44e4-85db-72410c78d6c2 to disappear Mar 20 21:51:05.545: INFO: Pod pod-configmaps-250c101d-19f9-44e4-85db-72410c78d6c2 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:51:05.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1046" for this suite. 
• [SLOW TEST:6.531 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":149,"skipped":2222,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:51:05.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1632 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 20 21:51:05.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-9145' Mar 20 21:51:05.681: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 20 21:51:05.681: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Mar 20 21:51:05.719: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-56dvx] Mar 20 21:51:05.719: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-56dvx" in namespace "kubectl-9145" to be "running and ready" Mar 20 21:51:05.734: INFO: Pod "e2e-test-httpd-rc-56dvx": Phase="Pending", Reason="", readiness=false. Elapsed: 14.138522ms Mar 20 21:51:07.737: INFO: Pod "e2e-test-httpd-rc-56dvx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017735254s Mar 20 21:51:09.741: INFO: Pod "e2e-test-httpd-rc-56dvx": Phase="Running", Reason="", readiness=true. Elapsed: 4.021957938s Mar 20 21:51:09.741: INFO: Pod "e2e-test-httpd-rc-56dvx" satisfied condition "running and ready" Mar 20 21:51:09.741: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-56dvx] Mar 20 21:51:09.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-9145' Mar 20 21:51:09.864: INFO: stderr: "" Mar 20 21:51:09.864: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.32. 
Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.32. Set the 'ServerName' directive globally to suppress this message\n[Fri Mar 20 21:51:07.910586 2020] [mpm_event:notice] [pid 1:tid 140499064261480] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Fri Mar 20 21:51:07.910643 2020] [core:notice] [pid 1:tid 140499064261480] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1637 Mar 20 21:51:09.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-9145' Mar 20 21:51:09.963: INFO: stderr: "" Mar 20 21:51:09.963: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:51:09.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9145" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":150,"skipped":2233,"failed":0} S ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:51:09.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-b3cd9717-156f-4fee-92d0-650328e42aa5 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-b3cd9717-156f-4fee-92d0-650328e42aa5 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:52:30.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2611" for this suite. 
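The 80-odd seconds this ConfigMap update spec takes is dominated by the kubelet's periodic volume sync: configMap volumes are refreshed on the kubelet's sync loop, not instantly on API update, so the test has to wait for the projected file to catch up. A sketch of the same round trip (names and image illustrative):

  kubectl create configmap demo-cm --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: cm-volume-demo
  spec:
    containers:
    - name: test-container
      image: busybox:1.29
      command: ["sh", "-c", "while true; do cat /etc/cm/data-1; echo; sleep 5; done"]
      volumeMounts:
      - name: cm
        mountPath: /etc/cm
    volumes:
    - name: cm
      configMap:
        name: demo-cm
  EOF

  # Mutate the data in place; the projected file updates on the next kubelet sync
  kubectl patch configmap demo-cm -p '{"data":{"data-1":"value-2"}}'
  kubectl logs -f cm-volume-demo   # value-1 ... then value-2 after the sync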
• [SLOW TEST:80.767 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2234,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:52:30.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 20 21:52:32.057: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 20 21:52:34.067: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720337952, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720337952, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720337952, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720337952, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 20 21:52:37.258: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:52:37.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2160" for this suite. 
STEP: Destroying namespace "webhook-2160-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.238 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":152,"skipped":2252,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:52:37.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin Mar 20 21:52:38.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5406 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Mar 20 21:52:41.348: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0320 21:52:41.275662 1780 log.go:172] (0xc000b409a0) (0xc0006459a0) Create stream\nI0320 21:52:41.275726 1780 log.go:172] (0xc000b409a0) (0xc0006459a0) Stream added, broadcasting: 1\nI0320 21:52:41.279519 1780 log.go:172] (0xc000b409a0) Reply frame received for 1\nI0320 21:52:41.279566 1780 log.go:172] (0xc000b409a0) (0xc000836000) Create stream\nI0320 21:52:41.279581 1780 log.go:172] (0xc000b409a0) (0xc000836000) Stream added, broadcasting: 3\nI0320 21:52:41.280807 1780 log.go:172] (0xc000b409a0) Reply frame received for 3\nI0320 21:52:41.280850 1780 log.go:172] (0xc000b409a0) (0xc000a2e140) Create stream\nI0320 21:52:41.280867 1780 log.go:172] (0xc000b409a0) (0xc000a2e140) Stream added, broadcasting: 5\nI0320 21:52:41.283374 1780 log.go:172] (0xc000b409a0) Reply frame received for 5\nI0320 21:52:41.283414 1780 log.go:172] (0xc000b409a0) (0xc000645a40) Create stream\nI0320 21:52:41.283428 1780 log.go:172] (0xc000b409a0) (0xc000645a40) Stream added, broadcasting: 7\nI0320 21:52:41.284232 1780 log.go:172] (0xc000b409a0) Reply frame received for 7\nI0320 21:52:41.284478 1780 log.go:172] (0xc000836000) (3) Writing data frame\nI0320 21:52:41.284620 1780 log.go:172] (0xc000836000) (3) Writing data frame\nI0320 21:52:41.285444 1780 log.go:172] (0xc000b409a0) Data frame received for 5\nI0320 21:52:41.285469 1780 log.go:172] (0xc000a2e140) (5) Data frame handling\nI0320 21:52:41.285493 1780 log.go:172] (0xc000a2e140) (5) Data frame sent\nI0320 21:52:41.286035 1780 log.go:172] (0xc000b409a0) Data frame received for 5\nI0320 21:52:41.286053 1780 log.go:172] (0xc000a2e140) (5) Data frame handling\nI0320 21:52:41.286072 1780 log.go:172] (0xc000a2e140) (5) Data frame sent\nI0320 21:52:41.315666 1780 log.go:172] (0xc000b409a0) Data frame received for 5\nI0320 21:52:41.315704 1780 log.go:172] (0xc000a2e140) (5) Data frame handling\nI0320 21:52:41.315732 1780 log.go:172] (0xc000b409a0) Data frame received for 7\nI0320 21:52:41.315750 1780 log.go:172] (0xc000645a40) (7) Data frame handling\nI0320 21:52:41.316078 1780 log.go:172] (0xc000b409a0) Data frame received for 1\nI0320 21:52:41.316126 1780 log.go:172] (0xc0006459a0) (1) Data frame handling\nI0320 21:52:41.316161 1780 log.go:172] (0xc0006459a0) (1) Data frame sent\nI0320 21:52:41.316274 1780 log.go:172] (0xc000b409a0) (0xc000836000) Stream removed, broadcasting: 3\nI0320 21:52:41.316346 1780 log.go:172] (0xc000b409a0) (0xc0006459a0) Stream removed, broadcasting: 1\nI0320 21:52:41.316379 1780 log.go:172] (0xc000b409a0) Go away received\nI0320 21:52:41.316732 1780 log.go:172] (0xc000b409a0) (0xc0006459a0) Stream removed, broadcasting: 1\nI0320 21:52:41.316758 1780 log.go:172] (0xc000b409a0) (0xc000836000) Stream removed, broadcasting: 3\nI0320 21:52:41.316769 1780 log.go:172] (0xc000b409a0) (0xc000a2e140) Stream removed, broadcasting: 5\nI0320 21:52:41.316781 1780 log.go:172] (0xc000b409a0) (0xc000645a40) Stream removed, broadcasting: 7\n" Mar 20 21:52:41.348: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:52:43.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5406" for this suite. 
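Reproducing that attach-with-stdin run by hand looks roughly like the following (kubectl v1.17-era; --generator=job/v1 is the deprecated generator the log warns about, and it is gone from newer kubectl):

  printf 'abcd1234' | kubectl run e2e-test-rm-busybox-job \
    --image=docker.io/library/busybox:1.29 \
    --rm --generator=job/v1 --restart=OnFailure \
    --attach --stdin -- sh -c 'cat && echo stdin closed'

  # --rm deletes the job once the attached session ends
  kubectl get job e2e-test-rm-busybox-job   # NotFound afterwards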
• [SLOW TEST:5.387 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1944 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":153,"skipped":2291,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:52:43.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-f0f7688d-dba4-4e3a-b21c-eab2eff8e0d0 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:52:47.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2277" for this suite. 
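The binary-data case rests on the ConfigMap's binaryData field: kubectl routes non-UTF-8 file contents there (base64-encoded in the stored object) while UTF-8 values stay under data. A sketch:

  # Fabricate a few non-UTF-8 bytes
  printf '\xff\xfe\xfd' > blob.bin

  kubectl create configmap binary-demo \
    --from-literal=text=hello \
    --from-file=blob=blob.bin

  # text lands under .data, blob under .binaryData
  kubectl get configmap binary-demo -o yaml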
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":154,"skipped":2294,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:52:47.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-2741 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-2741 STEP: creating replication controller externalsvc in namespace services-2741 I0320 21:52:47.682232 7 runners.go:189] Created replication controller with name: externalsvc, namespace: services-2741, replica count: 2 I0320 21:52:50.732794 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0320 21:52:53.733049 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Mar 20 21:52:53.784: INFO: Creating new exec pod Mar 20 21:52:57.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2741 execpodw9j9s -- /bin/sh -x -c nslookup clusterip-service' Mar 20 21:52:58.037: INFO: stderr: "I0320 21:52:57.962569 1804 log.go:172] (0xc0000f4b00) (0xc000629f40) Create stream\nI0320 21:52:57.962626 1804 log.go:172] (0xc0000f4b00) (0xc000629f40) Stream added, broadcasting: 1\nI0320 21:52:57.964765 1804 log.go:172] (0xc0000f4b00) Reply frame received for 1\nI0320 21:52:57.964795 1804 log.go:172] (0xc0000f4b00) (0xc0005de820) Create stream\nI0320 21:52:57.964807 1804 log.go:172] (0xc0000f4b00) (0xc0005de820) Stream added, broadcasting: 3\nI0320 21:52:57.965838 1804 log.go:172] (0xc0000f4b00) Reply frame received for 3\nI0320 21:52:57.965913 1804 log.go:172] (0xc0000f4b00) (0xc00070b5e0) Create stream\nI0320 21:52:57.965942 1804 log.go:172] (0xc0000f4b00) (0xc00070b5e0) Stream added, broadcasting: 5\nI0320 21:52:57.966919 1804 log.go:172] (0xc0000f4b00) Reply frame received for 5\nI0320 21:52:58.019515 1804 log.go:172] (0xc0000f4b00) Data frame received for 5\nI0320 21:52:58.019540 1804 log.go:172] (0xc00070b5e0) (5) Data frame handling\nI0320 21:52:58.019560 1804 log.go:172] (0xc00070b5e0) (5) Data frame sent\n+ nslookup clusterip-service\nI0320 21:52:58.028977 1804 log.go:172] (0xc0000f4b00) Data frame received for 3\nI0320 21:52:58.029005 1804 log.go:172] (0xc0005de820) (3) Data frame handling\nI0320 21:52:58.029022 1804 log.go:172] (0xc0005de820) (3) Data frame sent\nI0320 
21:52:58.030268 1804 log.go:172] (0xc0000f4b00) Data frame received for 3\nI0320 21:52:58.030286 1804 log.go:172] (0xc0005de820) (3) Data frame handling\nI0320 21:52:58.030299 1804 log.go:172] (0xc0005de820) (3) Data frame sent\nI0320 21:52:58.030984 1804 log.go:172] (0xc0000f4b00) Data frame received for 3\nI0320 21:52:58.031004 1804 log.go:172] (0xc0005de820) (3) Data frame handling\nI0320 21:52:58.031153 1804 log.go:172] (0xc0000f4b00) Data frame received for 5\nI0320 21:52:58.031168 1804 log.go:172] (0xc00070b5e0) (5) Data frame handling\nI0320 21:52:58.032743 1804 log.go:172] (0xc0000f4b00) Data frame received for 1\nI0320 21:52:58.032832 1804 log.go:172] (0xc000629f40) (1) Data frame handling\nI0320 21:52:58.032880 1804 log.go:172] (0xc000629f40) (1) Data frame sent\nI0320 21:52:58.032901 1804 log.go:172] (0xc0000f4b00) (0xc000629f40) Stream removed, broadcasting: 1\nI0320 21:52:58.033226 1804 log.go:172] (0xc0000f4b00) Go away received\nI0320 21:52:58.033392 1804 log.go:172] (0xc0000f4b00) (0xc000629f40) Stream removed, broadcasting: 1\nI0320 21:52:58.033430 1804 log.go:172] (0xc0000f4b00) (0xc0005de820) Stream removed, broadcasting: 3\nI0320 21:52:58.033447 1804 log.go:172] (0xc0000f4b00) (0xc00070b5e0) Stream removed, broadcasting: 5\n" Mar 20 21:52:58.037: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-2741.svc.cluster.local\tcanonical name = externalsvc.services-2741.svc.cluster.local.\nName:\texternalsvc.services-2741.svc.cluster.local\nAddress: 10.103.192.42\n\n" STEP: deleting ReplicationController externalsvc in namespace services-2741, will wait for the garbage collector to delete the pods Mar 20 21:52:58.097: INFO: Deleting ReplicationController externalsvc took: 7.14518ms Mar 20 21:52:58.197: INFO: Terminating ReplicationController externalsvc pods took: 100.242426ms Mar 20 21:53:09.335: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:53:09.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2741" for this suite. 
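The type flip itself is a single service update that sets spec.type and spec.externalName and clears the now-meaningless spec.clusterIP; once applied, the cluster DNS answers with a CNAME, which is exactly what the nslookup output above shows. A hedged sketch of the same change via a merge patch, while the namespace and services still exist:

  kubectl patch svc clusterip-service -n services-2741 --type=merge -p \
    '{"spec":{"type":"ExternalName","externalName":"externalsvc.services-2741.svc.cluster.local","clusterIP":""}}'

  # Verify the CNAME from inside the cluster
  kubectl run dns-check -n services-2741 --rm -i --restart=Never \
    --image=busybox:1.29 -- nslookup clusterip-service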
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:21.860 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":155,"skipped":2306,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:53:09.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 20 21:53:09.438: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:53:16.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6875" for this suite. 
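Init containers run sequentially to completion before any app container starts, regardless of restartPolicy; with the default Always policy (the "RestartAlways pod" in the spec name) the main container then keeps running. A stand-alone sketch (image and commands illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: init-demo
  spec:
    initContainers:                # each must exit 0, in order
    - name: init-1
      image: busybox:1.29
      command: ["sh", "-c", "echo init-1 done"]
    - name: init-2
      image: busybox:1.29
      command: ["sh", "-c", "echo init-2 done"]
    containers:                    # started only after both inits succeed
    - name: app
      image: busybox:1.29
      command: ["sh", "-c", "sleep 3600"]
  EOF
  kubectl get pod init-demo        # READY 1/1 once the inits have finished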
• [SLOW TEST:7.535 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":156,"skipped":2329,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:53:16.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all Mar 20 21:53:16.996: INFO: Waiting up to 5m0s for pod "client-containers-868a55ff-86c3-49bd-a79e-cfc08869dc27" in namespace "containers-4757" to be "success or failure" Mar 20 21:53:16.999: INFO: Pod "client-containers-868a55ff-86c3-49bd-a79e-cfc08869dc27": Phase="Pending", Reason="", readiness=false. Elapsed: 3.219695ms Mar 20 21:53:19.003: INFO: Pod "client-containers-868a55ff-86c3-49bd-a79e-cfc08869dc27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006618327s Mar 20 21:53:21.007: INFO: Pod "client-containers-868a55ff-86c3-49bd-a79e-cfc08869dc27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010956561s STEP: Saw pod success Mar 20 21:53:21.007: INFO: Pod "client-containers-868a55ff-86c3-49bd-a79e-cfc08869dc27" satisfied condition "success or failure" Mar 20 21:53:21.010: INFO: Trying to get logs from node jerma-worker2 pod client-containers-868a55ff-86c3-49bd-a79e-cfc08869dc27 container test-container: STEP: delete the pod Mar 20 21:53:21.025: INFO: Waiting for pod client-containers-868a55ff-86c3-49bd-a79e-cfc08869dc27 to disappear Mar 20 21:53:21.029: INFO: Pod client-containers-868a55ff-86c3-49bd-a79e-cfc08869dc27 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:53:21.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4757" for this suite. 
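The override this Docker Containers spec checks maps directly onto the container spec: command replaces the image's ENTRYPOINT and args replaces its CMD. A sketch:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: override-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox:1.29
      command: ["/bin/echo"]             # overrides ENTRYPOINT
      args: ["overridden", "arguments"]  # overrides CMD
  EOF
  kubectl logs override-demo             # prints: overridden arguments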
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":157,"skipped":2360,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:53:21.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Mar 20 21:53:21.154: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:53:37.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3295" for this suite. • [SLOW TEST:16.526 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":158,"skipped":2406,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:53:37.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6085.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6085.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 20 21:53:43.731: INFO: DNS probes using dns-6085/dns-test-12e95ffc-0004-40d1-9589-5623d1fb60f7 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:53:43.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6085" for this suite. • [SLOW TEST:6.216 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":159,"skipped":2429,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:53:43.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:53:49.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1595" for this suite. 
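Adoption in the ReplicationController spec above means the controller-manager sets itself as owner of a pre-existing pod that matches the rc's selector, instead of creating a fresh replica. A sketch (names, label, and image illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-adoption
    labels:
      name: pod-adoption
  spec:
    containers:
    - name: c
      image: busybox:1.29
      command: ["sleep", "3600"]
  ---
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: pod-adoption
  spec:
    replicas: 1
    selector:
      name: pod-adoption
    template:
      metadata:
        labels:
          name: pod-adoption
      spec:
        containers:
        - name: c
          image: busybox:1.29
          command: ["sleep", "3600"]
  EOF

  # The formerly bare pod now carries an ownerReference to the rc
  kubectl get pod pod-adoption \
    -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'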
• [SLOW TEST:5.301 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":160,"skipped":2450,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:53:49.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 20 21:53:49.665: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 20 21:53:51.674: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720338029, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720338029, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720338029, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720338029, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 20 21:53:53.678: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720338029, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720338029, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720338029, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720338029, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, 
CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 20 21:53:56.716: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:53:56.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2850" for this suite. STEP: Destroying namespace "webhook-2850-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.718 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":161,"skipped":2461,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:53:56.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-7499 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-7499 Mar 20 21:53:56.963: INFO: Found 0 stateful pods, waiting for 1 Mar 20 21:54:06.967: INFO: Waiting for pod ss-0 to enter Running - 
Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 20 21:54:06.995: INFO: Deleting all statefulset in ns statefulset-7499 Mar 20 21:54:07.001: INFO: Scaling statefulset ss to 0 Mar 20 21:54:37.101: INFO: Waiting for statefulset status.replicas updated to 0 Mar 20 21:54:37.104: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:54:37.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7499" for this suite. • [SLOW TEST:40.310 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":162,"skipped":2479,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:54:37.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Mar 20 21:54:37.201: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2786 /api/v1/namespaces/watch-2786/configmaps/e2e-watch-test-watch-closed 3e3b3d7d-ef01-4d4f-9d6e-6c5d73024542 1391092 0 2020-03-20 21:54:37 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 20 21:54:37.201: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2786 /api/v1/namespaces/watch-2786/configmaps/e2e-watch-test-watch-closed 3e3b3d7d-ef01-4d4f-9d6e-6c5d73024542 1391093 0 2020-03-20 21:54:37 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: 
deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Mar 20 21:54:37.246: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2786 /api/v1/namespaces/watch-2786/configmaps/e2e-watch-test-watch-closed 3e3b3d7d-ef01-4d4f-9d6e-6c5d73024542 1391094 0 2020-03-20 21:54:37 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 20 21:54:37.246: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2786 /api/v1/namespaces/watch-2786/configmaps/e2e-watch-test-watch-closed 3e3b3d7d-ef01-4d4f-9d6e-6c5d73024542 1391095 0 2020-03-20 21:54:37 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:54:37.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2786" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":163,"skipped":2513,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:54:37.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Getting the pod STEP: Reading file content from the nginx-container Mar 20 21:54:41.335: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-6007 PodName:pod-sharedvolume-7ffc4fa8-bd7a-4e43-ba17-e56494da3ec8 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 20 21:54:41.335: INFO: >>> kubeConfig: /root/.kube/config I0320 21:54:41.366632 7 log.go:172] (0xc0027706e0) (0xc000d59180) Create stream I0320 21:54:41.366670 7 log.go:172] (0xc0027706e0) (0xc000d59180) Stream added, broadcasting: 1 I0320 21:54:41.369043 7 log.go:172] (0xc0027706e0) Reply frame received for 1 I0320 21:54:41.369103 7 log.go:172] (0xc0027706e0) (0xc000d594a0) Create stream I0320 21:54:41.369283 7 log.go:172] (0xc0027706e0) (0xc000d594a0) Stream added, broadcasting: 3 I0320 21:54:41.370291 7 log.go:172] (0xc0027706e0) Reply frame received for 3 I0320 21:54:41.370319 7 log.go:172] (0xc0027706e0) (0xc001092460) Create stream I0320 21:54:41.370330 7 log.go:172] (0xc0027706e0) (0xc001092460) Stream added, broadcasting: 5 I0320 21:54:41.371181 7 log.go:172] (0xc0027706e0) Reply frame received for 5 I0320 21:54:41.423973 7 log.go:172] (0xc0027706e0) Data frame received for 5 I0320 21:54:41.424013 7 log.go:172] (0xc001092460) (5)
Data frame handling I0320 21:54:41.424038 7 log.go:172] (0xc0027706e0) Data frame received for 3 I0320 21:54:41.424052 7 log.go:172] (0xc000d594a0) (3) Data frame handling I0320 21:54:41.424067 7 log.go:172] (0xc000d594a0) (3) Data frame sent I0320 21:54:41.424081 7 log.go:172] (0xc0027706e0) Data frame received for 3 I0320 21:54:41.424095 7 log.go:172] (0xc000d594a0) (3) Data frame handling I0320 21:54:41.426075 7 log.go:172] (0xc0027706e0) Data frame received for 1 I0320 21:54:41.426113 7 log.go:172] (0xc000d59180) (1) Data frame handling I0320 21:54:41.426150 7 log.go:172] (0xc000d59180) (1) Data frame sent I0320 21:54:41.426180 7 log.go:172] (0xc0027706e0) (0xc000d59180) Stream removed, broadcasting: 1 I0320 21:54:41.426259 7 log.go:172] (0xc0027706e0) Go away received I0320 21:54:41.426307 7 log.go:172] (0xc0027706e0) (0xc000d59180) Stream removed, broadcasting: 1 I0320 21:54:41.426349 7 log.go:172] (0xc0027706e0) (0xc000d594a0) Stream removed, broadcasting: 3 I0320 21:54:41.426377 7 log.go:172] (0xc0027706e0) (0xc001092460) Stream removed, broadcasting: 5 Mar 20 21:54:41.426: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:54:41.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6007" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":164,"skipped":2522,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:54:41.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 20 21:54:41.519: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6105d011-ba71-4674-8027-fa0b5c7bf89c" in namespace "downward-api-4351" to be "success or failure" Mar 20 21:54:41.522: INFO: Pod "downwardapi-volume-6105d011-ba71-4674-8027-fa0b5c7bf89c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.877347ms Mar 20 21:54:43.527: INFO: Pod "downwardapi-volume-6105d011-ba71-4674-8027-fa0b5c7bf89c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008259418s Mar 20 21:54:45.531: INFO: Pod "downwardapi-volume-6105d011-ba71-4674-8027-fa0b5c7bf89c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012134275s STEP: Saw pod success Mar 20 21:54:45.531: INFO: Pod "downwardapi-volume-6105d011-ba71-4674-8027-fa0b5c7bf89c" satisfied condition "success or failure" Mar 20 21:54:45.534: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-6105d011-ba71-4674-8027-fa0b5c7bf89c container client-container: STEP: delete the pod Mar 20 21:54:45.554: INFO: Waiting for pod downwardapi-volume-6105d011-ba71-4674-8027-fa0b5c7bf89c to disappear Mar 20 21:54:45.558: INFO: Pod downwardapi-volume-6105d011-ba71-4674-8027-fa0b5c7bf89c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:54:45.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4351" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":165,"skipped":2540,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:54:45.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 20 21:54:45.643: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:54:46.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8814" for this suite. 
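For context on the "status sub-resource" exercised by the test above: when an object has a status sub-resource, reads and writes to .status go through a dedicated /status endpoint, separate from the main object. A minimal hand-driven sketch via kubectl proxy and curl, assuming an existing CRD named noxus.mygroup.example.com (the CRD name and the condition payload are hypothetical, not taken from this run):

    # Authenticated proxy to the API server (uses ~/.kube/config).
    kubectl proxy --port=8001 &

    # Read only the status sub-resource of the CRD.
    curl http://127.0.0.1:8001/apis/apiextensions.k8s.io/v1/customresourcedefinitions/noxus.mygroup.example.com/status

    # Patch through /status: a merge patch against this endpoint can only touch .status, never .spec.
    curl -X PATCH -H 'Content-Type: application/merge-patch+json' \
      -d '{"status":{"conditions":[{"type":"Demo","status":"True","reason":"ByHand","message":"patched via /status"}]}}' \
      http://127.0.0.1:8001/apis/apiextensions.k8s.io/v1/customresourcedefinitions/noxus.mygroup.example.com/status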
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":166,"skipped":2550,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:54:46.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1692 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 20 21:54:46.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-6290' Mar 20 21:54:46.423: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 20 21:54:46.423: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created Mar 20 21:54:46.435: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Mar 20 21:54:46.451: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Mar 20 21:54:46.459: INFO: scanned /root for discovery docs: Mar 20 21:54:46.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-6290' Mar 20 21:55:02.312: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 20 21:55:02.312: INFO: stdout: "Created e2e-test-httpd-rc-69376b3665c3541019bd92f210e64be1\nScaling up e2e-test-httpd-rc-69376b3665c3541019bd92f210e64be1 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-69376b3665c3541019bd92f210e64be1 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-69376b3665c3541019bd92f210e64be1 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" Mar 20 21:55:02.312: INFO: stdout: "Created e2e-test-httpd-rc-69376b3665c3541019bd92f210e64be1\nScaling up e2e-test-httpd-rc-69376b3665c3541019bd92f210e64be1 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-69376b3665c3541019bd92f210e64be1 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-69376b3665c3541019bd92f210e64be1 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Mar 20 21:55:02.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-6290' Mar 20 21:55:02.405: INFO: stderr: "" Mar 20 21:55:02.405: INFO: stdout: "e2e-test-httpd-rc-69376b3665c3541019bd92f210e64be1-pcjxh " Mar 20 21:55:02.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-69376b3665c3541019bd92f210e64be1-pcjxh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6290' Mar 20 21:55:02.496: INFO: stderr: "" Mar 20 21:55:02.496: INFO: stdout: "true" Mar 20 21:55:02.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-69376b3665c3541019bd92f210e64be1-pcjxh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6290' Mar 20 21:55:02.598: INFO: stderr: "" Mar 20 21:55:02.598: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Mar 20 21:55:02.598: INFO: e2e-test-httpd-rc-69376b3665c3541019bd92f210e64be1-pcjxh is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1698 Mar 20 21:55:02.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-6290' Mar 20 21:55:02.731: INFO: stderr: "" Mar 20 21:55:02.731: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:55:02.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6290" for this suite. 
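The stderr captured above already warns that kubectl rolling-update is deprecated. A sketch of the modern equivalent using a Deployment instead of a bare ReplicationController (the deployment name httpd is hypothetical); kubectl rollout restart reproduces the "rolling-update to same image" behaviour, since a Deployment will not re-roll on an unchanged pod template by itself:

    # Deprecated pattern exercised by the test:
    #   kubectl run ... --generator=run/v1        (creates a ReplicationController)
    #   kubectl rolling-update <rc> --image=<same image>

    # Modern equivalent:
    kubectl create deployment httpd --image=docker.io/library/httpd:2.4.38-alpine
    kubectl rollout restart deployment/httpd     # forces a new rollout even with an unchanged image
    kubectl rollout status deployment/httpd      # waits until the new ReplicaSet is fully available
    kubectl rollout history deployment/httpd     # revisions replace the old RC rename dance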
• [SLOW TEST:16.500 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1687 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":167,"skipped":2558,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:55:02.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-657e6e08-44a9-4a07-adb7-f24d9d33fa08 STEP: Creating a pod to test consume configMaps Mar 20 21:55:02.812: INFO: Waiting up to 5m0s for pod "pod-configmaps-f6617a7a-9a35-46e4-b01f-49294385a152" in namespace "configmap-9406" to be "success or failure" Mar 20 21:55:02.849: INFO: Pod "pod-configmaps-f6617a7a-9a35-46e4-b01f-49294385a152": Phase="Pending", Reason="", readiness=false. Elapsed: 37.28463ms Mar 20 21:55:04.853: INFO: Pod "pod-configmaps-f6617a7a-9a35-46e4-b01f-49294385a152": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041340428s Mar 20 21:55:06.858: INFO: Pod "pod-configmaps-f6617a7a-9a35-46e4-b01f-49294385a152": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045434648s STEP: Saw pod success Mar 20 21:55:06.858: INFO: Pod "pod-configmaps-f6617a7a-9a35-46e4-b01f-49294385a152" satisfied condition "success or failure" Mar 20 21:55:06.867: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-f6617a7a-9a35-46e4-b01f-49294385a152 container configmap-volume-test: STEP: delete the pod Mar 20 21:55:06.883: INFO: Waiting for pod pod-configmaps-f6617a7a-9a35-46e4-b01f-49294385a152 to disappear Mar 20 21:55:06.888: INFO: Pod pod-configmaps-f6617a7a-9a35-46e4-b01f-49294385a152 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:55:06.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9406" for this suite. 
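A minimal sketch of the pod shape this ConfigMap test exercises: a ConfigMap mounted as a volume, with items remapping a key to a chosen relative path, read by a container running as a non-root UID. All names and the UID below are illustrative, not taken from the run:

    kubectl create configmap cm-demo --from-literal=data-1=value-1

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: cm-volume-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000              # non-root, the [LinuxOnly] part of the test
      containers:
      - name: configmap-volume-test
        image: docker.io/library/busybox
        command: ["sh", "-c", "cat /etc/cm/path/to/data-1"]
        volumeMounts:
        - name: cm
          mountPath: /etc/cm
      volumes:
      - name: cm
        configMap:
          name: cm-demo
          items:                     # the "mappings": key -> relative path in the volume
          - key: data-1
            path: path/to/data-1
    EOF

    kubectl logs cm-volume-demo      # should print: value-1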
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":2559,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:55:06.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 20 21:55:07.004: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 20 21:55:09.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8657 create -f -' Mar 20 21:55:13.372: INFO: stderr: "" Mar 20 21:55:13.372: INFO: stdout: "e2e-test-crd-publish-openapi-1160-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 20 21:55:13.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8657 delete e2e-test-crd-publish-openapi-1160-crds test-cr' Mar 20 21:55:13.473: INFO: stderr: "" Mar 20 21:55:13.473: INFO: stdout: "e2e-test-crd-publish-openapi-1160-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Mar 20 21:55:13.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8657 apply -f -' Mar 20 21:55:13.717: INFO: stderr: "" Mar 20 21:55:13.717: INFO: stdout: "e2e-test-crd-publish-openapi-1160-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 20 21:55:13.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8657 delete e2e-test-crd-publish-openapi-1160-crds test-cr' Mar 20 21:55:13.813: INFO: stderr: "" Mar 20 21:55:13.813: INFO: stdout: "e2e-test-crd-publish-openapi-1160-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 20 21:55:13.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1160-crds' Mar 20 21:55:14.029: INFO: stderr: "" Mar 20 21:55:14.029: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1160-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. 
Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:55:15.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8657" for this suite. • [SLOW TEST:9.000 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":169,"skipped":2567,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:55:15.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components Mar 20 21:55:15.956: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Mar 20 21:55:15.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3158' Mar 20 21:55:16.272: INFO: stderr: "" Mar 20 21:55:16.272: INFO: stdout: "service/agnhost-slave created\n" Mar 20 21:55:16.272: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Mar 20 21:55:16.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3158' Mar 20 21:55:16.534: INFO: stderr: "" Mar 20 21:55:16.534: INFO: stdout: "service/agnhost-master created\n" Mar 20 21:55:16.534: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Mar 20 21:55:16.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3158' Mar 20 21:55:16.779: INFO: stderr: "" Mar 20 21:55:16.779: INFO: stdout: "service/frontend created\n" Mar 20 21:55:16.780: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Mar 20 21:55:16.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3158' Mar 20 21:55:17.026: INFO: stderr: "" Mar 20 21:55:17.026: INFO: stdout: "deployment.apps/frontend created\n" Mar 20 21:55:17.026: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 20 21:55:17.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3158' Mar 20 21:55:17.263: INFO: stderr: "" Mar 20 21:55:17.263: INFO: stdout: "deployment.apps/agnhost-master created\n" Mar 20 21:55:17.263: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 20 21:55:17.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3158' Mar 20 21:55:17.516: INFO: stderr: "" Mar 20 21:55:17.516: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Mar 20 21:55:17.516: INFO: Waiting for all frontend pods to be Running. Mar 20 21:55:27.567: INFO: Waiting for frontend to serve content. Mar 20 21:55:27.577: INFO: Trying to add a new entry to the guestbook. Mar 20 21:55:27.586: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Mar 20 21:55:27.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3158' Mar 20 21:55:27.731: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 20 21:55:27.731: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Mar 20 21:55:27.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3158' Mar 20 21:55:27.926: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 20 21:55:27.926: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 20 21:55:27.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3158' Mar 20 21:55:28.061: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 20 21:55:28.061: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 20 21:55:28.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3158' Mar 20 21:55:28.162: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 20 21:55:28.162: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 20 21:55:28.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3158' Mar 20 21:55:28.270: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 20 21:55:28.270: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 20 21:55:28.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3158' Mar 20 21:55:28.674: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 20 21:55:28.674: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:55:28.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3158" for this suite. 
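The guestbook validation step above ("Waiting for frontend to serve content") can be reproduced by hand. A sketch, assuming the services and deployments created earlier still exist in namespace kubectl-3158:

    kubectl -n kubectl-3158 get pods -l app=guestbook            # all frontend pods should be Running
    kubectl -n kubectl-3158 port-forward service/frontend 8080:80 &
    curl -s http://127.0.0.1:8080/                               # the guestbook front page

    # Cleanup pattern used by the test: skip graceful termination entirely.
    kubectl -n kubectl-3158 delete deployment frontend --grace-period=0 --force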
• [SLOW TEST:12.835 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:386 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":170,"skipped":2574,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:55:28.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 20 21:55:29.030: INFO: Waiting up to 5m0s for pod "downwardapi-volume-39fa3891-8389-4e65-899a-b20822095cc0" in namespace "projected-2419" to be "success or failure" Mar 20 21:55:29.156: INFO: Pod "downwardapi-volume-39fa3891-8389-4e65-899a-b20822095cc0": Phase="Pending", Reason="", readiness=false. Elapsed: 125.117039ms Mar 20 21:55:31.159: INFO: Pod "downwardapi-volume-39fa3891-8389-4e65-899a-b20822095cc0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128592898s Mar 20 21:55:33.163: INFO: Pod "downwardapi-volume-39fa3891-8389-4e65-899a-b20822095cc0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.132828226s STEP: Saw pod success Mar 20 21:55:33.163: INFO: Pod "downwardapi-volume-39fa3891-8389-4e65-899a-b20822095cc0" satisfied condition "success or failure" Mar 20 21:55:33.167: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-39fa3891-8389-4e65-899a-b20822095cc0 container client-container: STEP: delete the pod Mar 20 21:55:33.200: INFO: Waiting for pod downwardapi-volume-39fa3891-8389-4e65-899a-b20822095cc0 to disappear Mar 20 21:55:33.227: INFO: Pod downwardapi-volume-39fa3891-8389-4e65-899a-b20822095cc0 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:55:33.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2419" for this suite. 
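For reference, the "podname only" case corresponds to a projected volume with a single downwardAPI item. A minimal sketch (pod and path names are illustrative):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: docker.io/library/busybox
        command: ["sh", "-c", "cat /etc/podinfo/podname"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: podname
                fieldRef:
                  fieldPath: metadata.name   # resolves to the pod's own name
    EOF

    kubectl logs downward-demo               # should print: downward-demo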
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":2584,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:55:33.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container Mar 20 21:55:37.810: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5728 pod-service-account-ac2f6391-ffac-4787-9e2f-8f5afe282368 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Mar 20 21:55:38.027: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5728 pod-service-account-ac2f6391-ffac-4787-9e2f-8f5afe282368 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Mar 20 21:55:38.231: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5728 pod-service-account-ac2f6391-ffac-4787-9e2f-8f5afe282368 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:55:38.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5728" for this suite. 
• [SLOW TEST:5.248 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":172,"skipped":2612,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:55:38.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium Mar 20 21:55:38.545: INFO: Waiting up to 5m0s for pod "pod-c9b52a91-4e88-4bf1-83e4-372a15098ddc" in namespace "emptydir-9259" to be "success or failure" Mar 20 21:55:38.549: INFO: Pod "pod-c9b52a91-4e88-4bf1-83e4-372a15098ddc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.756296ms Mar 20 21:55:40.554: INFO: Pod "pod-c9b52a91-4e88-4bf1-83e4-372a15098ddc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008304488s Mar 20 21:55:42.558: INFO: Pod "pod-c9b52a91-4e88-4bf1-83e4-372a15098ddc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012459033s STEP: Saw pod success Mar 20 21:55:42.558: INFO: Pod "pod-c9b52a91-4e88-4bf1-83e4-372a15098ddc" satisfied condition "success or failure" Mar 20 21:55:42.560: INFO: Trying to get logs from node jerma-worker2 pod pod-c9b52a91-4e88-4bf1-83e4-372a15098ddc container test-container: STEP: delete the pod Mar 20 21:55:42.602: INFO: Waiting for pod pod-c9b52a91-4e88-4bf1-83e4-372a15098ddc to disappear Mar 20 21:55:42.622: INFO: Pod pod-c9b52a91-4e88-4bf1-83e4-372a15098ddc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:55:42.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9259" for this suite. 
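What "volume on default medium should have the correct mode" checks is the permission bits on the emptyDir mount point itself. A sketch that surfaces the same information (names illustrative):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: docker.io/library/busybox
        command: ["sh", "-c", "ls -ld /test-volume"]   # prints the mount's mode bits
        volumeMounts:
        - name: scratch
          mountPath: /test-volume
      volumes:
      - name: scratch
        emptyDir: {}        # default medium: node-local disk; medium: Memory would use tmpfs
    EOF

    kubectl logs emptydir-demo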
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":173,"skipped":2614,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:55:42.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 20 21:55:42.720: INFO: Waiting up to 5m0s for pod "downwardapi-volume-66927ee7-32f6-4f19-a70c-39ca775e8914" in namespace "projected-2256" to be "success or failure" Mar 20 21:55:42.723: INFO: Pod "downwardapi-volume-66927ee7-32f6-4f19-a70c-39ca775e8914": Phase="Pending", Reason="", readiness=false. Elapsed: 3.401979ms Mar 20 21:55:44.727: INFO: Pod "downwardapi-volume-66927ee7-32f6-4f19-a70c-39ca775e8914": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007101973s Mar 20 21:55:46.731: INFO: Pod "downwardapi-volume-66927ee7-32f6-4f19-a70c-39ca775e8914": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011280676s STEP: Saw pod success Mar 20 21:55:46.731: INFO: Pod "downwardapi-volume-66927ee7-32f6-4f19-a70c-39ca775e8914" satisfied condition "success or failure" Mar 20 21:55:46.734: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-66927ee7-32f6-4f19-a70c-39ca775e8914 container client-container: STEP: delete the pod Mar 20 21:55:46.755: INFO: Waiting for pod downwardapi-volume-66927ee7-32f6-4f19-a70c-39ca775e8914 to disappear Mar 20 21:55:46.775: INFO: Pod downwardapi-volume-66927ee7-32f6-4f19-a70c-39ca775e8914 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:55:46.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2256" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":174,"skipped":2632,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:55:46.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-56d334ca-50b3-4a68-af17-bb8ada782657 STEP: Creating a pod to test consume secrets Mar 20 21:55:46.875: INFO: Waiting up to 5m0s for pod "pod-secrets-01f0f6c5-9266-41e7-9a0e-a1db57ef5cc3" in namespace "secrets-9788" to be "success or failure" Mar 20 21:55:46.885: INFO: Pod "pod-secrets-01f0f6c5-9266-41e7-9a0e-a1db57ef5cc3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.205199ms Mar 20 21:55:48.888: INFO: Pod "pod-secrets-01f0f6c5-9266-41e7-9a0e-a1db57ef5cc3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013730839s Mar 20 21:55:50.898: INFO: Pod "pod-secrets-01f0f6c5-9266-41e7-9a0e-a1db57ef5cc3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023676231s STEP: Saw pod success Mar 20 21:55:50.898: INFO: Pod "pod-secrets-01f0f6c5-9266-41e7-9a0e-a1db57ef5cc3" satisfied condition "success or failure" Mar 20 21:55:50.901: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-01f0f6c5-9266-41e7-9a0e-a1db57ef5cc3 container secret-env-test: STEP: delete the pod Mar 20 21:55:50.916: INFO: Waiting for pod pod-secrets-01f0f6c5-9266-41e7-9a0e-a1db57ef5cc3 to disappear Mar 20 21:55:50.920: INFO: Pod pod-secrets-01f0f6c5-9266-41e7-9a0e-a1db57ef5cc3 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:55:50.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9788" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":2663,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:55:50.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 20 21:55:51.018: INFO: Create a RollingUpdate DaemonSet Mar 20 21:55:51.022: INFO: Check that daemon pods launch on every node of the cluster Mar 20 21:55:51.035: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:55:51.046: INFO: Number of nodes with available pods: 0 Mar 20 21:55:51.046: INFO: Node jerma-worker is running more than one daemon pod Mar 20 21:55:52.066: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:55:52.068: INFO: Number of nodes with available pods: 0 Mar 20 21:55:52.068: INFO: Node jerma-worker is running more than one daemon pod Mar 20 21:55:53.095: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:55:53.099: INFO: Number of nodes with available pods: 0 Mar 20 21:55:53.099: INFO: Node jerma-worker is running more than one daemon pod Mar 20 21:55:54.051: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:55:54.055: INFO: Number of nodes with available pods: 0 Mar 20 21:55:54.055: INFO: Node jerma-worker is running more than one daemon pod Mar 20 21:55:55.061: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:55:55.077: INFO: Number of nodes with available pods: 2 Mar 20 21:55:55.077: INFO: Number of running nodes: 2, number of available pods: 2 Mar 20 21:55:55.077: INFO: Update the DaemonSet to trigger a rollout Mar 20 21:55:55.083: INFO: Updating DaemonSet daemon-set Mar 20 21:56:10.101: INFO: Roll back the DaemonSet before rollout is complete Mar 20 21:56:10.106: INFO: Updating DaemonSet daemon-set Mar 20 21:56:10.106: INFO: Make sure DaemonSet rollback is complete Mar 20 21:56:10.120: INFO: Wrong image for pod: daemon-set-bzpbj. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Mar 20 21:56:10.120: INFO: Pod daemon-set-bzpbj is not available Mar 20 21:56:10.158: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:56:11.162: INFO: Wrong image for pod: daemon-set-bzpbj. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Mar 20 21:56:11.162: INFO: Pod daemon-set-bzpbj is not available Mar 20 21:56:11.165: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 20 21:56:12.173: INFO: Pod daemon-set-5zzbg is not available Mar 20 21:56:12.180: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4396, will wait for the garbage collector to delete the pods Mar 20 21:56:12.253: INFO: Deleting DaemonSet.extensions daemon-set took: 15.65691ms Mar 20 21:56:12.554: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.263061ms Mar 20 21:56:19.562: INFO: Number of nodes with available pods: 0 Mar 20 21:56:19.562: INFO: Number of running nodes: 0, number of available pods: 0 Mar 20 21:56:19.565: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4396/daemonsets","resourceVersion":"1392011"},"items":null} Mar 20 21:56:19.568: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4396/pods","resourceVersion":"1392011"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:56:19.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4396" for this suite. 
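The rollback the test performs through the API has a direct kubectl equivalent. A sketch against a DaemonSet like the one above (the container name app is hypothetical; the images are the ones from the run):

    # Trigger a rollout to a broken image, as the test does:
    kubectl -n daemonsets-4396 set image daemonset/daemon-set app=foo:non-existent

    # Roll back before the rollout completes; pods that were never touched stay running,
    # which is the "without unnecessary restarts" property being verified.
    kubectl -n daemonsets-4396 rollout undo daemonset/daemon-set
    kubectl -n daemonsets-4396 rollout status daemonset/daemon-set
    kubectl -n daemonsets-4396 rollout history daemonset/daemon-set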
• [SLOW TEST:28.657 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":176,"skipped":2704,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:56:19.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 20 21:56:19.641: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 20 21:56:21.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4906 create -f -' Mar 20 21:56:24.328: INFO: stderr: "" Mar 20 21:56:24.328: INFO: stdout: "e2e-test-crd-publish-openapi-7558-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 20 21:56:24.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4906 delete e2e-test-crd-publish-openapi-7558-crds test-cr' Mar 20 21:56:24.428: INFO: stderr: "" Mar 20 21:56:24.428: INFO: stdout: "e2e-test-crd-publish-openapi-7558-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Mar 20 21:56:24.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4906 apply -f -' Mar 20 21:56:24.748: INFO: stderr: "" Mar 20 21:56:24.748: INFO: stdout: "e2e-test-crd-publish-openapi-7558-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 20 21:56:24.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4906 delete e2e-test-crd-publish-openapi-7558-crds test-cr' Mar 20 21:56:24.865: INFO: stderr: "" Mar 20 21:56:24.865: INFO: stdout: "e2e-test-crd-publish-openapi-7558-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Mar 20 21:56:24.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7558-crds' Mar 20 21:56:25.087: INFO: stderr: "" Mar 20 21:56:25.087: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7558-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:56:26.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "crd-publish-openapi-4906" for this suite. • [SLOW TEST:7.373 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":177,"skipped":2714,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:56:26.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-47641819-1f72-47d3-bf67-f183dd90afa6 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:56:27.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-124" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":178,"skipped":2727,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:56:27.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 20 21:56:27.097: INFO: Waiting up to 5m0s for pod "downwardapi-volume-262c470f-2378-4a06-9af5-ee45cbb158f5" in namespace "projected-3948" to be "success or failure" Mar 20 21:56:27.114: INFO: Pod "downwardapi-volume-262c470f-2378-4a06-9af5-ee45cbb158f5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.441923ms Mar 20 21:56:29.117: INFO: Pod "downwardapi-volume-262c470f-2378-4a06-9af5-ee45cbb158f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020319582s Mar 20 21:56:31.122: INFO: Pod "downwardapi-volume-262c470f-2378-4a06-9af5-ee45cbb158f5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024653613s STEP: Saw pod success Mar 20 21:56:31.122: INFO: Pod "downwardapi-volume-262c470f-2378-4a06-9af5-ee45cbb158f5" satisfied condition "success or failure" Mar 20 21:56:31.125: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-262c470f-2378-4a06-9af5-ee45cbb158f5 container client-container: STEP: delete the pod Mar 20 21:56:31.163: INFO: Waiting for pod downwardapi-volume-262c470f-2378-4a06-9af5-ee45cbb158f5 to disappear Mar 20 21:56:31.167: INFO: Pod downwardapi-volume-262c470f-2378-4a06-9af5-ee45cbb158f5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 21:56:31.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3948" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":2744,"failed":0} SS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 21:56:31.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-c79cd40b-2214-4d94-aa3d-3dd0209c04de in namespace container-probe-4861 Mar 20 21:56:35.256: INFO: Started pod test-webserver-c79cd40b-2214-4d94-aa3d-3dd0209c04de in namespace container-probe-4861 STEP: checking the pod's current state and verifying that restartCount is present Mar 20 21:56:35.259: INFO: Initial restart count of pod test-webserver-c79cd40b-2214-4d94-aa3d-3dd0209c04de is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:00:35.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4861" for this suite. 
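[Editor's note] The probe test above runs ~244 s simply because it watches restartCount over a four-minute window and asserts it stays 0. A minimal sketch of such a pod follows; the image, names, and timings are illustrative assumptions, not the suite's actual pod:

# Hypothetical equivalent of the "should *not* be restarted" scenario.
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver-demo
spec:
  containers:
  - name: test-webserver
    image: nginx:1.17            # illustrative; any server answering 200 works
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /                  # endpoint that keeps returning 200
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 3        # kubelet restarts only after 3 consecutive failures
# Expectation: status.containerStatuses[0].restartCount stays 0, as asserted above.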
• [SLOW TEST:244.685 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":2746,"failed":0} S ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:00:35.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-b4aa82a1-88ef-4d28-b0ed-89766b3018eb in namespace container-probe-4160 Mar 20 22:00:40.008: INFO: Started pod liveness-b4aa82a1-88ef-4d28-b0ed-89766b3018eb in namespace container-probe-4160 STEP: checking the pod's current state and verifying that restartCount is present Mar 20 22:00:40.010: INFO: Initial restart count of pod liveness-b4aa82a1-88ef-4d28-b0ed-89766b3018eb is 0 Mar 20 22:01:02.058: INFO: Restart count of pod container-probe-4160/liveness-b4aa82a1-88ef-4d28-b0ed-89766b3018eb is now 1 (22.047689348s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:01:02.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4160" for this suite. 
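[Editor's note] The restart after ~22 s fits the usual probe arithmetic: an initially healthy endpoint, then failureThreshold consecutive failed probes. A sketch using the canonical liveness image from the Kubernetes docs (an assumption here, not the suite's own pod), whose /healthz starts returning 500 after about ten seconds:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness   # docs example image; /healthz fails after ~10 s
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3           # ~10 s healthy + 3 failures x 3 s roughly matches the ~22 s above
# Expectation: kubelet kills and restarts the container; restartCount goes 0 -> 1.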
• [SLOW TEST:26.248 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":2747,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:01:02.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 20 22:01:02.833: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 20 22:01:04.843: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720338462, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720338462, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720338463, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720338462, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 20 22:01:07.891: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 20 22:01:07.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2730-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:01:09.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2271" for this suite. STEP: Destroying namespace "webhook-2271-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.088 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":182,"skipped":2756,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:01:09.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 20 22:01:09.291: INFO: Waiting up to 5m0s for pod "pod-e8a21394-396f-4dea-8eaa-7e4a5bd920d7" in namespace "emptydir-681" to be "success or failure" Mar 20 22:01:09.294: INFO: Pod "pod-e8a21394-396f-4dea-8eaa-7e4a5bd920d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.922063ms Mar 20 22:01:11.315: INFO: Pod "pod-e8a21394-396f-4dea-8eaa-7e4a5bd920d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024234559s Mar 20 22:01:13.319: INFO: Pod "pod-e8a21394-396f-4dea-8eaa-7e4a5bd920d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027863788s STEP: Saw pod success Mar 20 22:01:13.319: INFO: Pod "pod-e8a21394-396f-4dea-8eaa-7e4a5bd920d7" satisfied condition "success or failure" Mar 20 22:01:13.322: INFO: Trying to get logs from node jerma-worker pod pod-e8a21394-396f-4dea-8eaa-7e4a5bd920d7 container test-container: STEP: delete the pod Mar 20 22:01:13.352: INFO: Waiting for pod pod-e8a21394-396f-4dea-8eaa-7e4a5bd920d7 to disappear Mar 20 22:01:13.356: INFO: Pod pod-e8a21394-396f-4dea-8eaa-7e4a5bd920d7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:01:13.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-681" for this suite. 
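[Editor's note] The emptyDir test follows the suite's usual "success or failure" pattern: run a short-lived pod, wait for phase Succeeded, then read its logs. A rough equivalent with busybox (the suite actually uses a dedicated mount-test image; this stand-in is an assumption):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo
spec:
  restartPolicy: Never           # let the pod run to completion
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["/bin/sh", "-c"]
    args:
    - touch /test-volume/f && chmod 0644 /test-volume/f && stat -c '%a %U' /test-volume/f
    # expected log output: "644 root"
    volumeMounts:
    - name: scratch
      mountPath: /test-volume
  volumes:
  - name: scratch
    emptyDir: {}                 # default medium = node-local disk, per the test name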
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":183,"skipped":2772,"failed":0} SSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:01:13.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:01:44.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9365" for this suite. STEP: Destroying namespace "nsdeletetest-9520" for this suite. Mar 20 22:01:44.590: INFO: Namespace nsdeletetest-9520 was already deleted STEP: Destroying namespace "nsdeletetest-1140" for this suite. 
• [SLOW TEST:31.230 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":184,"skipped":2776,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:01:44.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-786b7a04-ae0a-446f-a311-4c55f4aeab78 STEP: Creating a pod to test consume configMaps Mar 20 22:01:44.668: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cb50eefc-08f8-4c66-850b-e76143607fc1" in namespace "projected-2750" to be "success or failure" Mar 20 22:01:44.689: INFO: Pod "pod-projected-configmaps-cb50eefc-08f8-4c66-850b-e76143607fc1": Phase="Pending", Reason="", readiness=false. Elapsed: 20.913809ms Mar 20 22:01:46.693: INFO: Pod "pod-projected-configmaps-cb50eefc-08f8-4c66-850b-e76143607fc1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025021383s Mar 20 22:01:48.698: INFO: Pod "pod-projected-configmaps-cb50eefc-08f8-4c66-850b-e76143607fc1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029469158s STEP: Saw pod success Mar 20 22:01:48.698: INFO: Pod "pod-projected-configmaps-cb50eefc-08f8-4c66-850b-e76143607fc1" satisfied condition "success or failure" Mar 20 22:01:48.701: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-cb50eefc-08f8-4c66-850b-e76143607fc1 container projected-configmap-volume-test: STEP: delete the pod Mar 20 22:01:48.746: INFO: Waiting for pod pod-projected-configmaps-cb50eefc-08f8-4c66-850b-e76143607fc1 to disappear Mar 20 22:01:48.758: INFO: Pod pod-projected-configmaps-cb50eefc-08f8-4c66-850b-e76143607fc1 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:01:48.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2750" for this suite. 
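[Editor's note] Two details matter in the projected-configMap test: the key is remapped to a different path inside the volume (the "mappings" in the test name), and the reading container runs as a non-root user. A sketch, with runAsUser 1000 as an assumed stand-in for whatever UID the suite picks:

apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-cm-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000              # non-root reader
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
          items:
          - key: data-1
            path: path/to/data-1   # the "mapping": key exposed under a custom path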
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":2789,"failed":0} SS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:01:48.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Mar 20 22:01:48.833: INFO: Waiting up to 5m0s for pod "var-expansion-87ca865b-226a-4042-b8f1-4e76d4f0e79c" in namespace "var-expansion-703" to be "success or failure" Mar 20 22:01:48.836: INFO: Pod "var-expansion-87ca865b-226a-4042-b8f1-4e76d4f0e79c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.642728ms Mar 20 22:01:50.840: INFO: Pod "var-expansion-87ca865b-226a-4042-b8f1-4e76d4f0e79c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00733315s Mar 20 22:01:52.845: INFO: Pod "var-expansion-87ca865b-226a-4042-b8f1-4e76d4f0e79c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012017671s STEP: Saw pod success Mar 20 22:01:52.845: INFO: Pod "var-expansion-87ca865b-226a-4042-b8f1-4e76d4f0e79c" satisfied condition "success or failure" Mar 20 22:01:52.848: INFO: Trying to get logs from node jerma-worker pod var-expansion-87ca865b-226a-4042-b8f1-4e76d4f0e79c container dapi-container: STEP: delete the pod Mar 20 22:01:52.868: INFO: Waiting for pod var-expansion-87ca865b-226a-4042-b8f1-4e76d4f0e79c to disappear Mar 20 22:01:52.889: INFO: Pod var-expansion-87ca865b-226a-4042-b8f1-4e76d4f0e79c no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:01:52.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-703" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":2791,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:01:52.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:01:53.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-9541" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":187,"skipped":2804,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:01:53.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 20 22:02:01.218: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 20 22:02:01.238: INFO: Pod pod-with-prestop-http-hook still exists Mar 20 22:02:03.238: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 20 22:02:03.243: INFO: Pod pod-with-prestop-http-hook still exists Mar 20 22:02:05.238: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 20 22:02:05.243: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:02:05.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4079" for this suite. 
• [SLOW TEST:12.200 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":188,"skipped":2818,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:02:05.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 20 22:02:09.887: INFO: Successfully updated pod "labelsupdated2fe595d-4644-49f2-92be-a7a1f6549192" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:02:11.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8140" for this suite. 
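[Editor's note] The interesting property in the labels test is that a downward-API volume file is live: when the pod's labels change, kubelet rewrites the file (with some delay, hence the wait after "Successfully updated pod"). A sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo
  labels:
    key1: value1
spec:
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels   # file content tracks label edits
# e.g. "kubectl label pod labelsupdate-demo key2=value2" eventually shows up
# as a new line in /etc/podinfo/labels.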
• [SLOW TEST:6.667 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":189,"skipped":2831,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:02:11.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 20 22:02:16.550: INFO: Successfully updated pod "pod-update-activedeadlineseconds-5e9638f6-99ec-47a1-9186-4e45537fe40c" Mar 20 22:02:16.550: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-5e9638f6-99ec-47a1-9186-4e45537fe40c" in namespace "pods-8870" to be "terminated due to deadline exceeded" Mar 20 22:02:16.555: INFO: Pod "pod-update-activedeadlineseconds-5e9638f6-99ec-47a1-9186-4e45537fe40c": Phase="Running", Reason="", readiness=true. Elapsed: 5.284076ms Mar 20 22:02:18.559: INFO: Pod "pod-update-activedeadlineseconds-5e9638f6-99ec-47a1-9186-4e45537fe40c": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.009558996s Mar 20 22:02:18.560: INFO: Pod "pod-update-activedeadlineseconds-5e9638f6-99ec-47a1-9186-4e45537fe40c" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:02:18.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8870" for this suite. 
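[Editor's note] activeDeadlineSeconds is one of the few pod-spec fields that can be changed on a running pod, and in practice it can only be tightened; once the deadline passes, kubelet fails the pod with reason DeadlineExceeded, exactly as the log shows. A sketch (the suite starts with a long deadline and shortens it):

apiVersion: v1
kind: Pod
metadata:
  name: pod-update-activedeadlineseconds-demo
spec:
  activeDeadlineSeconds: 600     # generous initial deadline
  containers:
  - name: sleeper
    image: busybox:1.29
    command: ["sleep", "3600"]
# Shorten it on the live pod (illustrative patch):
#   kubectl patch pod pod-update-activedeadlineseconds-demo \
#     -p '{"spec":{"activeDeadlineSeconds":5}}'
# A few seconds later the pod goes Phase=Failed, Reason=DeadlineExceeded.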
• [SLOW TEST:6.643 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":2840,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:02:18.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 20 22:02:18.613: INFO: PodSpec: initContainers in spec.initContainers Mar 20 22:03:09.434: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-ac5c5407-6f7e-482f-b611-20f8f8f501a8", GenerateName:"", Namespace:"init-container-4583", SelfLink:"/api/v1/namespaces/init-container-4583/pods/pod-init-ac5c5407-6f7e-482f-b611-20f8f8f501a8", UID:"1d0efaf5-1b6e-4c1f-9038-a23a359078a6", ResourceVersion:"1393684", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63720338538, loc:(*time.Location)(0x7d83a80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"613678197"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-q7ms6", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00227a000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-q7ms6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-q7ms6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-q7ms6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002a82068), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), 
ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00211a000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002a820f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002a82110)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002a82118), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002a8211c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720338538, loc:(*time.Location)(0x7d83a80)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720338538, loc:(*time.Location)(0x7d83a80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720338538, loc:(*time.Location)(0x7d83a80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720338538, loc:(*time.Location)(0x7d83a80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.8", PodIP:"10.244.2.56", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.56"}}, StartTime:(*v1.Time)(0xc002bc4040), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc002bc4080), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0004fc070)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://1563b0e4f4af4bc7fbfe73b3de48f21830a456539f724dd681c41428ae535196", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002bc40a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002bc4060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc002a8219f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:03:09.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4583" for this suite. • [SLOW TEST:51.071 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":191,"skipped":2859,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:03:09.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-6854/configmap-test-59814ca9-8a24-4435-96bf-5a1f57a851e5 STEP: Creating a pod to test consume configMaps Mar 20 22:03:09.759: INFO: Waiting up to 5m0s for pod "pod-configmaps-ea5715ac-91c3-4952-8181-c96d1db8272e" in namespace "configmap-6854" to be "success or failure" Mar 20 22:03:09.762: INFO: Pod "pod-configmaps-ea5715ac-91c3-4952-8181-c96d1db8272e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.340894ms Mar 20 22:03:11.802: INFO: Pod "pod-configmaps-ea5715ac-91c3-4952-8181-c96d1db8272e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042462831s Mar 20 22:03:13.807: INFO: Pod "pod-configmaps-ea5715ac-91c3-4952-8181-c96d1db8272e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.047211384s STEP: Saw pod success Mar 20 22:03:13.807: INFO: Pod "pod-configmaps-ea5715ac-91c3-4952-8181-c96d1db8272e" satisfied condition "success or failure" Mar 20 22:03:13.810: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-ea5715ac-91c3-4952-8181-c96d1db8272e container env-test: STEP: delete the pod Mar 20 22:03:13.875: INFO: Waiting for pod pod-configmaps-ea5715ac-91c3-4952-8181-c96d1db8272e to disappear Mar 20 22:03:13.889: INFO: Pod pod-configmaps-ea5715ac-91c3-4952-8181-c96d1db8272e no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:03:13.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6854" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":2885,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:03:13.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy Mar 20 22:03:13.935: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix067808443/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:03:14.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3866" for this suite. 
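[Editor's note] Looping back to the ConfigMap environment-variable test just before the kubectl-proxy one: it wires a single ConfigMap key into a container env var and checks the echoed value in the pod logs. A sketch with illustrative names:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.29
    command: ["sh", "-c", "echo $CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1            # expected log output: "value-1"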
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":193,"skipped":2960,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:03:14.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 20 22:03:14.068: INFO: Waiting up to 5m0s for pod "downward-api-1823eff3-437f-4a16-8f71-4be8a73a20c5" in namespace "downward-api-1505" to be "success or failure" Mar 20 22:03:14.071: INFO: Pod "downward-api-1823eff3-437f-4a16-8f71-4be8a73a20c5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.622357ms Mar 20 22:03:16.075: INFO: Pod "downward-api-1823eff3-437f-4a16-8f71-4be8a73a20c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00772692s Mar 20 22:03:18.079: INFO: Pod "downward-api-1823eff3-437f-4a16-8f71-4be8a73a20c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011534833s STEP: Saw pod success Mar 20 22:03:18.079: INFO: Pod "downward-api-1823eff3-437f-4a16-8f71-4be8a73a20c5" satisfied condition "success or failure" Mar 20 22:03:18.082: INFO: Trying to get logs from node jerma-worker pod downward-api-1823eff3-437f-4a16-8f71-4be8a73a20c5 container dapi-container: STEP: delete the pod Mar 20 22:03:18.116: INFO: Waiting for pod downward-api-1823eff3-437f-4a16-8f71-4be8a73a20c5 to disappear Mar 20 22:03:18.125: INFO: Pod downward-api-1823eff3-437f-4a16-8f71-4be8a73a20c5 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:03:18.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1505" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":194,"skipped":2986,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:03:18.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0320 22:03:29.178410 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 20 22:03:29.178: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:03:29.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9451" for this suite. 
• [SLOW TEST:11.055 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":195,"skipped":2993,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:03:29.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-190e1439-b85a-4952-b93e-e4de1d6b77c9 STEP: Creating a pod to test consume secrets Mar 20 22:03:29.271: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bb13d3d7-9385-49c3-a0bc-0df8d1d726fc" in namespace "projected-899" to be "success or failure" Mar 20 22:03:29.293: INFO: Pod "pod-projected-secrets-bb13d3d7-9385-49c3-a0bc-0df8d1d726fc": Phase="Pending", Reason="", readiness=false. Elapsed: 21.574418ms Mar 20 22:03:31.297: INFO: Pod "pod-projected-secrets-bb13d3d7-9385-49c3-a0bc-0df8d1d726fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025732336s Mar 20 22:03:33.301: INFO: Pod "pod-projected-secrets-bb13d3d7-9385-49c3-a0bc-0df8d1d726fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030203729s STEP: Saw pod success Mar 20 22:03:33.302: INFO: Pod "pod-projected-secrets-bb13d3d7-9385-49c3-a0bc-0df8d1d726fc" satisfied condition "success or failure" Mar 20 22:03:33.304: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-bb13d3d7-9385-49c3-a0bc-0df8d1d726fc container projected-secret-volume-test: STEP: delete the pod Mar 20 22:03:33.339: INFO: Waiting for pod pod-projected-secrets-bb13d3d7-9385-49c3-a0bc-0df8d1d726fc to disappear Mar 20 22:03:33.357: INFO: Pod pod-projected-secrets-bb13d3d7-9385-49c3-a0bc-0df8d1d726fc no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:03:33.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-899" for this suite. 
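[Editor's note] Same create-and-read pattern as the projected ConfigMap case, but sourced from a Secret; note the base64-encoded data field. A sketch with illustrative names:

apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-test
data:
  data-1: dmFsdWUtMQ==           # base64("value-1")
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/secret-volume/data-1"]   # expected log output: "value-1"
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test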
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":3005,"failed":0} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:03:33.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-4804 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-4804 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4804 Mar 20 22:03:33.489: INFO: Found 0 stateful pods, waiting for 1 Mar 20 22:03:43.494: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 20 22:03:43.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4804 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 20 22:03:43.777: INFO: stderr: "I0320 22:03:43.636099 2507 log.go:172] (0xc0007f6b00) (0xc0007e4000) Create stream\nI0320 22:03:43.636159 2507 log.go:172] (0xc0007f6b00) (0xc0007e4000) Stream added, broadcasting: 1\nI0320 22:03:43.639019 2507 log.go:172] (0xc0007f6b00) Reply frame received for 1\nI0320 22:03:43.639068 2507 log.go:172] (0xc0007f6b00) (0xc00062bb80) Create stream\nI0320 22:03:43.639087 2507 log.go:172] (0xc0007f6b00) (0xc00062bb80) Stream added, broadcasting: 3\nI0320 22:03:43.640092 2507 log.go:172] (0xc0007f6b00) Reply frame received for 3\nI0320 22:03:43.640152 2507 log.go:172] (0xc0007f6b00) (0xc0007e4140) Create stream\nI0320 22:03:43.640179 2507 log.go:172] (0xc0007f6b00) (0xc0007e4140) Stream added, broadcasting: 5\nI0320 22:03:43.641646 2507 log.go:172] (0xc0007f6b00) Reply frame received for 5\nI0320 22:03:43.732229 2507 log.go:172] (0xc0007f6b00) Data frame received for 5\nI0320 22:03:43.732256 2507 log.go:172] (0xc0007e4140) (5) Data frame handling\nI0320 22:03:43.732272 2507 log.go:172] (0xc0007e4140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0320 22:03:43.770663 2507 log.go:172] (0xc0007f6b00) Data frame received for 3\nI0320 22:03:43.770707 2507 log.go:172] (0xc00062bb80) (3) Data frame handling\nI0320 22:03:43.770743 2507 log.go:172] (0xc00062bb80) (3) Data frame sent\nI0320 22:03:43.770762 2507 log.go:172] (0xc0007f6b00) Data frame received for 3\nI0320 22:03:43.770780 2507 log.go:172] (0xc00062bb80) (3) Data frame handling\nI0320 22:03:43.770885 2507 
log.go:172] (0xc0007f6b00) Data frame received for 5\nI0320 22:03:43.770921 2507 log.go:172] (0xc0007e4140) (5) Data frame handling\nI0320 22:03:43.772845 2507 log.go:172] (0xc0007f6b00) Data frame received for 1\nI0320 22:03:43.772874 2507 log.go:172] (0xc0007e4000) (1) Data frame handling\nI0320 22:03:43.772893 2507 log.go:172] (0xc0007e4000) (1) Data frame sent\nI0320 22:03:43.772912 2507 log.go:172] (0xc0007f6b00) (0xc0007e4000) Stream removed, broadcasting: 1\nI0320 22:03:43.772933 2507 log.go:172] (0xc0007f6b00) Go away received\nI0320 22:03:43.773673 2507 log.go:172] (0xc0007f6b00) (0xc0007e4000) Stream removed, broadcasting: 1\nI0320 22:03:43.773702 2507 log.go:172] (0xc0007f6b00) (0xc00062bb80) Stream removed, broadcasting: 3\nI0320 22:03:43.773714 2507 log.go:172] (0xc0007f6b00) (0xc0007e4140) Stream removed, broadcasting: 5\n" Mar 20 22:03:43.778: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 20 22:03:43.778: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 20 22:03:43.781: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 20 22:03:53.785: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 20 22:03:53.786: INFO: Waiting for statefulset status.replicas updated to 0 Mar 20 22:03:53.798: INFO: POD NODE PHASE GRACE CONDITIONS Mar 20 22:03:53.798: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-20 22:03:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-20 22:03:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-20 22:03:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-20 22:03:33 +0000 UTC }] Mar 20 22:03:53.798: INFO: Mar 20 22:03:53.798: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 20 22:03:54.803: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.995811051s Mar 20 22:03:55.807: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.990958993s Mar 20 22:03:56.819: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.986271538s Mar 20 22:03:57.822: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.97494203s Mar 20 22:03:58.827: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.971005735s Mar 20 22:03:59.833: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.966171477s Mar 20 22:04:00.837: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.960775369s Mar 20 22:04:01.856: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.956327199s Mar 20 22:04:02.862: INFO: Verifying statefulset ss doesn't scale past 3 for another 937.099073ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4804 Mar 20 22:04:03.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4804 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 20 22:04:04.090: INFO: stderr: "I0320 22:04:04.006100 2528 log.go:172] (0xc000b44000) (0xc000528640) Create stream\nI0320 22:04:04.006174 2528 log.go:172] (0xc000b44000) (0xc000528640) Stream added, broadcasting: 1\nI0320 22:04:04.009616 2528 log.go:172] 
(0xc000b44000) Reply frame received for 1\nI0320 22:04:04.009669 2528 log.go:172] (0xc000b44000) (0xc000237400) Create stream\nI0320 22:04:04.009688 2528 log.go:172] (0xc000b44000) (0xc000237400) Stream added, broadcasting: 3\nI0320 22:04:04.010819 2528 log.go:172] (0xc000b44000) Reply frame received for 3\nI0320 22:04:04.010885 2528 log.go:172] (0xc000b44000) (0xc000aa0000) Create stream\nI0320 22:04:04.010915 2528 log.go:172] (0xc000b44000) (0xc000aa0000) Stream added, broadcasting: 5\nI0320 22:04:04.012045 2528 log.go:172] (0xc000b44000) Reply frame received for 5\nI0320 22:04:04.084427 2528 log.go:172] (0xc000b44000) Data frame received for 5\nI0320 22:04:04.084468 2528 log.go:172] (0xc000aa0000) (5) Data frame handling\nI0320 22:04:04.084491 2528 log.go:172] (0xc000aa0000) (5) Data frame sent\nI0320 22:04:04.084507 2528 log.go:172] (0xc000b44000) Data frame received for 5\nI0320 22:04:04.084517 2528 log.go:172] (0xc000aa0000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0320 22:04:04.084562 2528 log.go:172] (0xc000b44000) Data frame received for 3\nI0320 22:04:04.084590 2528 log.go:172] (0xc000237400) (3) Data frame handling\nI0320 22:04:04.084624 2528 log.go:172] (0xc000237400) (3) Data frame sent\nI0320 22:04:04.084646 2528 log.go:172] (0xc000b44000) Data frame received for 3\nI0320 22:04:04.084656 2528 log.go:172] (0xc000237400) (3) Data frame handling\nI0320 22:04:04.086226 2528 log.go:172] (0xc000b44000) Data frame received for 1\nI0320 22:04:04.086246 2528 log.go:172] (0xc000528640) (1) Data frame handling\nI0320 22:04:04.086267 2528 log.go:172] (0xc000528640) (1) Data frame sent\nI0320 22:04:04.086282 2528 log.go:172] (0xc000b44000) (0xc000528640) Stream removed, broadcasting: 1\nI0320 22:04:04.086396 2528 log.go:172] (0xc000b44000) Go away received\nI0320 22:04:04.086676 2528 log.go:172] (0xc000b44000) (0xc000528640) Stream removed, broadcasting: 1\nI0320 22:04:04.086700 2528 log.go:172] (0xc000b44000) (0xc000237400) Stream removed, broadcasting: 3\nI0320 22:04:04.086712 2528 log.go:172] (0xc000b44000) (0xc000aa0000) Stream removed, broadcasting: 5\n" Mar 20 22:04:04.090: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 20 22:04:04.090: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 20 22:04:04.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4804 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 20 22:04:04.318: INFO: stderr: "I0320 22:04:04.232368 2551 log.go:172] (0xc0005da210) (0xc000a22000) Create stream\nI0320 22:04:04.232421 2551 log.go:172] (0xc0005da210) (0xc000a22000) Stream added, broadcasting: 1\nI0320 22:04:04.238924 2551 log.go:172] (0xc0005da210) Reply frame received for 1\nI0320 22:04:04.238964 2551 log.go:172] (0xc0005da210) (0xc0006a1b80) Create stream\nI0320 22:04:04.238974 2551 log.go:172] (0xc0005da210) (0xc0006a1b80) Stream added, broadcasting: 3\nI0320 22:04:04.240278 2551 log.go:172] (0xc0005da210) Reply frame received for 3\nI0320 22:04:04.240321 2551 log.go:172] (0xc0005da210) (0xc000a220a0) Create stream\nI0320 22:04:04.240335 2551 log.go:172] (0xc0005da210) (0xc000a220a0) Stream added, broadcasting: 5\nI0320 22:04:04.241424 2551 log.go:172] (0xc0005da210) Reply frame received for 5\nI0320 22:04:04.311535 2551 log.go:172] (0xc0005da210) Data frame received for 5\nI0320 22:04:04.311576 2551 log.go:172] 
(0xc000a220a0) (5) Data frame handling\nI0320 22:04:04.311589 2551 log.go:172] (0xc000a220a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0320 22:04:04.311610 2551 log.go:172] (0xc0005da210) Data frame received for 3\nI0320 22:04:04.311658 2551 log.go:172] (0xc0006a1b80) (3) Data frame handling\nI0320 22:04:04.311685 2551 log.go:172] (0xc0006a1b80) (3) Data frame sent\nI0320 22:04:04.311712 2551 log.go:172] (0xc0005da210) Data frame received for 3\nI0320 22:04:04.311731 2551 log.go:172] (0xc0006a1b80) (3) Data frame handling\nI0320 22:04:04.311760 2551 log.go:172] (0xc0005da210) Data frame received for 5\nI0320 22:04:04.311782 2551 log.go:172] (0xc000a220a0) (5) Data frame handling\nI0320 22:04:04.313026 2551 log.go:172] (0xc0005da210) Data frame received for 1\nI0320 22:04:04.313062 2551 log.go:172] (0xc000a22000) (1) Data frame handling\nI0320 22:04:04.313085 2551 log.go:172] (0xc000a22000) (1) Data frame sent\nI0320 22:04:04.313386 2551 log.go:172] (0xc0005da210) (0xc000a22000) Stream removed, broadcasting: 1\nI0320 22:04:04.313448 2551 log.go:172] (0xc0005da210) Go away received\nI0320 22:04:04.313950 2551 log.go:172] (0xc0005da210) (0xc000a22000) Stream removed, broadcasting: 1\nI0320 22:04:04.313979 2551 log.go:172] (0xc0005da210) (0xc0006a1b80) Stream removed, broadcasting: 3\nI0320 22:04:04.313992 2551 log.go:172] (0xc0005da210) (0xc000a220a0) Stream removed, broadcasting: 5\n" Mar 20 22:04:04.318: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 20 22:04:04.318: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 20 22:04:04.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4804 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 20 22:04:04.515: INFO: stderr: "I0320 22:04:04.446265 2573 log.go:172] (0xc000a20160) (0xc0004054a0) Create stream\nI0320 22:04:04.446314 2573 log.go:172] (0xc000a20160) (0xc0004054a0) Stream added, broadcasting: 1\nI0320 22:04:04.448755 2573 log.go:172] (0xc000a20160) Reply frame received for 1\nI0320 22:04:04.448813 2573 log.go:172] (0xc000a20160) (0xc000ad4000) Create stream\nI0320 22:04:04.448831 2573 log.go:172] (0xc000a20160) (0xc000ad4000) Stream added, broadcasting: 3\nI0320 22:04:04.450015 2573 log.go:172] (0xc000a20160) Reply frame received for 3\nI0320 22:04:04.450064 2573 log.go:172] (0xc000a20160) (0xc000b60000) Create stream\nI0320 22:04:04.450080 2573 log.go:172] (0xc000a20160) (0xc000b60000) Stream added, broadcasting: 5\nI0320 22:04:04.450958 2573 log.go:172] (0xc000a20160) Reply frame received for 5\nI0320 22:04:04.507853 2573 log.go:172] (0xc000a20160) Data frame received for 3\nI0320 22:04:04.507899 2573 log.go:172] (0xc000a20160) Data frame received for 5\nI0320 22:04:04.507933 2573 log.go:172] (0xc000b60000) (5) Data frame handling\nI0320 22:04:04.507948 2573 log.go:172] (0xc000b60000) (5) Data frame sent\nI0320 22:04:04.507961 2573 log.go:172] (0xc000a20160) Data frame received for 5\nI0320 22:04:04.507970 2573 log.go:172] (0xc000b60000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0320 22:04:04.508001 2573 log.go:172] (0xc000ad4000) (3) Data frame handling\nI0320 22:04:04.508026 2573 log.go:172] (0xc000ad4000) (3) Data 
frame sent\nI0320 22:04:04.508043 2573 log.go:172] (0xc000a20160) Data frame received for 3\nI0320 22:04:04.508049 2573 log.go:172] (0xc000ad4000) (3) Data frame handling\nI0320 22:04:04.509828 2573 log.go:172] (0xc000a20160) Data frame received for 1\nI0320 22:04:04.509854 2573 log.go:172] (0xc0004054a0) (1) Data frame handling\nI0320 22:04:04.509889 2573 log.go:172] (0xc0004054a0) (1) Data frame sent\nI0320 22:04:04.509930 2573 log.go:172] (0xc000a20160) (0xc0004054a0) Stream removed, broadcasting: 1\nI0320 22:04:04.510088 2573 log.go:172] (0xc000a20160) Go away received\nI0320 22:04:04.510296 2573 log.go:172] (0xc000a20160) (0xc0004054a0) Stream removed, broadcasting: 1\nI0320 22:04:04.510314 2573 log.go:172] (0xc000a20160) (0xc000ad4000) Stream removed, broadcasting: 3\nI0320 22:04:04.510326 2573 log.go:172] (0xc000a20160) (0xc000b60000) Stream removed, broadcasting: 5\n" Mar 20 22:04:04.515: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 20 22:04:04.515: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 20 22:04:04.545: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Mar 20 22:04:14.550: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 20 22:04:14.550: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 20 22:04:14.550: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 20 22:04:14.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4804 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 20 22:04:14.788: INFO: stderr: "I0320 22:04:14.699092 2593 log.go:172] (0xc0000f1600) (0xc000aca0a0) Create stream\nI0320 22:04:14.699150 2593 log.go:172] (0xc0000f1600) (0xc000aca0a0) Stream added, broadcasting: 1\nI0320 22:04:14.701967 2593 log.go:172] (0xc0000f1600) Reply frame received for 1\nI0320 22:04:14.702000 2593 log.go:172] (0xc0000f1600) (0xc0005edae0) Create stream\nI0320 22:04:14.702009 2593 log.go:172] (0xc0000f1600) (0xc0005edae0) Stream added, broadcasting: 3\nI0320 22:04:14.702868 2593 log.go:172] (0xc0000f1600) Reply frame received for 3\nI0320 22:04:14.702909 2593 log.go:172] (0xc0000f1600) (0xc000aca1e0) Create stream\nI0320 22:04:14.702922 2593 log.go:172] (0xc0000f1600) (0xc000aca1e0) Stream added, broadcasting: 5\nI0320 22:04:14.704863 2593 log.go:172] (0xc0000f1600) Reply frame received for 5\nI0320 22:04:14.781914 2593 log.go:172] (0xc0000f1600) Data frame received for 3\nI0320 22:04:14.781949 2593 log.go:172] (0xc0005edae0) (3) Data frame handling\nI0320 22:04:14.781990 2593 log.go:172] (0xc0000f1600) Data frame received for 5\nI0320 22:04:14.782033 2593 log.go:172] (0xc000aca1e0) (5) Data frame handling\nI0320 22:04:14.782056 2593 log.go:172] (0xc000aca1e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0320 22:04:14.782075 2593 log.go:172] (0xc0000f1600) Data frame received for 5\nI0320 22:04:14.782092 2593 log.go:172] (0xc000aca1e0) (5) Data frame handling\nI0320 22:04:14.782128 2593 log.go:172] (0xc0005edae0) (3) Data frame sent\nI0320 22:04:14.782155 2593 log.go:172] (0xc0000f1600) Data frame received for 3\nI0320 22:04:14.782173 2593 log.go:172] (0xc0005edae0) (3) Data frame handling\nI0320 22:04:14.783832 
2593 log.go:172] (0xc0000f1600) Data frame received for 1\nI0320 22:04:14.783855 2593 log.go:172] (0xc000aca0a0) (1) Data frame handling\nI0320 22:04:14.783869 2593 log.go:172] (0xc000aca0a0) (1) Data frame sent\nI0320 22:04:14.783885 2593 log.go:172] (0xc0000f1600) (0xc000aca0a0) Stream removed, broadcasting: 1\nI0320 22:04:14.783901 2593 log.go:172] (0xc0000f1600) Go away received\nI0320 22:04:14.784337 2593 log.go:172] (0xc0000f1600) (0xc000aca0a0) Stream removed, broadcasting: 1\nI0320 22:04:14.784363 2593 log.go:172] (0xc0000f1600) (0xc0005edae0) Stream removed, broadcasting: 3\nI0320 22:04:14.784377 2593 log.go:172] (0xc0000f1600) (0xc000aca1e0) Stream removed, broadcasting: 5\n" Mar 20 22:04:14.789: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 20 22:04:14.789: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 20 22:04:14.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4804 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 20 22:04:15.090: INFO: stderr: "I0320 22:04:14.911994 2617 log.go:172] (0xc000b18000) (0xc0005f1a40) Create stream\nI0320 22:04:14.912063 2617 log.go:172] (0xc000b18000) (0xc0005f1a40) Stream added, broadcasting: 1\nI0320 22:04:14.914013 2617 log.go:172] (0xc000b18000) Reply frame received for 1\nI0320 22:04:14.914063 2617 log.go:172] (0xc000b18000) (0xc0009f4000) Create stream\nI0320 22:04:14.914077 2617 log.go:172] (0xc000b18000) (0xc0009f4000) Stream added, broadcasting: 3\nI0320 22:04:14.915138 2617 log.go:172] (0xc000b18000) Reply frame received for 3\nI0320 22:04:14.915201 2617 log.go:172] (0xc000b18000) (0xc0009f40a0) Create stream\nI0320 22:04:14.915238 2617 log.go:172] (0xc000b18000) (0xc0009f40a0) Stream added, broadcasting: 5\nI0320 22:04:14.916129 2617 log.go:172] (0xc000b18000) Reply frame received for 5\nI0320 22:04:14.976343 2617 log.go:172] (0xc000b18000) Data frame received for 5\nI0320 22:04:14.976368 2617 log.go:172] (0xc0009f40a0) (5) Data frame handling\nI0320 22:04:14.976385 2617 log.go:172] (0xc0009f40a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0320 22:04:15.082581 2617 log.go:172] (0xc000b18000) Data frame received for 3\nI0320 22:04:15.082632 2617 log.go:172] (0xc0009f4000) (3) Data frame handling\nI0320 22:04:15.082669 2617 log.go:172] (0xc0009f4000) (3) Data frame sent\nI0320 22:04:15.082730 2617 log.go:172] (0xc000b18000) Data frame received for 5\nI0320 22:04:15.082810 2617 log.go:172] (0xc0009f40a0) (5) Data frame handling\nI0320 22:04:15.082837 2617 log.go:172] (0xc000b18000) Data frame received for 3\nI0320 22:04:15.082858 2617 log.go:172] (0xc0009f4000) (3) Data frame handling\nI0320 22:04:15.084564 2617 log.go:172] (0xc000b18000) Data frame received for 1\nI0320 22:04:15.084592 2617 log.go:172] (0xc0005f1a40) (1) Data frame handling\nI0320 22:04:15.084613 2617 log.go:172] (0xc0005f1a40) (1) Data frame sent\nI0320 22:04:15.084633 2617 log.go:172] (0xc000b18000) (0xc0005f1a40) Stream removed, broadcasting: 1\nI0320 22:04:15.084657 2617 log.go:172] (0xc000b18000) Go away received\nI0320 22:04:15.085028 2617 log.go:172] (0xc000b18000) (0xc0005f1a40) Stream removed, broadcasting: 1\nI0320 22:04:15.085055 2617 log.go:172] (0xc000b18000) (0xc0009f4000) Stream removed, broadcasting: 3\nI0320 22:04:15.085068 2617 log.go:172] (0xc000b18000) (0xc0009f40a0) Stream removed, broadcasting: 5\n" 
Mar 20 22:04:15.090: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 20 22:04:15.090: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 20 22:04:15.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4804 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 20 22:04:15.328: INFO: stderr: "I0320 22:04:15.225383 2639 log.go:172] (0xc0009309a0) (0xc0006dfa40) Create stream\nI0320 22:04:15.225468 2639 log.go:172] (0xc0009309a0) (0xc0006dfa40) Stream added, broadcasting: 1\nI0320 22:04:15.227905 2639 log.go:172] (0xc0009309a0) Reply frame received for 1\nI0320 22:04:15.227981 2639 log.go:172] (0xc0009309a0) (0xc00037a000) Create stream\nI0320 22:04:15.228001 2639 log.go:172] (0xc0009309a0) (0xc00037a000) Stream added, broadcasting: 3\nI0320 22:04:15.228859 2639 log.go:172] (0xc0009309a0) Reply frame received for 3\nI0320 22:04:15.228892 2639 log.go:172] (0xc0009309a0) (0xc0006dfcc0) Create stream\nI0320 22:04:15.228902 2639 log.go:172] (0xc0009309a0) (0xc0006dfcc0) Stream added, broadcasting: 5\nI0320 22:04:15.230047 2639 log.go:172] (0xc0009309a0) Reply frame received for 5\nI0320 22:04:15.288447 2639 log.go:172] (0xc0009309a0) Data frame received for 5\nI0320 22:04:15.288473 2639 log.go:172] (0xc0006dfcc0) (5) Data frame handling\nI0320 22:04:15.288488 2639 log.go:172] (0xc0006dfcc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0320 22:04:15.322055 2639 log.go:172] (0xc0009309a0) Data frame received for 3\nI0320 22:04:15.322202 2639 log.go:172] (0xc00037a000) (3) Data frame handling\nI0320 22:04:15.322211 2639 log.go:172] (0xc00037a000) (3) Data frame sent\nI0320 22:04:15.322218 2639 log.go:172] (0xc0009309a0) Data frame received for 3\nI0320 22:04:15.322237 2639 log.go:172] (0xc0009309a0) Data frame received for 5\nI0320 22:04:15.322269 2639 log.go:172] (0xc0006dfcc0) (5) Data frame handling\nI0320 22:04:15.322295 2639 log.go:172] (0xc00037a000) (3) Data frame handling\nI0320 22:04:15.323881 2639 log.go:172] (0xc0009309a0) Data frame received for 1\nI0320 22:04:15.323912 2639 log.go:172] (0xc0006dfa40) (1) Data frame handling\nI0320 22:04:15.323946 2639 log.go:172] (0xc0006dfa40) (1) Data frame sent\nI0320 22:04:15.323972 2639 log.go:172] (0xc0009309a0) (0xc0006dfa40) Stream removed, broadcasting: 1\nI0320 22:04:15.324241 2639 log.go:172] (0xc0009309a0) Go away received\nI0320 22:04:15.324400 2639 log.go:172] (0xc0009309a0) (0xc0006dfa40) Stream removed, broadcasting: 1\nI0320 22:04:15.324422 2639 log.go:172] (0xc0009309a0) (0xc00037a000) Stream removed, broadcasting: 3\nI0320 22:04:15.324435 2639 log.go:172] (0xc0009309a0) (0xc0006dfcc0) Stream removed, broadcasting: 5\n" Mar 20 22:04:15.328: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 20 22:04:15.328: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 20 22:04:15.328: INFO: Waiting for statefulset status.replicas updated to 0 Mar 20 22:04:15.331: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 20 22:04:25.346: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 20 22:04:25.346: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 20 22:04:25.346: INFO: 
Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Mar 20 22:04:25.355: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 20 22:04:25.355: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-20 22:03:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-20 22:04:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-20 22:04:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-20 22:03:33 +0000 UTC }]
Mar 20 22:04:25.355: INFO: ss-1 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-20 22:03:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-20 22:04:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-20 22:04:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-20 22:03:53 +0000 UTC }]
Mar 20 22:04:25.355: INFO: ss-2 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-20 22:03:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-20 22:04:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-20 22:04:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-20 22:03:53 +0000 UTC }]
Mar 20 22:04:25.355: INFO:
Mar 20 22:04:25.355: INFO: StatefulSet ss has not reached scale 0, at 3
Mar 20 22:04:26.359: INFO: StatefulSet ss has not reached scale 0, at 3
Mar 20 22:04:27.364: INFO: StatefulSet ss has not reached scale 0, at 3
Mar 20 22:04:28.369: INFO: StatefulSet ss has not reached scale 0, at 2
Mar 20 22:04:29.373: INFO: StatefulSet ss has not reached scale 0, at 2
Mar 20 22:04:30.378: INFO: StatefulSet ss has not reached scale 0, at 2
Mar 20 22:04:31.383: INFO: StatefulSet ss has not reached scale 0, at 2
Mar 20 22:04:32.387: INFO: StatefulSet ss has not reached scale 0, at 2
Mar 20 22:04:33.392: INFO: StatefulSet ss has not reached scale 0, at 2
Mar 20 22:04:34.396: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-4804
Mar 20 22:04:35.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4804 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 20 22:04:35.542: INFO: rc: 1
Mar 20 22:04:35.542: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4804 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: error: unable to upgrade connection: container not found ("webserver")
error: exit status 1
Mar 20 22:04:45.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4804 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 20 22:04:45.650: INFO: rc: 1
Mar 20 22:04:45.650: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4804 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-1" not found
error: exit status 1
Mar 20 22:09:30.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4804 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 20 22:09:30.797: INFO: rc: 1
Mar 20 22:09:30.797: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4804 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-1" not found
error: exit status 1
Mar 20 22:09:40.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4804 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 20 22:09:40.893: INFO: rc: 1
Mar 20 22:09:40.893: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1:
Mar 20 22:09:40.893: INFO: Scaling statefulset ss to 0
Mar 20 22:09:40.902: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Mar 20 22:09:40.905: INFO: Deleting all statefulset in ns statefulset-4804
Mar 20 22:09:40.908: INFO: Scaling statefulset ss to 0
Mar 20 22:09:40.916: INFO: Waiting for statefulset status.replicas updated to 0
Mar 20 22:09:40.918: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:09:40.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4804" for this suite. • [SLOW TEST:367.587 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":197,"skipped":3006,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:09:40.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-7535 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-7535 STEP: creating replication controller externalsvc in namespace services-7535 I0320 22:09:41.131700 7 runners.go:189] Created replication controller with name: externalsvc, namespace: services-7535, replica count: 2 I0320 22:09:44.182149 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0320 22:09:47.182367 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Mar 20 22:09:47.301: INFO: Creating new exec pod Mar 20 22:09:51.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7535 execpodg6qxd -- /bin/sh -x -c nslookup nodeport-service' Mar 20 22:09:51.574: INFO: stderr: "I0320 22:09:51.451854 3310 log.go:172] (0xc0000f2b00) (0xc00061dc20) Create stream\nI0320 22:09:51.451904 3310 log.go:172] (0xc0000f2b00) (0xc00061dc20) Stream added, broadcasting: 1\nI0320 22:09:51.458165 3310 log.go:172] (0xc0000f2b00) Reply frame received for 1\nI0320 22:09:51.458235 3310 log.go:172] (0xc0000f2b00) (0xc00073c820) Create stream\nI0320 22:09:51.458277 3310 log.go:172] (0xc0000f2b00) (0xc00073c820) Stream added, broadcasting: 3\nI0320 22:09:51.460736 3310 log.go:172] 
(0xc0000f2b00) Reply frame received for 3\nI0320 22:09:51.460784 3310 log.go:172] (0xc0000f2b00) (0xc0007a4000) Create stream\nI0320 22:09:51.460806 3310 log.go:172] (0xc0000f2b00) (0xc0007a4000) Stream added, broadcasting: 5\nI0320 22:09:51.461774 3310 log.go:172] (0xc0000f2b00) Reply frame received for 5\nI0320 22:09:51.556966 3310 log.go:172] (0xc0000f2b00) Data frame received for 5\nI0320 22:09:51.556996 3310 log.go:172] (0xc0007a4000) (5) Data frame handling\nI0320 22:09:51.557017 3310 log.go:172] (0xc0007a4000) (5) Data frame sent\n+ nslookup nodeport-service\nI0320 22:09:51.565735 3310 log.go:172] (0xc0000f2b00) Data frame received for 3\nI0320 22:09:51.565763 3310 log.go:172] (0xc00073c820) (3) Data frame handling\nI0320 22:09:51.565785 3310 log.go:172] (0xc00073c820) (3) Data frame sent\nI0320 22:09:51.566788 3310 log.go:172] (0xc0000f2b00) Data frame received for 3\nI0320 22:09:51.566815 3310 log.go:172] (0xc00073c820) (3) Data frame handling\nI0320 22:09:51.566846 3310 log.go:172] (0xc00073c820) (3) Data frame sent\nI0320 22:09:51.567516 3310 log.go:172] (0xc0000f2b00) Data frame received for 5\nI0320 22:09:51.567551 3310 log.go:172] (0xc0007a4000) (5) Data frame handling\nI0320 22:09:51.567589 3310 log.go:172] (0xc0000f2b00) Data frame received for 3\nI0320 22:09:51.567617 3310 log.go:172] (0xc00073c820) (3) Data frame handling\nI0320 22:09:51.570368 3310 log.go:172] (0xc0000f2b00) Data frame received for 1\nI0320 22:09:51.570391 3310 log.go:172] (0xc00061dc20) (1) Data frame handling\nI0320 22:09:51.570405 3310 log.go:172] (0xc00061dc20) (1) Data frame sent\nI0320 22:09:51.570427 3310 log.go:172] (0xc0000f2b00) (0xc00061dc20) Stream removed, broadcasting: 1\nI0320 22:09:51.570455 3310 log.go:172] (0xc0000f2b00) Go away received\nI0320 22:09:51.570745 3310 log.go:172] (0xc0000f2b00) (0xc00061dc20) Stream removed, broadcasting: 1\nI0320 22:09:51.570767 3310 log.go:172] (0xc0000f2b00) (0xc00073c820) Stream removed, broadcasting: 3\nI0320 22:09:51.570778 3310 log.go:172] (0xc0000f2b00) (0xc0007a4000) Stream removed, broadcasting: 5\n" Mar 20 22:09:51.574: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-7535.svc.cluster.local\tcanonical name = externalsvc.services-7535.svc.cluster.local.\nName:\texternalsvc.services-7535.svc.cluster.local\nAddress: 10.99.124.244\n\n" STEP: deleting ReplicationController externalsvc in namespace services-7535, will wait for the garbage collector to delete the pods Mar 20 22:09:51.634: INFO: Deleting ReplicationController externalsvc took: 6.18649ms Mar 20 22:09:51.734: INFO: Terminating ReplicationController externalsvc pods took: 100.235464ms Mar 20 22:09:59.359: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:09:59.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7535" for this suite. 
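------------------------------
For reference, the type flip exercised above ("changing the NodePort service to type=ExternalName") can be reproduced with client-go along the following lines. This is a minimal sketch, not the suite's own helper code: it assumes a context-aware client-go (v0.18 or newer, while the cluster under test here is v1.17) and reuses the names from the log (namespace services-7535, services nodeport-service and externalsvc).

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	svcs := cs.CoreV1().Services("services-7535")
	svc, err := svcs.Get(context.TODO(), "nodeport-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// An ExternalName service is a pure DNS alias (CNAME), so the selector,
	// ports and cluster IP are cleared here; in particular spec.clusterIP
	// must be empty for the type change to be accepted.
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ExternalName = "externalsvc.services-7535.svc.cluster.local"
	svc.Spec.Selector = nil
	svc.Spec.Ports = nil
	svc.Spec.ClusterIP = ""

	if _, err := svcs.Update(context.TODO(), svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("nodeport-service now aliases externalsvc")
}

The nslookup output captured above is exactly what such a flip should produce: nodeport-service.services-7535.svc.cluster.local resolving as a canonical name for externalsvc.services-7535.svc.cluster.local.
------------------------------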
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:18.433 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":198,"skipped":3021,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:09:59.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:09:59.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4775" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":199,"skipped":3043,"failed":0} SSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:09:59.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 20 22:09:59.524: INFO: Waiting up to 5m0s for pod "downward-api-963b7299-8f1f-45c2-8e9f-b031690a8741" in namespace "downward-api-6329" to be "success or failure" Mar 20 22:09:59.539: INFO: Pod "downward-api-963b7299-8f1f-45c2-8e9f-b031690a8741": Phase="Pending", Reason="", readiness=false. Elapsed: 15.30272ms Mar 20 22:10:01.556: INFO: Pod "downward-api-963b7299-8f1f-45c2-8e9f-b031690a8741": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032187408s Mar 20 22:10:03.560: INFO: Pod "downward-api-963b7299-8f1f-45c2-8e9f-b031690a8741": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.036389761s STEP: Saw pod success Mar 20 22:10:03.560: INFO: Pod "downward-api-963b7299-8f1f-45c2-8e9f-b031690a8741" satisfied condition "success or failure" Mar 20 22:10:03.564: INFO: Trying to get logs from node jerma-worker2 pod downward-api-963b7299-8f1f-45c2-8e9f-b031690a8741 container dapi-container: STEP: delete the pod Mar 20 22:10:03.594: INFO: Waiting for pod downward-api-963b7299-8f1f-45c2-8e9f-b031690a8741 to disappear Mar 20 22:10:03.610: INFO: Pod downward-api-963b7299-8f1f-45c2-8e9f-b031690a8741 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:10:03.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6329" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3054,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:10:03.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-868ec13f-c965-451c-989f-91e7abec7099 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-868ec13f-c965-451c-989f-91e7abec7099 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:11:24.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9749" for this suite. 
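The projected-ConfigMap test finishing here is the volume-projection variant of the plain ConfigMap update test: the kubelet refreshes the projected file in place when the ConfigMap changes, without restarting the pod. A rough hand-run equivalent, assuming a reachable cluster; all object names are illustrative:

kubectl create configmap projected-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-watch
spec:
  containers:
  - name: watcher
    image: busybox:1.28
    command: ["/bin/sh", "-c", "while true; do cat /etc/projected/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: projected-cm
EOF
# update the ConfigMap; the mounted file follows after the kubelet's sync period
kubectl patch configmap projected-cm -p '{"data":{"data-1":"value-2"}}'
kubectl logs -f projected-cm-watch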
• [SLOW TEST:80.477 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":201,"skipped":3064,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:11:24.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 20 22:11:24.163: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 20 22:11:29.180: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 20 22:11:29.180: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 20 22:11:29.222: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-1192 /apis/apps/v1/namespaces/deployment-1192/deployments/test-cleanup-deployment e6a836e2-dbd4-41d3-b81f-4eac550f9367 1395745 1 2020-03-20 22:11:29 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003b23bb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Mar 20
22:11:29.246: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-1192 /apis/apps/v1/namespaces/deployment-1192/replicasets/test-cleanup-deployment-55ffc6b7b6 a3770740-7ab9-4ce3-ad0e-f0a9df4104bf 1395747 1 2020-03-20 22:11:29 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment e6a836e2-dbd4-41d3-b81f-4eac550f9367 0xc00265a0f7 0xc00265a0f8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00265a168 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 20 22:11:29.246: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Mar 20 22:11:29.246: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-1192 /apis/apps/v1/namespaces/deployment-1192/replicasets/test-cleanup-controller caa4424f-59c6-4d88-97b1-6ea832481b59 1395746 1 2020-03-20 22:11:24 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment e6a836e2-dbd4-41d3-b81f-4eac550f9367 0xc00265a027 0xc00265a028}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00265a088 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 20 22:11:29.272: INFO: Pod "test-cleanup-controller-ttqkn" is available: &Pod{ObjectMeta:{test-cleanup-controller-ttqkn test-cleanup-controller- deployment-1192 /api/v1/namespaces/deployment-1192/pods/test-cleanup-controller-ttqkn e016e7d1-1914-4f04-9bf0-53f4eef7b520 1395735 0 2020-03-20 22:11:24 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller caa4424f-59c6-4d88-97b1-6ea832481b59 
0xc0026c8fb7 0xc0026c8fb8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qmlb5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qmlb5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qmlb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 22:11:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 22:11:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 22:11:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 22:11:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.30,StartTime:2020-03-20 22:11:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-20 22:11:26 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://48a4f44b769cd08df580c38932f142fb539a3a6e7ef3945489e3a18a96904f78,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.30,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 20 22:11:29.272: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-7lnsb" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-7lnsb test-cleanup-deployment-55ffc6b7b6- deployment-1192 /api/v1/namespaces/deployment-1192/pods/test-cleanup-deployment-55ffc6b7b6-7lnsb 56a0ca5c-9a6e-41d4-9735-b67967720b75 1395753 0 2020-03-20 22:11:29 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 a3770740-7ab9-4ce3-ad0e-f0a9df4104bf 0xc0026c9147 0xc0026c9148}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qmlb5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qmlb5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qmlb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamesp
ace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 22:11:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:11:29.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1192" for this suite. • [SLOW TEST:5.215 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":202,"skipped":3127,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:11:29.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1733 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 20 22:11:29.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-8799' Mar 20 22:11:29.547: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 20 22:11:29.547: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1738 Mar 20 22:11:33.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-8799' Mar 20 22:11:33.897: INFO: stderr: "" Mar 20 22:11:33.897: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:11:33.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8799" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":203,"skipped":3177,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:11:33.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1897 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 20 22:11:33.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-8924' Mar 20 22:11:34.043: INFO: stderr: "" Mar 20 22:11:34.044: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Mar 20 22:11:39.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-8924 -o json' Mar 20 22:11:39.189: INFO: stderr: "" Mar 20 22:11:39.189: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-20T22:11:34Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-8924\",\n \"resourceVersion\": \"1395852\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-8924/pods/e2e-test-httpd-pod\",\n \"uid\": \"3dec36aa-6b88-4d2c-b737-c53be504243b\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n 
\"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-s625b\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-s625b\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-s625b\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-20T22:11:34Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-20T22:11:36Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-20T22:11:36Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-20T22:11:34Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://76cf902f2fae1eada3fb724a67281cf213178799ef095b5deaa84cc33ad60dd0\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-20T22:11:36Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.10\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.31\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.31\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-20T22:11:34Z\"\n }\n}\n" STEP: replace the image in the pod Mar 20 22:11:39.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-8924' Mar 20 22:11:39.522: INFO: stderr: "" Mar 20 22:11:39.522: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1902 Mar 20 22:11:39.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-8924' Mar 20 22:11:49.230: INFO: stderr: "" Mar 20 22:11:49.230: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:11:49.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8924" for this suite. 
• [SLOW TEST:15.333 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1893 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":204,"skipped":3181,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:11:49.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-8180e971-ea55-4c91-b9c7-32bb8893e6e6 in namespace container-probe-3553 Mar 20 22:11:53.316: INFO: Started pod busybox-8180e971-ea55-4c91-b9c7-32bb8893e6e6 in namespace container-probe-3553 STEP: checking the pod's current state and verifying that restartCount is present Mar 20 22:11:53.319: INFO: Initial restart count of pod busybox-8180e971-ea55-4c91-b9c7-32bb8893e6e6 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:15:53.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3553" for this suite. 
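The probe test above runs for roughly four minutes purely to assert a negative: a pod whose exec liveness probe keeps succeeding is never restarted. A minimal manifest in the same spirit, assuming a reachable cluster; the pod name and timings are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-healthy
spec:
  containers:
  - name: busybox
    image: busybox:1.29
    # keep /tmp/health present for the life of the container so the probe always succeeds
    command: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# after a few minutes the restart count should still be 0
kubectl get pod liveness-exec-healthy -o jsonpath='{.status.containerStatuses[0].restartCount}'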
• [SLOW TEST:244.817 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3187,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:15:54.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 20 22:15:54.102: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:15:55.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7963" for this suite. 
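CRD defaulting, which the test above exercises, means the API server injects schema defaults both at admission time ("for requests") and when objects are read back ("from storage"). A sketch with a hypothetical widgets.example.com CRD; the default keyword is supported in apiextensions.k8s.io/v1 structural schemas (Kubernetes 1.16+):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer
                default: 1
EOF
# give the CRD a moment to be established, then create an instance omitting spec.replicas
kubectl apply -f - <<'EOF'
apiVersion: example.com/v1
kind: Widget
metadata:
  name: w1
spec: {}
EOF
# the server has filled in the default
kubectl get widget w1 -o jsonpath='{.spec.replicas}'   # prints 1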
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":206,"skipped":3201,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:15:55.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 20 22:15:55.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Mar 20 22:15:55.575: INFO: stderr: "" Mar 20 22:15:55.575: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.3\", GitCommit:\"06ad960bfd03b39c8310aaf92d1e7c12ce618213\", GitTreeState:\"clean\", BuildDate:\"2020-03-18T15:31:51Z\", GoVersion:\"go1.13.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:15:55.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6783" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":207,"skipped":3213,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:15:55.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 20 22:15:55.668: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9e4a4ad2-e9e9-401a-8b6b-25ba124b6816" in namespace "downward-api-1507" to be "success or failure" Mar 20 22:15:55.694: INFO: Pod "downwardapi-volume-9e4a4ad2-e9e9-401a-8b6b-25ba124b6816": Phase="Pending", Reason="", readiness=false. Elapsed: 26.510651ms Mar 20 22:15:57.698: INFO: Pod "downwardapi-volume-9e4a4ad2-e9e9-401a-8b6b-25ba124b6816": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0305909s Mar 20 22:15:59.703: INFO: Pod "downwardapi-volume-9e4a4ad2-e9e9-401a-8b6b-25ba124b6816": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035323863s STEP: Saw pod success Mar 20 22:15:59.703: INFO: Pod "downwardapi-volume-9e4a4ad2-e9e9-401a-8b6b-25ba124b6816" satisfied condition "success or failure" Mar 20 22:15:59.707: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-9e4a4ad2-e9e9-401a-8b6b-25ba124b6816 container client-container: STEP: delete the pod Mar 20 22:15:59.739: INFO: Waiting for pod downwardapi-volume-9e4a4ad2-e9e9-401a-8b6b-25ba124b6816 to disappear Mar 20 22:15:59.744: INFO: Pod downwardapi-volume-9e4a4ad2-e9e9-401a-8b6b-25ba124b6816 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:15:59.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1507" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3238,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:15:59.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-2kh7 STEP: Creating a pod to test atomic-volume-subpath Mar 20 22:15:59.871: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-2kh7" in namespace "subpath-1273" to be "success or failure" Mar 20 22:15:59.882: INFO: Pod "pod-subpath-test-configmap-2kh7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.985778ms Mar 20 22:16:01.886: INFO: Pod "pod-subpath-test-configmap-2kh7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014730699s Mar 20 22:16:03.890: INFO: Pod "pod-subpath-test-configmap-2kh7": Phase="Running", Reason="", readiness=true. Elapsed: 4.018897561s Mar 20 22:16:05.894: INFO: Pod "pod-subpath-test-configmap-2kh7": Phase="Running", Reason="", readiness=true. Elapsed: 6.022841085s Mar 20 22:16:07.898: INFO: Pod "pod-subpath-test-configmap-2kh7": Phase="Running", Reason="", readiness=true. Elapsed: 8.026547953s Mar 20 22:16:09.902: INFO: Pod "pod-subpath-test-configmap-2kh7": Phase="Running", Reason="", readiness=true. Elapsed: 10.030760182s Mar 20 22:16:11.906: INFO: Pod "pod-subpath-test-configmap-2kh7": Phase="Running", Reason="", readiness=true. Elapsed: 12.034496103s Mar 20 22:16:13.910: INFO: Pod "pod-subpath-test-configmap-2kh7": Phase="Running", Reason="", readiness=true. Elapsed: 14.038160221s Mar 20 22:16:15.914: INFO: Pod "pod-subpath-test-configmap-2kh7": Phase="Running", Reason="", readiness=true. Elapsed: 16.042330292s Mar 20 22:16:17.917: INFO: Pod "pod-subpath-test-configmap-2kh7": Phase="Running", Reason="", readiness=true. Elapsed: 18.045867506s Mar 20 22:16:19.921: INFO: Pod "pod-subpath-test-configmap-2kh7": Phase="Running", Reason="", readiness=true. Elapsed: 20.049292368s Mar 20 22:16:21.924: INFO: Pod "pod-subpath-test-configmap-2kh7": Phase="Running", Reason="", readiness=true. Elapsed: 22.052150407s Mar 20 22:16:23.928: INFO: Pod "pod-subpath-test-configmap-2kh7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.056918907s STEP: Saw pod success Mar 20 22:16:23.928: INFO: Pod "pod-subpath-test-configmap-2kh7" satisfied condition "success or failure" Mar 20 22:16:23.933: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-2kh7 container test-container-subpath-configmap-2kh7: STEP: delete the pod Mar 20 22:16:23.979: INFO: Waiting for pod pod-subpath-test-configmap-2kh7 to disappear Mar 20 22:16:23.999: INFO: Pod pod-subpath-test-configmap-2kh7 no longer exists STEP: Deleting pod pod-subpath-test-configmap-2kh7 Mar 20 22:16:23.999: INFO: Deleting pod "pod-subpath-test-configmap-2kh7" in namespace "subpath-1273" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:16:24.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1273" for this suite. • [SLOW TEST:24.274 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":209,"skipped":3321,"failed":0} SSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:16:24.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 20 22:16:24.148: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/ pods/ (200; 5.91051ms)
Mar 20 22:16:24.151: INFO: (1) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.837754ms)
Mar 20 22:16:24.154: INFO: (2) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.752535ms)
Mar 20 22:16:24.157: INFO: (3) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.932186ms)
Mar 20 22:16:24.160: INFO: (4) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.78169ms)
Mar 20 22:16:24.163: INFO: (5) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.879587ms)
Mar 20 22:16:24.165: INFO: (6) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.847893ms)
Mar 20 22:16:24.168: INFO: (7) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.953076ms)
Mar 20 22:16:24.171: INFO: (8) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.794951ms)
Mar 20 22:16:24.175: INFO: (9) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.382258ms)
Mar 20 22:16:24.178: INFO: (10) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.172679ms)
Mar 20 22:16:24.181: INFO: (11) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.225764ms)
Mar 20 22:16:24.184: INFO: (12) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.799354ms)
Mar 20 22:16:24.187: INFO: (13) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.183355ms)
Mar 20 22:16:24.190: INFO: (14) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.263776ms)
Mar 20 22:16:24.194: INFO: (15) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.357ms)
Mar 20 22:16:24.197: INFO: (16) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.518434ms)
Mar 20 22:16:24.201: INFO: (17) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.396006ms)
Mar 20 22:16:24.204: INFO: (18) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.060844ms)
Mar 20 22:16:24.207: INFO: (19) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.857603ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:16:24.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-6836" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":210,"skipped":3328,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:16:24.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Mar 20 22:16:24.282: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:16:38.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4991" for this suite.
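The openapi-publishing steps above can be mimicked by hand: mark one CRD version served: false and confirm its definition drops out of the aggregated spec once it re-converges. A sketch assuming a two-version variant of the hypothetical widgets.example.com CRD from earlier; the version index and definition key are illustrative (for group example.com the OpenAPI key reverses the group segments):

kubectl patch crd widgets.example.com --type=json \
  -p '[{"op":"replace","path":"/spec/versions/1/served","value":false}]'
# once the aggregated spec refreshes, the unserved version's definition is gone
kubectl get --raw /openapi/v2 | grep com.example.v2.Widget || echo "v2 definition removed"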
• [SLOW TEST:14.580 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":211,"skipped":3335,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:16:38.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 20 22:16:38.845: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Mar 20 22:16:40.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1903 create -f -' Mar 20 22:16:43.622: INFO: stderr: "" Mar 20 22:16:43.623: INFO: stdout: "e2e-test-crd-publish-openapi-6604-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 20 22:16:43.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1903 delete e2e-test-crd-publish-openapi-6604-crds test-foo' Mar 20 22:16:43.734: INFO: stderr: "" Mar 20 22:16:43.734: INFO: stdout: "e2e-test-crd-publish-openapi-6604-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Mar 20 22:16:43.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1903 apply -f -' Mar 20 22:16:43.978: INFO: stderr: "" Mar 20 22:16:43.978: INFO: stdout: "e2e-test-crd-publish-openapi-6604-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 20 22:16:43.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1903 delete e2e-test-crd-publish-openapi-6604-crds test-foo' Mar 20 22:16:44.074: INFO: stderr: "" Mar 20 22:16:44.074: INFO: stdout: "e2e-test-crd-publish-openapi-6604-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Mar 20 22:16:44.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1903 create -f -' Mar 20 22:16:44.316: INFO: rc: 1 Mar 20 22:16:44.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1903 apply -f -' Mar 20 22:16:44.538: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) 
rejects request without required properties Mar 20 22:16:44.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1903 create -f -' Mar 20 22:16:44.755: INFO: rc: 1 Mar 20 22:16:44.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1903 apply -f -' Mar 20 22:16:44.972: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Mar 20 22:16:44.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6604-crds' Mar 20 22:16:45.187: INFO: stderr: "" Mar 20 22:16:45.187: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6604-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Mar 20 22:16:45.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6604-crds.metadata' Mar 20 22:16:45.420: INFO: stderr: "" Mar 20 22:16:45.420: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6604-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. 
May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. 
Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Mar 20 22:16:45.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6604-crds.spec' Mar 20 22:16:45.675: INFO: stderr: "" Mar 20 22:16:45.675: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6604-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Mar 20 22:16:45.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6604-crds.spec.bars' Mar 20 22:16:45.901: INFO: stderr: "" Mar 20 22:16:45.901: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6604-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Mar 20 22:16:45.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6604-crds.spec.bars2' Mar 20 22:16:46.134: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:16:49.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1903" for this suite. • [SLOW TEST:10.230 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":212,"skipped":3342,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:16:49.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 20 22:16:49.513: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 20 22:16:51.522: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1,
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720339409, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720339409, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720339409, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720339409, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 20 22:16:54.556: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:16:54.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5737" for this suite. STEP: Destroying namespace "webhook-5737-markers" for this suite. 
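For reference, the rule-patching flow this spec exercises can be reproduced with kubectl alone. A minimal sketch; the configuration name and the single-webhook layout are illustrative, not taken from this run:

# Drop CREATE from the first webhook's first rule: new ConfigMaps bypass it.
kubectl patch mutatingwebhookconfiguration demo-mutating-webhook --type=json \
  -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["UPDATE"]}]'
kubectl create configmap should-not-be-mutated

# Restore CREATE: new ConfigMaps are mutated again.
kubectl patch mutatingwebhookconfiguration demo-mutating-webhook --type=json \
  -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE","UPDATE"]}]'
kubectl create configmap should-be-mutated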
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.756 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":213,"skipped":3342,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:16:54.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-aa2bb8df-1d46-4bb2-84cd-6cdb87b91120 STEP: Creating a pod to test consume secrets Mar 20 22:16:54.871: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a222381d-9f55-4b89-a18f-0f801b864206" in namespace "projected-8915" to be "success or failure" Mar 20 22:16:54.893: INFO: Pod "pod-projected-secrets-a222381d-9f55-4b89-a18f-0f801b864206": Phase="Pending", Reason="", readiness=false. Elapsed: 21.501889ms Mar 20 22:16:56.903: INFO: Pod "pod-projected-secrets-a222381d-9f55-4b89-a18f-0f801b864206": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032103331s Mar 20 22:16:58.906: INFO: Pod "pod-projected-secrets-a222381d-9f55-4b89-a18f-0f801b864206": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035414523s STEP: Saw pod success Mar 20 22:16:58.907: INFO: Pod "pod-projected-secrets-a222381d-9f55-4b89-a18f-0f801b864206" satisfied condition "success or failure" Mar 20 22:16:58.910: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-a222381d-9f55-4b89-a18f-0f801b864206 container secret-volume-test: STEP: delete the pod Mar 20 22:16:58.947: INFO: Waiting for pod pod-projected-secrets-a222381d-9f55-4b89-a18f-0f801b864206 to disappear Mar 20 22:16:58.958: INFO: Pod pod-projected-secrets-a222381d-9f55-4b89-a18f-0f801b864206 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:16:58.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8915" for this suite. 
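The arrangement the projected-secret spec just exercised, one Secret surfaced through two projected volumes in the same pod, can be sketched as a manifest. All names here are illustrative, not from this run:

kubectl create secret generic demo-secret --from-literal=username=admin
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/creds-a/username /etc/creds-b/username"]
    volumeMounts:
    - name: creds-a
      mountPath: /etc/creds-a
    - name: creds-b
      mountPath: /etc/creds-b
  volumes:
  - name: creds-a
    projected:
      sources:
      - secret:
          name: demo-secret
  - name: creds-b
    projected:
      sources:
      - secret:
          name: demo-secret
EOF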
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":214,"skipped":3367,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:16:58.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-cbfb2346-51fb-4592-9320-4bcbd7b97a8c STEP: Creating a pod to test consume secrets Mar 20 22:16:59.071: INFO: Waiting up to 5m0s for pod "pod-secrets-de1c405d-87aa-4ab9-b775-019721bd3c92" in namespace "secrets-9127" to be "success or failure" Mar 20 22:16:59.090: INFO: Pod "pod-secrets-de1c405d-87aa-4ab9-b775-019721bd3c92": Phase="Pending", Reason="", readiness=false. Elapsed: 19.092603ms Mar 20 22:17:01.094: INFO: Pod "pod-secrets-de1c405d-87aa-4ab9-b775-019721bd3c92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023318366s Mar 20 22:17:03.167: INFO: Pod "pod-secrets-de1c405d-87aa-4ab9-b775-019721bd3c92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.095369186s STEP: Saw pod success Mar 20 22:17:03.167: INFO: Pod "pod-secrets-de1c405d-87aa-4ab9-b775-019721bd3c92" satisfied condition "success or failure" Mar 20 22:17:03.170: INFO: Trying to get logs from node jerma-worker pod pod-secrets-de1c405d-87aa-4ab9-b775-019721bd3c92 container secret-volume-test: STEP: delete the pod Mar 20 22:17:03.202: INFO: Waiting for pod pod-secrets-de1c405d-87aa-4ab9-b775-019721bd3c92 to disappear Mar 20 22:17:03.224: INFO: Pod pod-secrets-de1c405d-87aa-4ab9-b775-019721bd3c92 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:17:03.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9127" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":215,"skipped":3371,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:17:03.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Mar 20 22:17:03.324: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3675 /api/v1/namespaces/watch-3675/configmaps/e2e-watch-test-resource-version d474322c-fcc0-4100-96a2-5974f0d980d8 1397102 0 2020-03-20 22:17:03 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 20 22:17:03.324: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3675 /api/v1/namespaces/watch-3675/configmaps/e2e-watch-test-resource-version d474322c-fcc0-4100-96a2-5974f0d980d8 1397103 0 2020-03-20 22:17:03 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:17:03.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3675" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":216,"skipped":3398,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:17:03.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7229 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-7229 I0320 22:17:03.494193 7 runners.go:189] Created replication controller with name: externalname-service, namespace: services-7229, replica count: 2 I0320 22:17:06.544623 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0320 22:17:09.544872 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 20 22:17:09.544: INFO: Creating new exec pod Mar 20 22:17:14.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7229 execpodjvtzz -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 20 22:17:14.847: INFO: stderr: "I0320 22:17:14.747104 3765 log.go:172] (0xc00010aa50) (0xc0005afa40) Create stream\nI0320 22:17:14.747182 3765 log.go:172] (0xc00010aa50) (0xc0005afa40) Stream added, broadcasting: 1\nI0320 22:17:14.751042 3765 log.go:172] (0xc00010aa50) Reply frame received for 1\nI0320 22:17:14.751104 3765 log.go:172] (0xc00010aa50) (0xc000a7a000) Create stream\nI0320 22:17:14.751123 3765 log.go:172] (0xc00010aa50) (0xc000a7a000) Stream added, broadcasting: 3\nI0320 22:17:14.752198 3765 log.go:172] (0xc00010aa50) Reply frame received for 3\nI0320 22:17:14.752252 3765 log.go:172] (0xc00010aa50) (0xc0000c8000) Create stream\nI0320 22:17:14.752269 3765 log.go:172] (0xc00010aa50) (0xc0000c8000) Stream added, broadcasting: 5\nI0320 22:17:14.753407 3765 log.go:172] (0xc00010aa50) Reply frame received for 5\nI0320 22:17:14.839447 3765 log.go:172] (0xc00010aa50) Data frame received for 5\nI0320 22:17:14.839477 3765 log.go:172] (0xc0000c8000) (5) Data frame handling\nI0320 22:17:14.839490 3765 log.go:172] (0xc0000c8000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0320 22:17:14.839756 3765 log.go:172] (0xc00010aa50) Data frame received for 5\nI0320 22:17:14.839772 3765 log.go:172] (0xc0000c8000) (5) Data frame handling\nI0320 22:17:14.839780 3765 log.go:172] (0xc0000c8000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0320 22:17:14.840183 3765 log.go:172] (0xc00010aa50) Data 
frame received for 5\nI0320 22:17:14.840199 3765 log.go:172] (0xc0000c8000) (5) Data frame handling\nI0320 22:17:14.840855 3765 log.go:172] (0xc00010aa50) Data frame received for 3\nI0320 22:17:14.840871 3765 log.go:172] (0xc000a7a000) (3) Data frame handling\nI0320 22:17:14.843813 3765 log.go:172] (0xc00010aa50) Data frame received for 1\nI0320 22:17:14.843837 3765 log.go:172] (0xc0005afa40) (1) Data frame handling\nI0320 22:17:14.843850 3765 log.go:172] (0xc0005afa40) (1) Data frame sent\nI0320 22:17:14.843867 3765 log.go:172] (0xc00010aa50) (0xc0005afa40) Stream removed, broadcasting: 1\nI0320 22:17:14.843895 3765 log.go:172] (0xc00010aa50) Go away received\nI0320 22:17:14.844264 3765 log.go:172] (0xc00010aa50) (0xc0005afa40) Stream removed, broadcasting: 1\nI0320 22:17:14.844281 3765 log.go:172] (0xc00010aa50) (0xc000a7a000) Stream removed, broadcasting: 3\nI0320 22:17:14.844287 3765 log.go:172] (0xc00010aa50) (0xc0000c8000) Stream removed, broadcasting: 5\n" Mar 20 22:17:14.847: INFO: stdout: "" Mar 20 22:17:14.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7229 execpodjvtzz -- /bin/sh -x -c nc -zv -t -w 2 10.111.221.108 80' Mar 20 22:17:15.046: INFO: stderr: "I0320 22:17:14.977850 3788 log.go:172] (0xc000105550) (0xc00064bb80) Create stream\nI0320 22:17:14.977907 3788 log.go:172] (0xc000105550) (0xc00064bb80) Stream added, broadcasting: 1\nI0320 22:17:14.981200 3788 log.go:172] (0xc000105550) Reply frame received for 1\nI0320 22:17:14.981229 3788 log.go:172] (0xc000105550) (0xc00064bc20) Create stream\nI0320 22:17:14.981237 3788 log.go:172] (0xc000105550) (0xc00064bc20) Stream added, broadcasting: 3\nI0320 22:17:14.982118 3788 log.go:172] (0xc000105550) Reply frame received for 3\nI0320 22:17:14.982166 3788 log.go:172] (0xc000105550) (0xc000a0e000) Create stream\nI0320 22:17:14.982184 3788 log.go:172] (0xc000105550) (0xc000a0e000) Stream added, broadcasting: 5\nI0320 22:17:14.983292 3788 log.go:172] (0xc000105550) Reply frame received for 5\nI0320 22:17:15.039689 3788 log.go:172] (0xc000105550) Data frame received for 5\nI0320 22:17:15.039736 3788 log.go:172] (0xc000a0e000) (5) Data frame handling\nI0320 22:17:15.039750 3788 log.go:172] (0xc000a0e000) (5) Data frame sent\nI0320 22:17:15.039761 3788 log.go:172] (0xc000105550) Data frame received for 5\n+ nc -zv -t -w 2 10.111.221.108 80\nConnection to 10.111.221.108 80 port [tcp/http] succeeded!\nI0320 22:17:15.039787 3788 log.go:172] (0xc000105550) Data frame received for 3\nI0320 22:17:15.039824 3788 log.go:172] (0xc00064bc20) (3) Data frame handling\nI0320 22:17:15.039848 3788 log.go:172] (0xc000a0e000) (5) Data frame handling\nI0320 22:17:15.041382 3788 log.go:172] (0xc000105550) Data frame received for 1\nI0320 22:17:15.041406 3788 log.go:172] (0xc00064bb80) (1) Data frame handling\nI0320 22:17:15.041430 3788 log.go:172] (0xc00064bb80) (1) Data frame sent\nI0320 22:17:15.041459 3788 log.go:172] (0xc000105550) (0xc00064bb80) Stream removed, broadcasting: 1\nI0320 22:17:15.041578 3788 log.go:172] (0xc000105550) Go away received\nI0320 22:17:15.041880 3788 log.go:172] (0xc000105550) (0xc00064bb80) Stream removed, broadcasting: 1\nI0320 22:17:15.041899 3788 log.go:172] (0xc000105550) (0xc00064bc20) Stream removed, broadcasting: 3\nI0320 22:17:15.041915 3788 log.go:172] (0xc000105550) (0xc000a0e000) Stream removed, broadcasting: 5\n" Mar 20 22:17:15.046: INFO: stdout: "" Mar 20 22:17:15.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=services-7229 execpodjvtzz -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 31549' Mar 20 22:17:15.255: INFO: stderr: "I0320 22:17:15.174387 3810 log.go:172] (0xc00010ad10) (0xc000719b80) Create stream\nI0320 22:17:15.174447 3810 log.go:172] (0xc00010ad10) (0xc000719b80) Stream added, broadcasting: 1\nI0320 22:17:15.177871 3810 log.go:172] (0xc00010ad10) Reply frame received for 1\nI0320 22:17:15.177922 3810 log.go:172] (0xc00010ad10) (0xc000ae6000) Create stream\nI0320 22:17:15.177935 3810 log.go:172] (0xc00010ad10) (0xc000ae6000) Stream added, broadcasting: 3\nI0320 22:17:15.178930 3810 log.go:172] (0xc00010ad10) Reply frame received for 3\nI0320 22:17:15.178955 3810 log.go:172] (0xc00010ad10) (0xc0006d45a0) Create stream\nI0320 22:17:15.178962 3810 log.go:172] (0xc00010ad10) (0xc0006d45a0) Stream added, broadcasting: 5\nI0320 22:17:15.179812 3810 log.go:172] (0xc00010ad10) Reply frame received for 5\nI0320 22:17:15.250403 3810 log.go:172] (0xc00010ad10) Data frame received for 5\nI0320 22:17:15.250433 3810 log.go:172] (0xc0006d45a0) (5) Data frame handling\nI0320 22:17:15.250445 3810 log.go:172] (0xc0006d45a0) (5) Data frame sent\nI0320 22:17:15.250452 3810 log.go:172] (0xc00010ad10) Data frame received for 5\nI0320 22:17:15.250460 3810 log.go:172] (0xc0006d45a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 31549\nConnection to 172.17.0.10 31549 port [tcp/31549] succeeded!\nI0320 22:17:15.250516 3810 log.go:172] (0xc00010ad10) Data frame received for 3\nI0320 22:17:15.250551 3810 log.go:172] (0xc000ae6000) (3) Data frame handling\nI0320 22:17:15.250588 3810 log.go:172] (0xc0006d45a0) (5) Data frame sent\nI0320 22:17:15.250634 3810 log.go:172] (0xc00010ad10) Data frame received for 5\nI0320 22:17:15.250648 3810 log.go:172] (0xc0006d45a0) (5) Data frame handling\nI0320 22:17:15.252025 3810 log.go:172] (0xc00010ad10) Data frame received for 1\nI0320 22:17:15.252044 3810 log.go:172] (0xc000719b80) (1) Data frame handling\nI0320 22:17:15.252060 3810 log.go:172] (0xc000719b80) (1) Data frame sent\nI0320 22:17:15.252074 3810 log.go:172] (0xc00010ad10) (0xc000719b80) Stream removed, broadcasting: 1\nI0320 22:17:15.252305 3810 log.go:172] (0xc00010ad10) Go away received\nI0320 22:17:15.252395 3810 log.go:172] (0xc00010ad10) (0xc000719b80) Stream removed, broadcasting: 1\nI0320 22:17:15.252411 3810 log.go:172] (0xc00010ad10) (0xc000ae6000) Stream removed, broadcasting: 3\nI0320 22:17:15.252421 3810 log.go:172] (0xc00010ad10) (0xc0006d45a0) Stream removed, broadcasting: 5\n" Mar 20 22:17:15.255: INFO: stdout: "" Mar 20 22:17:15.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7229 execpodjvtzz -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 31549' Mar 20 22:17:15.460: INFO: stderr: "I0320 22:17:15.371994 3833 log.go:172] (0xc000b39130) (0xc000b2e640) Create stream\nI0320 22:17:15.372054 3833 log.go:172] (0xc000b39130) (0xc000b2e640) Stream added, broadcasting: 1\nI0320 22:17:15.374359 3833 log.go:172] (0xc000b39130) Reply frame received for 1\nI0320 22:17:15.374404 3833 log.go:172] (0xc000b39130) (0xc00075a640) Create stream\nI0320 22:17:15.374421 3833 log.go:172] (0xc000b39130) (0xc00075a640) Stream added, broadcasting: 3\nI0320 22:17:15.375203 3833 log.go:172] (0xc000b39130) Reply frame received for 3\nI0320 22:17:15.375297 3833 log.go:172] (0xc000b39130) (0xc000b2e6e0) Create stream\nI0320 22:17:15.375315 3833 log.go:172] (0xc000b39130) (0xc000b2e6e0) Stream added, broadcasting: 5\nI0320 22:17:15.376082 3833 log.go:172] 
(0xc000b39130) Reply frame received for 5\nI0320 22:17:15.453051 3833 log.go:172] (0xc000b39130) Data frame received for 3\nI0320 22:17:15.453102 3833 log.go:172] (0xc00075a640) (3) Data frame handling\nI0320 22:17:15.453237 3833 log.go:172] (0xc000b39130) Data frame received for 5\nI0320 22:17:15.453262 3833 log.go:172] (0xc000b2e6e0) (5) Data frame handling\nI0320 22:17:15.453277 3833 log.go:172] (0xc000b2e6e0) (5) Data frame sent\nI0320 22:17:15.453290 3833 log.go:172] (0xc000b39130) Data frame received for 5\nI0320 22:17:15.453300 3833 log.go:172] (0xc000b2e6e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 31549\nConnection to 172.17.0.8 31549 port [tcp/31549] succeeded!\nI0320 22:17:15.455205 3833 log.go:172] (0xc000b39130) Data frame received for 1\nI0320 22:17:15.455230 3833 log.go:172] (0xc000b2e640) (1) Data frame handling\nI0320 22:17:15.455248 3833 log.go:172] (0xc000b2e640) (1) Data frame sent\nI0320 22:17:15.455426 3833 log.go:172] (0xc000b39130) (0xc000b2e640) Stream removed, broadcasting: 1\nI0320 22:17:15.455561 3833 log.go:172] (0xc000b39130) Go away received\nI0320 22:17:15.455984 3833 log.go:172] (0xc000b39130) (0xc000b2e640) Stream removed, broadcasting: 1\nI0320 22:17:15.456009 3833 log.go:172] (0xc000b39130) (0xc00075a640) Stream removed, broadcasting: 3\nI0320 22:17:15.456027 3833 log.go:172] (0xc000b39130) (0xc000b2e6e0) Stream removed, broadcasting: 5\n" Mar 20 22:17:15.460: INFO: stdout: "" Mar 20 22:17:15.460: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:17:15.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7229" for this suite. 
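Re-typing a Service from ExternalName to NodePort, as this spec does, amounts to swapping spec.type, clearing spec.externalName, and supplying ports and a selector. A sketch with illustrative names; whether all four operations validate as one patch may depend on the server version:

kubectl create service externalname demo-ext --external-name=example.com
kubectl patch service demo-ext --type=json -p='[
  {"op":"replace","path":"/spec/type","value":"NodePort"},
  {"op":"remove","path":"/spec/externalName"},
  {"op":"add","path":"/spec/ports","value":[{"port":80,"targetPort":80}]},
  {"op":"add","path":"/spec/selector","value":{"app":"demo-ext"}}
]'
# Reachability is then probed the same way the test does, from an exec pod:
#   nc -zv -t -w 2 <service-name | cluster-ip | node-ip> <port | nodePort>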
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.276 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":217,"skipped":3405,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:17:15.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-99194bde-f3bc-4617-9f65-19311490f321 STEP: Creating a pod to test consume configMaps Mar 20 22:17:15.699: INFO: Waiting up to 5m0s for pod "pod-configmaps-ec045921-288c-403e-ad92-a3e271d0b1c4" in namespace "configmap-5757" to be "success or failure" Mar 20 22:17:15.708: INFO: Pod "pod-configmaps-ec045921-288c-403e-ad92-a3e271d0b1c4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.728407ms Mar 20 22:17:17.759: INFO: Pod "pod-configmaps-ec045921-288c-403e-ad92-a3e271d0b1c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059931088s Mar 20 22:17:19.763: INFO: Pod "pod-configmaps-ec045921-288c-403e-ad92-a3e271d0b1c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063752751s STEP: Saw pod success Mar 20 22:17:19.763: INFO: Pod "pod-configmaps-ec045921-288c-403e-ad92-a3e271d0b1c4" satisfied condition "success or failure" Mar 20 22:17:19.766: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-ec045921-288c-403e-ad92-a3e271d0b1c4 container configmap-volume-test: STEP: delete the pod Mar 20 22:17:19.802: INFO: Waiting for pod pod-configmaps-ec045921-288c-403e-ad92-a3e271d0b1c4 to disappear Mar 20 22:17:19.815: INFO: Pod pod-configmaps-ec045921-288c-403e-ad92-a3e271d0b1c4 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:17:19.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5757" for this suite. 
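The same-ConfigMap-twice layout above mirrors the earlier projected-secret case: two pod volumes referencing one ConfigMap. A minimal sketch, names illustrative:

kubectl create configmap demo-cm --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-multi-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/cm-a/data-1 /etc/cm-b/data-1"]
    volumeMounts:
    - name: cm-a
      mountPath: /etc/cm-a
    - name: cm-b
      mountPath: /etc/cm-b
  volumes:
  - name: cm-a
    configMap:
      name: demo-cm
  - name: cm-b
    configMap:
      name: demo-cm
EOF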
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3447,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:17:19.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 20 22:17:20.454: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 20 22:17:22.476: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720339440, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720339440, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720339440, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720339440, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 20 22:17:25.532: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:17:25.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9779" for this suite. STEP: Destroying namespace "webhook-9779-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.852 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":219,"skipped":3455,"failed":0} SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:17:25.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-098f55d0-d8f0-431a-be71-e837d7373947 STEP: Creating a pod to test consume secrets Mar 20 22:17:25.771: INFO: Waiting up to 5m0s for pod "pod-secrets-3dbbd1ed-efd5-4d2e-a042-776309278b56" in namespace "secrets-3250" to be "success or failure" Mar 20 22:17:25.780: INFO: Pod "pod-secrets-3dbbd1ed-efd5-4d2e-a042-776309278b56": Phase="Pending", Reason="", readiness=false. Elapsed: 8.661422ms Mar 20 22:17:27.816: INFO: Pod "pod-secrets-3dbbd1ed-efd5-4d2e-a042-776309278b56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044743596s Mar 20 22:17:29.820: INFO: Pod "pod-secrets-3dbbd1ed-efd5-4d2e-a042-776309278b56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048687177s STEP: Saw pod success Mar 20 22:17:29.820: INFO: Pod "pod-secrets-3dbbd1ed-efd5-4d2e-a042-776309278b56" satisfied condition "success or failure" Mar 20 22:17:29.826: INFO: Trying to get logs from node jerma-worker pod pod-secrets-3dbbd1ed-efd5-4d2e-a042-776309278b56 container secret-volume-test: STEP: delete the pod Mar 20 22:17:30.263: INFO: Waiting for pod pod-secrets-3dbbd1ed-efd5-4d2e-a042-776309278b56 to disappear Mar 20 22:17:30.269: INFO: Pod pod-secrets-3dbbd1ed-efd5-4d2e-a042-776309278b56 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:17:30.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3250" for this suite. 
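The "with mappings" variant above renames Secret keys on their way into the volume via the items list. A sketch with illustrative names:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-mapping-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      items:
      - key: data-1            # key in the Secret
        path: new-path-data-1  # file name it appears under in the volume
EOF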
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":220,"skipped":3461,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:17:30.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:17:36.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8325" for this suite. STEP: Destroying namespace "nsdeletetest-7273" for this suite. Mar 20 22:17:36.622: INFO: Namespace nsdeletetest-7273 was already deleted STEP: Destroying namespace "nsdeletetest-988" for this suite. 
• [SLOW TEST:6.370 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":221,"skipped":3476,"failed":0} S ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:17:36.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-5d2dc236-873f-4561-a47d-460ca7cb24a5 STEP: Creating a pod to test consume secrets Mar 20 22:17:36.774: INFO: Waiting up to 5m0s for pod "pod-secrets-62b23440-9366-4426-ba48-54b3aaeed841" in namespace "secrets-5075" to be "success or failure" Mar 20 22:17:36.795: INFO: Pod "pod-secrets-62b23440-9366-4426-ba48-54b3aaeed841": Phase="Pending", Reason="", readiness=false. Elapsed: 20.458066ms Mar 20 22:17:38.799: INFO: Pod "pod-secrets-62b23440-9366-4426-ba48-54b3aaeed841": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024572698s Mar 20 22:17:40.802: INFO: Pod "pod-secrets-62b23440-9366-4426-ba48-54b3aaeed841": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028340289s STEP: Saw pod success Mar 20 22:17:40.803: INFO: Pod "pod-secrets-62b23440-9366-4426-ba48-54b3aaeed841" satisfied condition "success or failure" Mar 20 22:17:40.805: INFO: Trying to get logs from node jerma-worker pod pod-secrets-62b23440-9366-4426-ba48-54b3aaeed841 container secret-volume-test: STEP: delete the pod Mar 20 22:17:40.893: INFO: Waiting for pod pod-secrets-62b23440-9366-4426-ba48-54b3aaeed841 to disappear Mar 20 22:17:40.924: INFO: Pod pod-secrets-62b23440-9366-4426-ba48-54b3aaeed841 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:17:40.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5075" for this suite. STEP: Destroying namespace "secret-namespace-6938" for this suite. 
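Secret names only need to be unique per namespace, and a volume mount always resolves within the pod's own namespace, which is what the spec above relies on. A sketch of that setup, names illustrative:

kubectl create namespace other
kubectl create secret generic shared-name --from-literal=data=from-default
kubectl create secret generic shared-name --from-literal=data=from-other --namespace=other
# A pod in "default" mounting "shared-name" sees only "from-default",
# regardless of what the identically named Secret in "other" contains.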
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":222,"skipped":3477,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:17:40.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 20 22:17:41.775: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 20 22:17:43.794: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720339461, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720339461, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720339461, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720339461, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 20 22:17:46.824: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 20 22:17:46.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9593-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:17:47.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7121" for this suite. STEP: Destroying namespace "webhook-7121-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.051 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":223,"skipped":3512,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:17:48.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:17:53.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2115" for this suite. 
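The ordering property asserted above, that concurrent watchers observe the same events in the same order, can be spot-checked with two raw watches opened at one resourceVersion:

RV=$(kubectl get configmaps --namespace=default -o jsonpath='{.metadata.resourceVersion}')
kubectl get --raw "/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=${RV}" > watch-a.json & PID_A=$!
kubectl get --raw "/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=${RV}" > watch-b.json & PID_B=$!
sleep 30                         # let some events accumulate
kill "$PID_A" "$PID_B"
diff watch-a.json watch-b.json   # no output expected: same events, same order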
• [SLOW TEST:5.706 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":224,"skipped":3547,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:17:53.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-4ee9b4fc-53a3-4900-8d13-c144a08edf37 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:17:53.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1076" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":225,"skipped":3556,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:17:53.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Mar 20 22:18:00.402: INFO: Successfully updated pod "adopt-release-6bp6h" STEP: Checking that the Job readopts the Pod Mar 20 22:18:00.402: INFO: Waiting up to 15m0s for pod "adopt-release-6bp6h" in namespace "job-5128" to be "adopted" Mar 20 22:18:00.461: INFO: Pod "adopt-release-6bp6h": Phase="Running", Reason="", readiness=true. Elapsed: 58.712731ms Mar 20 22:18:02.465: INFO: Pod "adopt-release-6bp6h": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.062888979s Mar 20 22:18:02.465: INFO: Pod "adopt-release-6bp6h" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Mar 20 22:18:02.973: INFO: Successfully updated pod "adopt-release-6bp6h" STEP: Checking that the Job releases the Pod Mar 20 22:18:02.974: INFO: Waiting up to 15m0s for pod "adopt-release-6bp6h" in namespace "job-5128" to be "released" Mar 20 22:18:02.977: INFO: Pod "adopt-release-6bp6h": Phase="Running", Reason="", readiness=true. Elapsed: 3.586049ms Mar 20 22:18:04.981: INFO: Pod "adopt-release-6bp6h": Phase="Running", Reason="", readiness=true. Elapsed: 2.007233965s Mar 20 22:18:04.981: INFO: Pod "adopt-release-6bp6h" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:18:04.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5128" for this suite. • [SLOW TEST:11.185 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":226,"skipped":3592,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:18:04.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 20 22:18:05.646: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 20 22:18:07.700: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720339485, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720339485, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720339485, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720339485, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is 
progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 20 22:18:10.743: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Mar 20 22:18:10.771: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:18:10.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9084" for this suite. STEP: Destroying namespace "webhook-9084-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.895 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":227,"skipped":3640,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:18:10.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 20 22:18:11.751: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 20 22:18:13.762: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720339491, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720339491, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720339491, 
loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720339491, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 20 22:18:16.805: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:18:16.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2175" for this suite. STEP: Destroying namespace "webhook-2175-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.120 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":228,"skipped":3702,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:18:17.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 20 22:18:17.110: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3ebb60ab-312b-45aa-bce0-6c7e15806997" in namespace "downward-api-1" to be "success or failure" Mar 20 22:18:17.114: INFO: Pod "downwardapi-volume-3ebb60ab-312b-45aa-bce0-6c7e15806997": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.676499ms Mar 20 22:18:19.118: INFO: Pod "downwardapi-volume-3ebb60ab-312b-45aa-bce0-6c7e15806997": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008902375s Mar 20 22:18:21.127: INFO: Pod "downwardapi-volume-3ebb60ab-312b-45aa-bce0-6c7e15806997": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017394537s STEP: Saw pod success Mar 20 22:18:21.127: INFO: Pod "downwardapi-volume-3ebb60ab-312b-45aa-bce0-6c7e15806997" satisfied condition "success or failure" Mar 20 22:18:21.130: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-3ebb60ab-312b-45aa-bce0-6c7e15806997 container client-container: STEP: delete the pod Mar 20 22:18:21.143: INFO: Waiting for pod downwardapi-volume-3ebb60ab-312b-45aa-bce0-6c7e15806997 to disappear Mar 20 22:18:21.148: INFO: Pod downwardapi-volume-3ebb60ab-312b-45aa-bce0-6c7e15806997 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:18:21.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":229,"skipped":3711,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:18:21.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-9104 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Mar 20 22:18:21.255: INFO: Found 0 stateful pods, waiting for 3 Mar 20 22:18:31.260: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 20 22:18:31.261: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 20 22:18:31.261: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Mar 20 22:18:31.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9104 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 20 22:18:31.531: INFO: stderr: "I0320 22:18:31.404245 3853 log.go:172] (0xc0000f4e70) (0xc00066c000) Create stream\nI0320 22:18:31.404323 3853 log.go:172] (0xc0000f4e70) (0xc00066c000) Stream added, broadcasting: 1\nI0320 22:18:31.407684 3853 log.go:172] (0xc0000f4e70) Reply frame received for 1\nI0320 22:18:31.407745 3853 log.go:172] 
(0xc0000f4e70) (0xc00065fae0) Create stream\nI0320 22:18:31.407774 3853 log.go:172] (0xc0000f4e70) (0xc00065fae0) Stream added, broadcasting: 3\nI0320 22:18:31.408788 3853 log.go:172] (0xc0000f4e70) Reply frame received for 3\nI0320 22:18:31.408823 3853 log.go:172] (0xc0000f4e70) (0xc000952000) Create stream\nI0320 22:18:31.408838 3853 log.go:172] (0xc0000f4e70) (0xc000952000) Stream added, broadcasting: 5\nI0320 22:18:31.410069 3853 log.go:172] (0xc0000f4e70) Reply frame received for 5\nI0320 22:18:31.499035 3853 log.go:172] (0xc0000f4e70) Data frame received for 5\nI0320 22:18:31.499065 3853 log.go:172] (0xc000952000) (5) Data frame handling\nI0320 22:18:31.499084 3853 log.go:172] (0xc000952000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0320 22:18:31.526602 3853 log.go:172] (0xc0000f4e70) Data frame received for 5\nI0320 22:18:31.526639 3853 log.go:172] (0xc000952000) (5) Data frame handling\nI0320 22:18:31.526662 3853 log.go:172] (0xc0000f4e70) Data frame received for 3\nI0320 22:18:31.526672 3853 log.go:172] (0xc00065fae0) (3) Data frame handling\nI0320 22:18:31.526685 3853 log.go:172] (0xc00065fae0) (3) Data frame sent\nI0320 22:18:31.526694 3853 log.go:172] (0xc0000f4e70) Data frame received for 3\nI0320 22:18:31.526701 3853 log.go:172] (0xc00065fae0) (3) Data frame handling\nI0320 22:18:31.528352 3853 log.go:172] (0xc0000f4e70) Data frame received for 1\nI0320 22:18:31.528373 3853 log.go:172] (0xc00066c000) (1) Data frame handling\nI0320 22:18:31.528394 3853 log.go:172] (0xc00066c000) (1) Data frame sent\nI0320 22:18:31.528409 3853 log.go:172] (0xc0000f4e70) (0xc00066c000) Stream removed, broadcasting: 1\nI0320 22:18:31.528567 3853 log.go:172] (0xc0000f4e70) Go away received\nI0320 22:18:31.528725 3853 log.go:172] (0xc0000f4e70) (0xc00066c000) Stream removed, broadcasting: 1\nI0320 22:18:31.528744 3853 log.go:172] (0xc0000f4e70) (0xc00065fae0) Stream removed, broadcasting: 3\nI0320 22:18:31.528753 3853 log.go:172] (0xc0000f4e70) (0xc000952000) Stream removed, broadcasting: 5\n" Mar 20 22:18:31.532: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 20 22:18:31.532: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 20 22:18:41.561: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 20 22:18:51.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9104 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 20 22:18:52.040: INFO: stderr: "I0320 22:18:51.953702 3876 log.go:172] (0xc0000f4dc0) (0xc0006bfd60) Create stream\nI0320 22:18:51.953772 3876 log.go:172] (0xc0000f4dc0) (0xc0006bfd60) Stream added, broadcasting: 1\nI0320 22:18:51.956328 3876 log.go:172] (0xc0000f4dc0) Reply frame received for 1\nI0320 22:18:51.956383 3876 log.go:172] (0xc0000f4dc0) (0xc00097c000) Create stream\nI0320 22:18:51.956398 3876 log.go:172] (0xc0000f4dc0) (0xc00097c000) Stream added, broadcasting: 3\nI0320 22:18:51.957575 3876 log.go:172] (0xc0000f4dc0) Reply frame received for 3\nI0320 22:18:51.957615 3876 log.go:172] (0xc0000f4dc0) (0xc0002ed4a0) Create stream\nI0320 22:18:51.957626 3876 log.go:172] (0xc0000f4dc0) (0xc0002ed4a0) Stream added, broadcasting: 5\nI0320 
22:18:51.958537 3876 log.go:172] (0xc0000f4dc0) Reply frame received for 5\nI0320 22:18:52.032567 3876 log.go:172] (0xc0000f4dc0) Data frame received for 3\nI0320 22:18:52.032602 3876 log.go:172] (0xc00097c000) (3) Data frame handling\nI0320 22:18:52.032627 3876 log.go:172] (0xc00097c000) (3) Data frame sent\nI0320 22:18:52.032639 3876 log.go:172] (0xc0000f4dc0) Data frame received for 3\nI0320 22:18:52.032649 3876 log.go:172] (0xc00097c000) (3) Data frame handling\nI0320 22:18:52.032714 3876 log.go:172] (0xc0000f4dc0) Data frame received for 5\nI0320 22:18:52.032732 3876 log.go:172] (0xc0002ed4a0) (5) Data frame handling\nI0320 22:18:52.032746 3876 log.go:172] (0xc0002ed4a0) (5) Data frame sent\nI0320 22:18:52.032752 3876 log.go:172] (0xc0000f4dc0) Data frame received for 5\nI0320 22:18:52.032756 3876 log.go:172] (0xc0002ed4a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0320 22:18:52.034890 3876 log.go:172] (0xc0000f4dc0) Data frame received for 1\nI0320 22:18:52.034923 3876 log.go:172] (0xc0006bfd60) (1) Data frame handling\nI0320 22:18:52.034945 3876 log.go:172] (0xc0006bfd60) (1) Data frame sent\nI0320 22:18:52.034971 3876 log.go:172] (0xc0000f4dc0) (0xc0006bfd60) Stream removed, broadcasting: 1\nI0320 22:18:52.034998 3876 log.go:172] (0xc0000f4dc0) Go away received\nI0320 22:18:52.035509 3876 log.go:172] (0xc0000f4dc0) (0xc0006bfd60) Stream removed, broadcasting: 1\nI0320 22:18:52.035537 3876 log.go:172] (0xc0000f4dc0) (0xc00097c000) Stream removed, broadcasting: 3\nI0320 22:18:52.035551 3876 log.go:172] (0xc0000f4dc0) (0xc0002ed4a0) Stream removed, broadcasting: 5\n" Mar 20 22:18:52.040: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 20 22:18:52.040: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 20 22:19:02.071: INFO: Waiting for StatefulSet statefulset-9104/ss2 to complete update Mar 20 22:19:02.071: INFO: Waiting for Pod statefulset-9104/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 20 22:19:02.071: INFO: Waiting for Pod statefulset-9104/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 20 22:19:12.079: INFO: Waiting for StatefulSet statefulset-9104/ss2 to complete update Mar 20 22:19:12.079: INFO: Waiting for Pod statefulset-9104/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 20 22:19:22.078: INFO: Waiting for StatefulSet statefulset-9104/ss2 to complete update STEP: Rolling back to a previous revision Mar 20 22:19:32.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9104 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 20 22:19:32.353: INFO: stderr: "I0320 22:19:32.213038 3899 log.go:172] (0xc0000f4370) (0xc0002254a0) Create stream\nI0320 22:19:32.213094 3899 log.go:172] (0xc0000f4370) (0xc0002254a0) Stream added, broadcasting: 1\nI0320 22:19:32.215813 3899 log.go:172] (0xc0000f4370) Reply frame received for 1\nI0320 22:19:32.215861 3899 log.go:172] (0xc0000f4370) (0xc000972000) Create stream\nI0320 22:19:32.215875 3899 log.go:172] (0xc0000f4370) (0xc000972000) Stream added, broadcasting: 3\nI0320 22:19:32.216904 3899 log.go:172] (0xc0000f4370) Reply frame received for 3\nI0320 22:19:32.217045 3899 log.go:172] (0xc0000f4370) (0xc0006aba40) Create stream\nI0320 22:19:32.217061 3899 log.go:172] (0xc0000f4370) (0xc0006aba40) Stream added, broadcasting: 
5\nI0320 22:19:32.218213 3899 log.go:172] (0xc0000f4370) Reply frame received for 5\nI0320 22:19:32.315717 3899 log.go:172] (0xc0000f4370) Data frame received for 5\nI0320 22:19:32.315744 3899 log.go:172] (0xc0006aba40) (5) Data frame handling\nI0320 22:19:32.315762 3899 log.go:172] (0xc0006aba40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0320 22:19:32.345332 3899 log.go:172] (0xc0000f4370) Data frame received for 3\nI0320 22:19:32.345401 3899 log.go:172] (0xc000972000) (3) Data frame handling\nI0320 22:19:32.345441 3899 log.go:172] (0xc000972000) (3) Data frame sent\nI0320 22:19:32.345531 3899 log.go:172] (0xc0000f4370) Data frame received for 3\nI0320 22:19:32.345628 3899 log.go:172] (0xc000972000) (3) Data frame handling\nI0320 22:19:32.345768 3899 log.go:172] (0xc0000f4370) Data frame received for 5\nI0320 22:19:32.345789 3899 log.go:172] (0xc0006aba40) (5) Data frame handling\nI0320 22:19:32.347970 3899 log.go:172] (0xc0000f4370) Data frame received for 1\nI0320 22:19:32.347984 3899 log.go:172] (0xc0002254a0) (1) Data frame handling\nI0320 22:19:32.347991 3899 log.go:172] (0xc0002254a0) (1) Data frame sent\nI0320 22:19:32.347999 3899 log.go:172] (0xc0000f4370) (0xc0002254a0) Stream removed, broadcasting: 1\nI0320 22:19:32.348008 3899 log.go:172] (0xc0000f4370) Go away received\nI0320 22:19:32.348482 3899 log.go:172] (0xc0000f4370) (0xc0002254a0) Stream removed, broadcasting: 1\nI0320 22:19:32.348511 3899 log.go:172] (0xc0000f4370) (0xc000972000) Stream removed, broadcasting: 3\nI0320 22:19:32.348524 3899 log.go:172] (0xc0000f4370) (0xc0006aba40) Stream removed, broadcasting: 5\n" Mar 20 22:19:32.353: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 20 22:19:32.353: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 20 22:19:42.385: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 20 22:19:52.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9104 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 20 22:19:52.649: INFO: stderr: "I0320 22:19:52.553191 3922 log.go:172] (0xc000a5a000) (0xc0009b0000) Create stream\nI0320 22:19:52.553239 3922 log.go:172] (0xc000a5a000) (0xc0009b0000) Stream added, broadcasting: 1\nI0320 22:19:52.556014 3922 log.go:172] (0xc000a5a000) Reply frame received for 1\nI0320 22:19:52.556051 3922 log.go:172] (0xc000a5a000) (0xc000aba000) Create stream\nI0320 22:19:52.556062 3922 log.go:172] (0xc000a5a000) (0xc000aba000) Stream added, broadcasting: 3\nI0320 22:19:52.558077 3922 log.go:172] (0xc000a5a000) Reply frame received for 3\nI0320 22:19:52.558141 3922 log.go:172] (0xc000a5a000) (0xc000aba0a0) Create stream\nI0320 22:19:52.558159 3922 log.go:172] (0xc000a5a000) (0xc000aba0a0) Stream added, broadcasting: 5\nI0320 22:19:52.562389 3922 log.go:172] (0xc000a5a000) Reply frame received for 5\nI0320 22:19:52.642833 3922 log.go:172] (0xc000a5a000) Data frame received for 3\nI0320 22:19:52.642972 3922 log.go:172] (0xc000aba000) (3) Data frame handling\nI0320 22:19:52.643005 3922 log.go:172] (0xc000aba000) (3) Data frame sent\nI0320 22:19:52.643019 3922 log.go:172] (0xc000a5a000) Data frame received for 3\nI0320 22:19:52.643036 3922 log.go:172] (0xc000aba000) (3) Data frame handling\nI0320 22:19:52.643065 3922 log.go:172] (0xc000a5a000) Data frame received for 5\nI0320 22:19:52.643091 3922 
log.go:172] (0xc000aba0a0) (5) Data frame handling\nI0320 22:19:52.643113 3922 log.go:172] (0xc000aba0a0) (5) Data frame sent\nI0320 22:19:52.643149 3922 log.go:172] (0xc000a5a000) Data frame received for 5\nI0320 22:19:52.643163 3922 log.go:172] (0xc000aba0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0320 22:19:52.644544 3922 log.go:172] (0xc000a5a000) Data frame received for 1\nI0320 22:19:52.644577 3922 log.go:172] (0xc0009b0000) (1) Data frame handling\nI0320 22:19:52.644615 3922 log.go:172] (0xc0009b0000) (1) Data frame sent\nI0320 22:19:52.644637 3922 log.go:172] (0xc000a5a000) (0xc0009b0000) Stream removed, broadcasting: 1\nI0320 22:19:52.644654 3922 log.go:172] (0xc000a5a000) Go away received\nI0320 22:19:52.645237 3922 log.go:172] (0xc000a5a000) (0xc0009b0000) Stream removed, broadcasting: 1\nI0320 22:19:52.645268 3922 log.go:172] (0xc000a5a000) (0xc000aba000) Stream removed, broadcasting: 3\nI0320 22:19:52.645280 3922 log.go:172] (0xc000a5a000) (0xc000aba0a0) Stream removed, broadcasting: 5\n" Mar 20 22:19:52.649: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 20 22:19:52.649: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 20 22:20:02.686: INFO: Waiting for StatefulSet statefulset-9104/ss2 to complete update Mar 20 22:20:02.686: INFO: Waiting for Pod statefulset-9104/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Mar 20 22:20:02.686: INFO: Waiting for Pod statefulset-9104/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Mar 20 22:20:12.693: INFO: Waiting for StatefulSet statefulset-9104/ss2 to complete update Mar 20 22:20:12.694: INFO: Waiting for Pod statefulset-9104/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Mar 20 22:20:22.695: INFO: Waiting for StatefulSet statefulset-9104/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 20 22:20:32.696: INFO: Deleting all statefulset in ns statefulset-9104 Mar 20 22:20:32.699: INFO: Scaling statefulset ss2 to 0 Mar 20 22:21:02.726: INFO: Waiting for statefulset status.replicas updated to 0 Mar 20 22:21:02.729: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:21:02.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9104" for this suite. 
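For reference, the rolling update and rollback exercised above can be reproduced with plain kubectl. A minimal sketch, not the framework's code; the container name webserver is an assumption, while the images, namespace, and revision names are taken from this run:

  # Trigger a rolling update by changing the pod template image
  kubectl -n statefulset-9104 set image statefulset/ss2 webserver=docker.io/library/httpd:2.4.39-alpine
  # Watch the controller replace pods in reverse ordinal order
  kubectl -n statefulset-9104 rollout status statefulset/ss2
  # Each template change produces a new ControllerRevision (ss2-65c7964b94 / ss2-84f9d6bf57 above)
  kubectl -n statefulset-9104 get controllerrevisions
  # Roll back by restoring the previous template image
  kubectl -n statefulset-9104 set image statefulset/ss2 webserver=docker.io/library/httpd:2.4.38-alpine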
• [SLOW TEST:161.597 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":230,"skipped":3739,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:21:02.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-61a594e7-57f7-4284-8886-d4cc8bba5181 STEP: Creating a pod to test consume secrets Mar 20 22:21:02.835: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0d1a4515-b585-4315-be26-5738b37941ab" in namespace "projected-3220" to be "success or failure" Mar 20 22:21:02.855: INFO: Pod "pod-projected-secrets-0d1a4515-b585-4315-be26-5738b37941ab": Phase="Pending", Reason="", readiness=false. Elapsed: 20.110056ms Mar 20 22:21:04.858: INFO: Pod "pod-projected-secrets-0d1a4515-b585-4315-be26-5738b37941ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023517742s Mar 20 22:21:06.863: INFO: Pod "pod-projected-secrets-0d1a4515-b585-4315-be26-5738b37941ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028065138s STEP: Saw pod success Mar 20 22:21:06.863: INFO: Pod "pod-projected-secrets-0d1a4515-b585-4315-be26-5738b37941ab" satisfied condition "success or failure" Mar 20 22:21:06.866: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-0d1a4515-b585-4315-be26-5738b37941ab container projected-secret-volume-test: STEP: delete the pod Mar 20 22:21:06.911: INFO: Waiting for pod pod-projected-secrets-0d1a4515-b585-4315-be26-5738b37941ab to disappear Mar 20 22:21:06.953: INFO: Pod pod-projected-secrets-0d1a4515-b585-4315-be26-5738b37941ab no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:21:06.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3220" for this suite. 
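The pod in this spec mounts a projected secret while running as a non-root user with an fsGroup set. A minimal sketch of such a manifest; the secret name, IDs, path, and mode are illustrative, not the values the framework generates:

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-secret-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000      # non-root
      fsGroup: 2000        # volume files are group-owned by this GID
    containers:
    - name: projected-secret-volume-test
      image: busybox
      command: ["sh", "-c", "ls -ln /etc/projected"]
      volumeMounts:
      - name: creds
        mountPath: /etc/projected
    volumes:
    - name: creds
      projected:
        defaultMode: 0440  # applied to every projected file
        sources:
        - secret:
            name: my-secret
  EOF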
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":231,"skipped":3741,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:21:06.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Mar 20 22:21:07.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8071' Mar 20 22:21:07.362: INFO: stderr: "" Mar 20 22:21:07.362: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 20 22:21:08.470: INFO: Selector matched 1 pods for map[app:agnhost] Mar 20 22:21:08.470: INFO: Found 0 / 1 Mar 20 22:21:09.367: INFO: Selector matched 1 pods for map[app:agnhost] Mar 20 22:21:09.367: INFO: Found 0 / 1 Mar 20 22:21:10.367: INFO: Selector matched 1 pods for map[app:agnhost] Mar 20 22:21:10.367: INFO: Found 0 / 1 Mar 20 22:21:11.367: INFO: Selector matched 1 pods for map[app:agnhost] Mar 20 22:21:11.367: INFO: Found 1 / 1 Mar 20 22:21:11.367: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 20 22:21:11.370: INFO: Selector matched 1 pods for map[app:agnhost] Mar 20 22:21:11.370: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 20 22:21:11.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-c6dss --namespace=kubectl-8071 -p {"metadata":{"annotations":{"x":"y"}}}' Mar 20 22:21:11.476: INFO: stderr: "" Mar 20 22:21:11.476: INFO: stdout: "pod/agnhost-master-c6dss patched\n" STEP: checking annotations Mar 20 22:21:11.485: INFO: Selector matched 1 pods for map[app:agnhost] Mar 20 22:21:11.485: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:21:11.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8071" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":232,"skipped":3757,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:21:11.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:21:22.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4968" for this suite. • [SLOW TEST:11.114 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":233,"skipped":3758,"failed":0} SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:21:22.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
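The spec below creates a pod whose container declares a postStart exec hook. A minimal sketch of such a pod; the image and commands are illustrative:

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-poststart-exec-hook
  spec:
    containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "sleep 600"]
      lifecycle:
        postStart:
          exec:
            # Runs right after the container is created; the kubelet does not
            # mark the container Running until the hook completes.
            command: ["sh", "-c", "echo poststart > /tmp/hook.log"]
  EOF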
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 20 22:21:30.739: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 20 22:21:30.758: INFO: Pod pod-with-poststart-exec-hook still exists Mar 20 22:21:32.758: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 20 22:21:32.762: INFO: Pod pod-with-poststart-exec-hook still exists Mar 20 22:21:34.758: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 20 22:21:34.763: INFO: Pod pod-with-poststart-exec-hook still exists Mar 20 22:21:36.759: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 20 22:21:36.763: INFO: Pod pod-with-poststart-exec-hook still exists Mar 20 22:21:38.758: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 20 22:21:38.762: INFO: Pod pod-with-poststart-exec-hook still exists Mar 20 22:21:40.758: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 20 22:21:40.763: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:21:40.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8772" for this suite. • [SLOW TEST:18.166 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3763,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:21:40.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Mar 20 22:21:40.842: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Mar 20 22:21:49.891: INFO: no pod exists with the name we were looking for, assuming the termination request was 
observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:21:49.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6776" for this suite. • [SLOW TEST:9.130 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3783,"failed":0} [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:21:49.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:22:06.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1477" for this suite. • [SLOW TEST:16.252 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":278,"completed":236,"skipped":3783,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:22:06.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-3b68f237-c741-4e3f-9bb0-0a0da7d3af7e STEP: Creating a pod to test consume configMaps Mar 20 22:22:06.454: INFO: Waiting up to 5m0s for pod "pod-configmaps-9f0130c2-00b8-4b89-994b-fa37417b3de2" in namespace "configmap-8054" to be "success or failure" Mar 20 22:22:06.470: INFO: Pod "pod-configmaps-9f0130c2-00b8-4b89-994b-fa37417b3de2": Phase="Pending", Reason="", readiness=false. Elapsed: 16.192232ms Mar 20 22:22:08.474: INFO: Pod "pod-configmaps-9f0130c2-00b8-4b89-994b-fa37417b3de2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019895481s Mar 20 22:22:10.478: INFO: Pod "pod-configmaps-9f0130c2-00b8-4b89-994b-fa37417b3de2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02431808s STEP: Saw pod success Mar 20 22:22:10.478: INFO: Pod "pod-configmaps-9f0130c2-00b8-4b89-994b-fa37417b3de2" satisfied condition "success or failure" Mar 20 22:22:10.481: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-9f0130c2-00b8-4b89-994b-fa37417b3de2 container configmap-volume-test: STEP: delete the pod Mar 20 22:22:10.544: INFO: Waiting for pod pod-configmaps-9f0130c2-00b8-4b89-994b-fa37417b3de2 to disappear Mar 20 22:22:10.547: INFO: Pod pod-configmaps-9f0130c2-00b8-4b89-994b-fa37417b3de2 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:22:10.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8054" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3829,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:22:10.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-474e7eb5-a91f-4b8e-83d9-0e050e5a66ee STEP: Creating a pod to test consume configMaps Mar 20 22:22:10.615: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b9d0b859-bf6a-45c3-bc3e-a0f57d644207" in namespace "projected-4098" to be "success or failure" Mar 20 22:22:10.625: INFO: Pod "pod-projected-configmaps-b9d0b859-bf6a-45c3-bc3e-a0f57d644207": Phase="Pending", Reason="", readiness=false. Elapsed: 10.219162ms Mar 20 22:22:12.630: INFO: Pod "pod-projected-configmaps-b9d0b859-bf6a-45c3-bc3e-a0f57d644207": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014551949s Mar 20 22:22:14.634: INFO: Pod "pod-projected-configmaps-b9d0b859-bf6a-45c3-bc3e-a0f57d644207": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018608299s STEP: Saw pod success Mar 20 22:22:14.634: INFO: Pod "pod-projected-configmaps-b9d0b859-bf6a-45c3-bc3e-a0f57d644207" satisfied condition "success or failure" Mar 20 22:22:14.637: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-b9d0b859-bf6a-45c3-bc3e-a0f57d644207 container projected-configmap-volume-test: STEP: delete the pod Mar 20 22:22:14.675: INFO: Waiting for pod pod-projected-configmaps-b9d0b859-bf6a-45c3-bc3e-a0f57d644207 to disappear Mar 20 22:22:14.691: INFO: Pod pod-projected-configmaps-b9d0b859-bf6a-45c3-bc3e-a0f57d644207 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:22:14.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4098" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":3877,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:22:14.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 20 22:22:14.789: INFO: Waiting up to 5m0s for pod "downwardapi-volume-234cb8c8-a5e2-4653-ab61-ba289e0faa43" in namespace "downward-api-5972" to be "success or failure" Mar 20 22:22:14.793: INFO: Pod "downwardapi-volume-234cb8c8-a5e2-4653-ab61-ba289e0faa43": Phase="Pending", Reason="", readiness=false. Elapsed: 4.282764ms Mar 20 22:22:16.797: INFO: Pod "downwardapi-volume-234cb8c8-a5e2-4653-ab61-ba289e0faa43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008639551s Mar 20 22:22:18.801: INFO: Pod "downwardapi-volume-234cb8c8-a5e2-4653-ab61-ba289e0faa43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012641534s STEP: Saw pod success Mar 20 22:22:18.801: INFO: Pod "downwardapi-volume-234cb8c8-a5e2-4653-ab61-ba289e0faa43" satisfied condition "success or failure" Mar 20 22:22:18.804: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-234cb8c8-a5e2-4653-ab61-ba289e0faa43 container client-container: STEP: delete the pod Mar 20 22:22:18.838: INFO: Waiting for pod downwardapi-volume-234cb8c8-a5e2-4653-ab61-ba289e0faa43 to disappear Mar 20 22:22:18.853: INFO: Pod downwardapi-volume-234cb8c8-a5e2-4653-ab61-ba289e0faa43 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:22:18.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5972" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":3881,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:22:18.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 20 22:22:18.943: INFO: Waiting up to 5m0s for pod "pod-0f3e8bbf-0ec8-4f0e-bd80-03d31265a7e9" in namespace "emptydir-5051" to be "success or failure" Mar 20 22:22:18.948: INFO: Pod "pod-0f3e8bbf-0ec8-4f0e-bd80-03d31265a7e9": Phase="Pending", Reason="", readiness=false. Elapsed: 5.368707ms Mar 20 22:22:20.953: INFO: Pod "pod-0f3e8bbf-0ec8-4f0e-bd80-03d31265a7e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009934588s Mar 20 22:22:22.957: INFO: Pod "pod-0f3e8bbf-0ec8-4f0e-bd80-03d31265a7e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014220087s STEP: Saw pod success Mar 20 22:22:22.957: INFO: Pod "pod-0f3e8bbf-0ec8-4f0e-bd80-03d31265a7e9" satisfied condition "success or failure" Mar 20 22:22:22.961: INFO: Trying to get logs from node jerma-worker2 pod pod-0f3e8bbf-0ec8-4f0e-bd80-03d31265a7e9 container test-container: STEP: delete the pod Mar 20 22:22:22.980: INFO: Waiting for pod pod-0f3e8bbf-0ec8-4f0e-bd80-03d31265a7e9 to disappear Mar 20 22:22:22.984: INFO: Pod pod-0f3e8bbf-0ec8-4f0e-bd80-03d31265a7e9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:22:22.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5051" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":3884,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:22:22.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 20 22:22:23.089: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b0be6546-e7f3-4b33-b7d3-3c22d8442946" in namespace "projected-4431" to be "success or failure" Mar 20 22:22:23.093: INFO: Pod "downwardapi-volume-b0be6546-e7f3-4b33-b7d3-3c22d8442946": Phase="Pending", Reason="", readiness=false. Elapsed: 3.773068ms Mar 20 22:22:25.098: INFO: Pod "downwardapi-volume-b0be6546-e7f3-4b33-b7d3-3c22d8442946": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008376687s Mar 20 22:22:27.101: INFO: Pod "downwardapi-volume-b0be6546-e7f3-4b33-b7d3-3c22d8442946": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012106564s STEP: Saw pod success Mar 20 22:22:27.101: INFO: Pod "downwardapi-volume-b0be6546-e7f3-4b33-b7d3-3c22d8442946" satisfied condition "success or failure" Mar 20 22:22:27.104: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-b0be6546-e7f3-4b33-b7d3-3c22d8442946 container client-container: STEP: delete the pod Mar 20 22:22:27.136: INFO: Waiting for pod downwardapi-volume-b0be6546-e7f3-4b33-b7d3-3c22d8442946 to disappear Mar 20 22:22:27.147: INFO: Pod downwardapi-volume-b0be6546-e7f3-4b33-b7d3-3c22d8442946 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:22:27.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4431" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":3895,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:22:27.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 20 22:22:27.215: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:22:28.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6039" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":242,"skipped":3910,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:22:28.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 20 22:22:28.961: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 20 22:22:31.030: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720339748, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720339748, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720339749, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720339748, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 20 22:22:34.069: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:22:46.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-890" for this suite. STEP: Destroying namespace "webhook-890-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.050 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":243,"skipped":3924,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:22:46.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-7531 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 20 22:22:46.359: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 20 22:23:12.478: INFO: ExecWithOptions {Command:[/bin/sh -c curl 
-g -q -s 'http://10.244.2.91:8080/dial?request=hostname&protocol=udp&host=10.244.1.53&port=8081&tries=1'] Namespace:pod-network-test-7531 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 20 22:23:12.478: INFO: >>> kubeConfig: /root/.kube/config I0320 22:23:12.511939 7 log.go:172] (0xc002a188f0) (0xc00197c6e0) Create stream I0320 22:23:12.511971 7 log.go:172] (0xc002a188f0) (0xc00197c6e0) Stream added, broadcasting: 1 I0320 22:23:12.514084 7 log.go:172] (0xc002a188f0) Reply frame received for 1 I0320 22:23:12.514126 7 log.go:172] (0xc002a188f0) (0xc001092140) Create stream I0320 22:23:12.514138 7 log.go:172] (0xc002a188f0) (0xc001092140) Stream added, broadcasting: 3 I0320 22:23:12.515173 7 log.go:172] (0xc002a188f0) Reply frame received for 3 I0320 22:23:12.515210 7 log.go:172] (0xc002a188f0) (0xc001eb8500) Create stream I0320 22:23:12.515227 7 log.go:172] (0xc002a188f0) (0xc001eb8500) Stream added, broadcasting: 5 I0320 22:23:12.516248 7 log.go:172] (0xc002a188f0) Reply frame received for 5 I0320 22:23:12.602581 7 log.go:172] (0xc002a188f0) Data frame received for 3 I0320 22:23:12.602617 7 log.go:172] (0xc001092140) (3) Data frame handling I0320 22:23:12.602638 7 log.go:172] (0xc001092140) (3) Data frame sent I0320 22:23:12.603429 7 log.go:172] (0xc002a188f0) Data frame received for 5 I0320 22:23:12.603535 7 log.go:172] (0xc001eb8500) (5) Data frame handling I0320 22:23:12.603855 7 log.go:172] (0xc002a188f0) Data frame received for 3 I0320 22:23:12.603895 7 log.go:172] (0xc001092140) (3) Data frame handling I0320 22:23:12.605840 7 log.go:172] (0xc002a188f0) Data frame received for 1 I0320 22:23:12.605869 7 log.go:172] (0xc00197c6e0) (1) Data frame handling I0320 22:23:12.605887 7 log.go:172] (0xc00197c6e0) (1) Data frame sent I0320 22:23:12.605903 7 log.go:172] (0xc002a188f0) (0xc00197c6e0) Stream removed, broadcasting: 1 I0320 22:23:12.605987 7 log.go:172] (0xc002a188f0) Go away received I0320 22:23:12.606051 7 log.go:172] (0xc002a188f0) (0xc00197c6e0) Stream removed, broadcasting: 1 I0320 22:23:12.606087 7 log.go:172] (0xc002a188f0) (0xc001092140) Stream removed, broadcasting: 3 I0320 22:23:12.606110 7 log.go:172] (0xc002a188f0) (0xc001eb8500) Stream removed, broadcasting: 5 Mar 20 22:23:12.606: INFO: Waiting for responses: map[] Mar 20 22:23:12.609: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.91:8080/dial?request=hostname&protocol=udp&host=10.244.2.90&port=8081&tries=1'] Namespace:pod-network-test-7531 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 20 22:23:12.609: INFO: >>> kubeConfig: /root/.kube/config I0320 22:23:12.644089 7 log.go:172] (0xc0042786e0) (0xc000c70460) Create stream I0320 22:23:12.644123 7 log.go:172] (0xc0042786e0) (0xc000c70460) Stream added, broadcasting: 1 I0320 22:23:12.646162 7 log.go:172] (0xc0042786e0) Reply frame received for 1 I0320 22:23:12.646190 7 log.go:172] (0xc0042786e0) (0xc001092460) Create stream I0320 22:23:12.646200 7 log.go:172] (0xc0042786e0) (0xc001092460) Stream added, broadcasting: 3 I0320 22:23:12.647044 7 log.go:172] (0xc0042786e0) Reply frame received for 3 I0320 22:23:12.647074 7 log.go:172] (0xc0042786e0) (0xc00197cb40) Create stream I0320 22:23:12.647089 7 log.go:172] (0xc0042786e0) (0xc00197cb40) Stream added, broadcasting: 5 I0320 22:23:12.648131 7 log.go:172] (0xc0042786e0) Reply frame received for 5 I0320 22:23:12.719656 7 log.go:172] 
(0xc0042786e0) Data frame received for 3 I0320 22:23:12.719690 7 log.go:172] (0xc001092460) (3) Data frame handling I0320 22:23:12.719714 7 log.go:172] (0xc001092460) (3) Data frame sent I0320 22:23:12.720034 7 log.go:172] (0xc0042786e0) Data frame received for 3 I0320 22:23:12.720083 7 log.go:172] (0xc001092460) (3) Data frame handling I0320 22:23:12.720118 7 log.go:172] (0xc0042786e0) Data frame received for 5 I0320 22:23:12.720140 7 log.go:172] (0xc00197cb40) (5) Data frame handling I0320 22:23:12.722147 7 log.go:172] (0xc0042786e0) Data frame received for 1 I0320 22:23:12.722182 7 log.go:172] (0xc000c70460) (1) Data frame handling I0320 22:23:12.722227 7 log.go:172] (0xc000c70460) (1) Data frame sent I0320 22:23:12.722246 7 log.go:172] (0xc0042786e0) (0xc000c70460) Stream removed, broadcasting: 1 I0320 22:23:12.722329 7 log.go:172] (0xc0042786e0) (0xc000c70460) Stream removed, broadcasting: 1 I0320 22:23:12.722343 7 log.go:172] (0xc0042786e0) (0xc001092460) Stream removed, broadcasting: 3 I0320 22:23:12.722456 7 log.go:172] (0xc0042786e0) Go away received I0320 22:23:12.722515 7 log.go:172] (0xc0042786e0) (0xc00197cb40) Stream removed, broadcasting: 5 Mar 20 22:23:12.722: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:23:12.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7531" for this suite. • [SLOW TEST:26.425 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":244,"skipped":3935,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:23:12.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode Mar 20 22:23:12.827: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6545" to be "success or failure" Mar 20 22:23:12.830: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.903084ms Mar 20 22:23:14.834: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006641493s Mar 20 22:23:16.837: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010501433s STEP: Saw pod success Mar 20 22:23:16.838: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Mar 20 22:23:16.840: INFO: Trying to get logs from node jerma-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Mar 20 22:23:16.978: INFO: Waiting for pod pod-host-path-test to disappear Mar 20 22:23:16.992: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:23:16.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-6545" for this suite. •{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":245,"skipped":3947,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:23:16.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 20 22:23:21.207: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:23:21.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3043" for this suite. 
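
The case above can be reproduced by hand. A minimal sketch of such a pod (name, image, and path are illustrative, not taken from the log): it runs as a non-root user and writes its termination message to a non-default terminationMessagePath, and the asserted value is the same DONE the framework matched above.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo     # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                  # non-root, per the test title
  containers:
  - name: main
    image: busybox:1.29              # illustrative image
    command: ["/bin/sh", "-c", "echo -n DONE > /dev/termination-custom"]
    terminationMessagePath: /dev/termination-custom   # non-default path; the kubelet creates the file world-writable
EOF
# Once the container exits, the message surfaces in status:
kubectl get pod termination-message-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'   # prints: DONE
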
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":3977,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:23:21.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions Mar 20 22:23:21.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Mar 20 22:23:21.539: INFO: stderr: "" Mar 20 22:23:21.539: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:23:21.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3305" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":247,"skipped":4000,"failed":0} ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:23:21.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller Mar 20 22:23:21.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6638' Mar 20 22:23:21.920: INFO: stderr: "" Mar 20 22:23:21.920: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 20 22:23:21.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6638' Mar 20 22:23:22.023: INFO: stderr: "" Mar 20 22:23:22.024: INFO: stdout: "update-demo-nautilus-c8gpm update-demo-nautilus-c8ht8 " Mar 20 22:23:22.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c8gpm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6638' Mar 20 22:23:22.105: INFO: stderr: "" Mar 20 22:23:22.105: INFO: stdout: "" Mar 20 22:23:22.105: INFO: update-demo-nautilus-c8gpm is created but not running Mar 20 22:23:27.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6638' Mar 20 22:23:27.221: INFO: stderr: "" Mar 20 22:23:27.221: INFO: stdout: "update-demo-nautilus-c8gpm update-demo-nautilus-c8ht8 " Mar 20 22:23:27.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c8gpm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6638' Mar 20 22:23:27.329: INFO: stderr: "" Mar 20 22:23:27.329: INFO: stdout: "true" Mar 20 22:23:27.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c8gpm -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6638' Mar 20 22:23:27.426: INFO: stderr: "" Mar 20 22:23:27.426: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 20 22:23:27.426: INFO: validating pod update-demo-nautilus-c8gpm Mar 20 22:23:27.430: INFO: got data: { "image": "nautilus.jpg" } Mar 20 22:23:27.430: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 20 22:23:27.430: INFO: update-demo-nautilus-c8gpm is verified up and running Mar 20 22:23:27.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c8ht8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6638' Mar 20 22:23:27.527: INFO: stderr: "" Mar 20 22:23:27.527: INFO: stdout: "true" Mar 20 22:23:27.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c8ht8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6638' Mar 20 22:23:27.626: INFO: stderr: "" Mar 20 22:23:27.626: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 20 22:23:27.626: INFO: validating pod update-demo-nautilus-c8ht8 Mar 20 22:23:27.647: INFO: got data: { "image": "nautilus.jpg" } Mar 20 22:23:27.647: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 20 22:23:27.647: INFO: update-demo-nautilus-c8ht8 is verified up and running STEP: rolling-update to new replication controller Mar 20 22:23:27.651: INFO: scanned /root for discovery docs: Mar 20 22:23:27.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-6638' Mar 20 22:23:50.272: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 20 22:23:50.272: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 20 22:23:50.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6638' Mar 20 22:23:50.376: INFO: stderr: "" Mar 20 22:23:50.376: INFO: stdout: "update-demo-kitten-gcckv update-demo-kitten-pcwx5 " Mar 20 22:23:50.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-gcckv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6638' Mar 20 22:23:50.466: INFO: stderr: "" Mar 20 22:23:50.466: INFO: stdout: "true" Mar 20 22:23:50.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-gcckv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6638' Mar 20 22:23:50.557: INFO: stderr: "" Mar 20 22:23:50.557: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 20 22:23:50.557: INFO: validating pod update-demo-kitten-gcckv Mar 20 22:23:50.561: INFO: got data: { "image": "kitten.jpg" } Mar 20 22:23:50.561: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 20 22:23:50.561: INFO: update-demo-kitten-gcckv is verified up and running Mar 20 22:23:50.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-pcwx5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6638' Mar 20 22:23:50.642: INFO: stderr: "" Mar 20 22:23:50.642: INFO: stdout: "true" Mar 20 22:23:50.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-pcwx5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6638' Mar 20 22:23:50.738: INFO: stderr: "" Mar 20 22:23:50.738: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 20 22:23:50.738: INFO: validating pod update-demo-kitten-pcwx5 Mar 20 22:23:50.742: INFO: got data: { "image": "kitten.jpg" } Mar 20 22:23:50.742: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 20 22:23:50.742: INFO: update-demo-kitten-pcwx5 is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:23:50.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6638" for this suite. • [SLOW TEST:29.201 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:328 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":248,"skipped":4000,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:23:50.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:24:07.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6272" for this suite. • [SLOW TEST:17.149 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":249,"skipped":4004,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:24:07.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 20 22:24:16.078: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 20 22:24:16.084: INFO: Pod pod-with-prestop-exec-hook still exists Mar 20 22:24:18.084: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 20 22:24:18.137: INFO: Pod pod-with-prestop-exec-hook still exists Mar 20 22:24:20.084: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 20 22:24:20.089: INFO: Pod pod-with-prestop-exec-hook still exists Mar 20 22:24:22.084: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 20 22:24:22.088: INFO: Pod pod-with-prestop-exec-hook still exists Mar 20 22:24:24.084: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 20 22:24:24.088: INFO: Pod pod-with-prestop-exec-hook still exists Mar 20 22:24:26.084: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 20 22:24:26.087: INFO: Pod pod-with-prestop-exec-hook still exists Mar 20 22:24:28.084: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 20 22:24:28.089: INFO: Pod pod-with-prestop-exec-hook still exists Mar 20 22:24:30.084: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 20 22:24:30.088: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:24:30.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4506" for this suite. 
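
A minimal sketch of a pod with a preStop exec hook (name, image, and the sleep are illustrative; the suite's real hook calls back to the handler pod created in BeforeEach). The hook runs when deletion starts, and the pod stays Terminating until it returns, which is why the log above polls "Waiting for pod ... to disappear" for several seconds:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo                 # illustrative name
spec:
  terminationGracePeriodSeconds: 30  # upper bound on hook + shutdown
  containers:
  - name: main
    image: nginx:1.17                # illustrative image
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "sleep 10"]   # stand-in for real cleanup work
EOF
kubectl delete pod prestop-demo      # blocks in Terminating while the hook runs
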
• [SLOW TEST:22.218 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":4031,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:24:30.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0320 22:24:31.276513 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 20 22:24:31.276: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:24:31.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1596" for this suite. 
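
What this test drives is ordinary cascading deletion: the Deployment owns its ReplicaSet and the ReplicaSet owns its Pods via metadata.ownerReferences, so deleting the Deployment lets the garbage collector sweep both (the "expected 0 rs, got 1 rs" step above is the test polling until that happens). With the v1.17-era kubectl used in this run, orphaning is the opt-out (names illustrative):

kubectl create deployment gc-demo --image=nginx    # illustrative name/image
kubectl delete deployment gc-demo                  # default: dependents are garbage-collected
# or, to orphan the ReplicaSet and Pods instead (v1.17-era boolean flag):
# kubectl delete deployment gc-demo --cascade=false
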
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":251,"skipped":4038,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:24:31.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 20 22:24:31.413: INFO: Waiting up to 5m0s for pod "downwardapi-volume-509e16e2-d9c5-4b24-82f9-29e56a4f65e1" in namespace "downward-api-2930" to be "success or failure" Mar 20 22:24:31.419: INFO: Pod "downwardapi-volume-509e16e2-d9c5-4b24-82f9-29e56a4f65e1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.398817ms Mar 20 22:24:33.423: INFO: Pod "downwardapi-volume-509e16e2-d9c5-4b24-82f9-29e56a4f65e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009898528s Mar 20 22:24:35.427: INFO: Pod "downwardapi-volume-509e16e2-d9c5-4b24-82f9-29e56a4f65e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013603142s STEP: Saw pod success Mar 20 22:24:35.427: INFO: Pod "downwardapi-volume-509e16e2-d9c5-4b24-82f9-29e56a4f65e1" satisfied condition "success or failure" Mar 20 22:24:35.430: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-509e16e2-d9c5-4b24-82f9-29e56a4f65e1 container client-container: STEP: delete the pod Mar 20 22:24:35.550: INFO: Waiting for pod downwardapi-volume-509e16e2-d9c5-4b24-82f9-29e56a4f65e1 to disappear Mar 20 22:24:35.569: INFO: Pod downwardapi-volume-509e16e2-d9c5-4b24-82f9-29e56a4f65e1 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:24:35.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2930" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":252,"skipped":4076,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:24:35.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:24:46.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4897" for this suite. • [SLOW TEST:11.241 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":253,"skipped":4079,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:24:46.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 20 22:24:47.270: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 20 22:24:49.281: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720339887, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720339887, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720339887, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720339887, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 20 22:24:52.319: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:24:52.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9133" for this suite. STEP: Destroying namespace "webhook-9133-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.071 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":254,"skipped":4118,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:24:52.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 20 22:24:53.679: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 20 22:24:55.688: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720339893, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720339893, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720339893, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720339893, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 20 22:24:58.724: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:24:58.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5033" for this suite. STEP: Destroying namespace "webhook-5033-markers" for this suite. 
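
The rule exercised here is that the admission chain never intercepts ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects themselves, so even a deny-everything webhook cannot mutate them or block their removal; cleanup like the following always goes through (names illustrative):

kubectl delete validatingwebhookconfiguration deny-everything-demo
kubectl delete mutatingwebhookconfiguration mutate-everything-demo
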
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.101 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":255,"skipped":4159,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:24:58.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 20 22:25:02.165: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:25:02.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6348" for this suite. 
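
For the case above, a sketch of the spec shape (names illustrative): with terminationMessagePolicy: FallbackToLogsOnError the log tail is used as the termination message only when the container fails, so a pod that succeeds without writing /dev/termination-log ends with an empty message, matching the "Expected: &{} to match" assertion above:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: fallback-demo                # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29              # illustrative image
    command: ["/bin/sh", "-c", "echo this only reaches the log; exit 0"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
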
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4181,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:25:02.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 20 22:25:02.328: INFO: Waiting up to 5m0s for pod "downwardapi-volume-263107f3-05ef-4ef7-97ca-fa98761a2cfd" in namespace "projected-2647" to be "success or failure" Mar 20 22:25:02.332: INFO: Pod "downwardapi-volume-263107f3-05ef-4ef7-97ca-fa98761a2cfd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.987272ms Mar 20 22:25:04.335: INFO: Pod "downwardapi-volume-263107f3-05ef-4ef7-97ca-fa98761a2cfd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007233416s Mar 20 22:25:06.344: INFO: Pod "downwardapi-volume-263107f3-05ef-4ef7-97ca-fa98761a2cfd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016394632s STEP: Saw pod success Mar 20 22:25:06.344: INFO: Pod "downwardapi-volume-263107f3-05ef-4ef7-97ca-fa98761a2cfd" satisfied condition "success or failure" Mar 20 22:25:06.350: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-263107f3-05ef-4ef7-97ca-fa98761a2cfd container client-container: STEP: delete the pod Mar 20 22:25:06.380: INFO: Waiting for pod downwardapi-volume-263107f3-05ef-4ef7-97ca-fa98761a2cfd to disappear Mar 20 22:25:06.425: INFO: Pod downwardapi-volume-263107f3-05ef-4ef7-97ca-fa98761a2cfd no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:25:06.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2647" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":257,"skipped":4208,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:25:06.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1788 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 20 22:25:06.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-900' Mar 20 22:25:06.586: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 20 22:25:06.586: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1793 Mar 20 22:25:06.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-900' Mar 20 22:25:06.722: INFO: stderr: "" Mar 20 22:25:06.722: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:25:06.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-900" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":258,"skipped":4216,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:25:06.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Mar 20 22:25:06.786: INFO: Created pod &Pod{ObjectMeta:{dns-8142 dns-8142 /api/v1/namespaces/dns-8142/pods/dns-8142 a1b6e6b2-252f-43a3-b276-aa51bc76100b 1400584 0 2020-03-20 22:25:06 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vq7q8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vq7q8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vq7q8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... 
Mar 20 22:25:10.802: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-8142 PodName:dns-8142 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 20 22:25:10.802: INFO: >>> kubeConfig: /root/.kube/config I0320 22:25:10.839512 7 log.go:172] (0xc001b58c60) (0xc00197dc20) Create stream I0320 22:25:10.839543 7 log.go:172] (0xc001b58c60) (0xc00197dc20) Stream added, broadcasting: 1 I0320 22:25:10.841353 7 log.go:172] (0xc001b58c60) Reply frame received for 1 I0320 22:25:10.841408 7 log.go:172] (0xc001b58c60) (0xc0010926e0) Create stream I0320 22:25:10.841427 7 log.go:172] (0xc001b58c60) (0xc0010926e0) Stream added, broadcasting: 3 I0320 22:25:10.842477 7 log.go:172] (0xc001b58c60) Reply frame received for 3 I0320 22:25:10.842513 7 log.go:172] (0xc001b58c60) (0xc00197dd60) Create stream I0320 22:25:10.842525 7 log.go:172] (0xc001b58c60) (0xc00197dd60) Stream added, broadcasting: 5 I0320 22:25:10.843730 7 log.go:172] (0xc001b58c60) Reply frame received for 5 I0320 22:25:10.944280 7 log.go:172] (0xc001b58c60) Data frame received for 3 I0320 22:25:10.944313 7 log.go:172] (0xc0010926e0) (3) Data frame handling I0320 22:25:10.944333 7 log.go:172] (0xc0010926e0) (3) Data frame sent I0320 22:25:10.944842 7 log.go:172] (0xc001b58c60) Data frame received for 3 I0320 22:25:10.944858 7 log.go:172] (0xc0010926e0) (3) Data frame handling I0320 22:25:10.945068 7 log.go:172] (0xc001b58c60) Data frame received for 5 I0320 22:25:10.945106 7 log.go:172] (0xc00197dd60) (5) Data frame handling I0320 22:25:10.946939 7 log.go:172] (0xc001b58c60) Data frame received for 1 I0320 22:25:10.947022 7 log.go:172] (0xc00197dc20) (1) Data frame handling I0320 22:25:10.947066 7 log.go:172] (0xc00197dc20) (1) Data frame sent I0320 22:25:10.947108 7 log.go:172] (0xc001b58c60) (0xc00197dc20) Stream removed, broadcasting: 1 I0320 22:25:10.947152 7 log.go:172] (0xc001b58c60) Go away received I0320 22:25:10.947227 7 log.go:172] (0xc001b58c60) (0xc00197dc20) Stream removed, broadcasting: 1 I0320 22:25:10.947242 7 log.go:172] (0xc001b58c60) (0xc0010926e0) Stream removed, broadcasting: 3 I0320 22:25:10.947248 7 log.go:172] (0xc001b58c60) (0xc00197dd60) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Mar 20 22:25:10.947: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-8142 PodName:dns-8142 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 20 22:25:10.947: INFO: >>> kubeConfig: /root/.kube/config I0320 22:25:10.978621 7 log.go:172] (0xc0042789a0) (0xc001eb8dc0) Create stream I0320 22:25:10.978645 7 log.go:172] (0xc0042789a0) (0xc001eb8dc0) Stream added, broadcasting: 1 I0320 22:25:10.982393 7 log.go:172] (0xc0042789a0) Reply frame received for 1 I0320 22:25:10.982471 7 log.go:172] (0xc0042789a0) (0xc00197dea0) Create stream I0320 22:25:10.982512 7 log.go:172] (0xc0042789a0) (0xc00197dea0) Stream added, broadcasting: 3 I0320 22:25:10.984111 7 log.go:172] (0xc0042789a0) Reply frame received for 3 I0320 22:25:10.984183 7 log.go:172] (0xc0042789a0) (0xc001f9d860) Create stream I0320 22:25:10.984202 7 log.go:172] (0xc0042789a0) (0xc001f9d860) Stream added, broadcasting: 5 I0320 22:25:10.986098 7 log.go:172] (0xc0042789a0) Reply frame received for 5 I0320 22:25:11.052690 7 log.go:172] (0xc0042789a0) Data frame received for 3 I0320 22:25:11.052724 7 log.go:172] (0xc00197dea0) (3) Data frame handling I0320 22:25:11.052745 7 log.go:172] (0xc00197dea0) (3) Data frame sent I0320 22:25:11.053740 7 log.go:172] (0xc0042789a0) Data frame received for 3 I0320 22:25:11.053778 7 log.go:172] (0xc00197dea0) (3) Data frame handling I0320 22:25:11.053801 7 log.go:172] (0xc0042789a0) Data frame received for 5 I0320 22:25:11.053814 7 log.go:172] (0xc001f9d860) (5) Data frame handling I0320 22:25:11.055547 7 log.go:172] (0xc0042789a0) Data frame received for 1 I0320 22:25:11.055567 7 log.go:172] (0xc001eb8dc0) (1) Data frame handling I0320 22:25:11.055577 7 log.go:172] (0xc001eb8dc0) (1) Data frame sent I0320 22:25:11.055588 7 log.go:172] (0xc0042789a0) (0xc001eb8dc0) Stream removed, broadcasting: 1 I0320 22:25:11.055630 7 log.go:172] (0xc0042789a0) Go away received I0320 22:25:11.055686 7 log.go:172] (0xc0042789a0) (0xc001eb8dc0) Stream removed, broadcasting: 1 I0320 22:25:11.055722 7 log.go:172] (0xc0042789a0) (0xc00197dea0) Stream removed, broadcasting: 3 I0320 22:25:11.055735 7 log.go:172] (0xc0042789a0) (0xc001f9d860) Stream removed, broadcasting: 5 Mar 20 22:25:11.055: INFO: Deleting pod dns-8142... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:25:11.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8142" for this suite. 
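
Stripped of the framework's SPDY plumbing (the "Create stream" / "Data frame" lines above), the two verifications are plain execs against the agnhost binary baked into the test image; run by hand against the sketch pod above, they would look like:

kubectl exec dns-config-demo -- /agnhost dns-suffix        # reports the pod's DNS search list
kubectl exec dns-config-demo -- /agnhost dns-server-list   # reports the pod's configured nameservers
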
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":259,"skipped":4255,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:25:11.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-4669ba30-c87b-42cb-b726-57e94f91521b STEP: Creating a pod to test consume secrets Mar 20 22:25:11.361: INFO: Waiting up to 5m0s for pod "pod-secrets-e93dbdd9-f7b0-40fb-9f76-5ebd5d024744" in namespace "secrets-8270" to be "success or failure" Mar 20 22:25:11.491: INFO: Pod "pod-secrets-e93dbdd9-f7b0-40fb-9f76-5ebd5d024744": Phase="Pending", Reason="", readiness=false. Elapsed: 129.481749ms Mar 20 22:25:13.521: INFO: Pod "pod-secrets-e93dbdd9-f7b0-40fb-9f76-5ebd5d024744": Phase="Pending", Reason="", readiness=false. Elapsed: 2.159666665s Mar 20 22:25:15.525: INFO: Pod "pod-secrets-e93dbdd9-f7b0-40fb-9f76-5ebd5d024744": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.16400603s STEP: Saw pod success Mar 20 22:25:15.525: INFO: Pod "pod-secrets-e93dbdd9-f7b0-40fb-9f76-5ebd5d024744" satisfied condition "success or failure" Mar 20 22:25:15.528: INFO: Trying to get logs from node jerma-worker pod pod-secrets-e93dbdd9-f7b0-40fb-9f76-5ebd5d024744 container secret-volume-test: STEP: delete the pod Mar 20 22:25:15.594: INFO: Waiting for pod pod-secrets-e93dbdd9-f7b0-40fb-9f76-5ebd5d024744 to disappear Mar 20 22:25:15.601: INFO: Pod pod-secrets-e93dbdd9-f7b0-40fb-9f76-5ebd5d024744 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:25:15.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8270" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":260,"skipped":4256,"failed":0} SS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:25:15.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Mar 20 22:25:16.222: INFO: Pod name wrapped-volume-race-64498bb2-93fb-4033-8e5d-ed3970f75cb5: Found 0 pods out of 5 Mar 20 22:25:21.228: INFO: Pod name wrapped-volume-race-64498bb2-93fb-4033-8e5d-ed3970f75cb5: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-64498bb2-93fb-4033-8e5d-ed3970f75cb5 in namespace emptydir-wrapper-8977, will wait for the garbage collector to delete the pods Mar 20 22:25:35.325: INFO: Deleting ReplicationController wrapped-volume-race-64498bb2-93fb-4033-8e5d-ed3970f75cb5 took: 7.071324ms Mar 20 22:25:35.726: INFO: Terminating ReplicationController wrapped-volume-race-64498bb2-93fb-4033-8e5d-ed3970f75cb5 pods took: 400.306112ms STEP: Creating RC which spawns configmap-volume pods Mar 20 22:25:50.554: INFO: Pod name wrapped-volume-race-4fc31675-853a-4323-b5c4-b6d66ae52c79: Found 0 pods out of 5 Mar 20 22:25:55.562: INFO: Pod name wrapped-volume-race-4fc31675-853a-4323-b5c4-b6d66ae52c79: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-4fc31675-853a-4323-b5c4-b6d66ae52c79 in namespace emptydir-wrapper-8977, will wait for the garbage collector to delete the pods Mar 20 22:26:09.671: INFO: Deleting ReplicationController wrapped-volume-race-4fc31675-853a-4323-b5c4-b6d66ae52c79 took: 27.99652ms Mar 20 22:26:09.971: INFO: Terminating ReplicationController wrapped-volume-race-4fc31675-853a-4323-b5c4-b6d66ae52c79 pods took: 300.237578ms STEP: Creating RC which spawns configmap-volume pods Mar 20 22:26:20.605: INFO: Pod name wrapped-volume-race-3870d5a8-e31f-4d49-9777-300dc1c28558: Found 0 pods out of 5 Mar 20 22:26:25.612: INFO: Pod name wrapped-volume-race-3870d5a8-e31f-4d49-9777-300dc1c28558: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-3870d5a8-e31f-4d49-9777-300dc1c28558 in namespace emptydir-wrapper-8977, will wait for the garbage collector to delete the pods Mar 20 22:26:39.711: INFO: Deleting ReplicationController wrapped-volume-race-3870d5a8-e31f-4d49-9777-300dc1c28558 took: 23.939236ms Mar 20 22:26:40.111: INFO: Terminating ReplicationController wrapped-volume-race-3870d5a8-e31f-4d49-9777-300dc1c28558 pods took: 400.298689ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 
22:26:51.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-8977" for this suite. • [SLOW TEST:95.474 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":261,"skipped":4258,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:26:51.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 20 22:26:55.716: INFO: Successfully updated pod "annotationupdate8a642826-8c42-48ad-a94e-a77aa8de33de" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:26:57.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7006" for this suite. 
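The projected downwardAPI test above verifies that annotation edits propagate into a running pod's projected volume. A minimal sketch of the same mechanism, assuming the default namespace; names and image are illustrative, and the kubelet refreshes the file on its sync period rather than instantly:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    build: "1"
spec:
  containers:
  - name: client-container
    image: busybox:1.29
    # Re-read the projected annotations file so updates become visible in the log.
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF
kubectl annotate pod annotationupdate-demo build=2 --overwrite
kubectl logs -f annotationupdate-demo   # a build="2" line appears after the next kubelet sync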
• [SLOW TEST:6.729 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4274,"failed":0} SSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:26:57.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9940.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-9940.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9940.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9940.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-9940.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9940.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 20 22:27:04.039: INFO: DNS probes using dns-9940/dns-test-a693575b-69ec-4c2d-b11a-3e6f6e0fafbc succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:27:04.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9940" for this suite. 
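The DNS test above exercises the rule that a pod gains an A record <hostname>.<subdomain>.<namespace>.svc.cluster.local when its spec.subdomain matches a headless service that selects it. A minimal sketch, assuming the default namespace; all names are illustrative, and busybox:1.28 is chosen because nslookup in newer busybox builds is known to be unreliable:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: sub
spec:
  clusterIP: None        # headless; required for per-pod hostname records
  selector:
    app: dns-demo
  ports:
  - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-demo
  labels:
    app: dns-demo
spec:
  hostname: querier
  subdomain: sub         # must equal the headless service name
  containers:
  - name: main
    image: busybox:1.28
    command: ["sleep", "3600"]
EOF
# Resolve the pod's record from inside the cluster (the FQDN avoids search-path quirks):
kubectl exec dns-demo -- nslookup querier.sub.default.svc.cluster.local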
• [SLOW TEST:6.413 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":263,"skipped":4277,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:27:04.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:27:08.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9022" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4285,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:27:08.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 20 22:27:08.760: INFO: Waiting up to 5m0s for pod "downwardapi-volume-deb52c51-86b7-4748-8ac6-74b6c16e1119" in namespace "downward-api-2896" to be "success or failure" Mar 20 22:27:08.776: INFO: Pod "downwardapi-volume-deb52c51-86b7-4748-8ac6-74b6c16e1119": Phase="Pending", Reason="", readiness=false. Elapsed: 15.498744ms Mar 20 22:27:10.780: INFO: Pod "downwardapi-volume-deb52c51-86b7-4748-8ac6-74b6c16e1119": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019351224s Mar 20 22:27:12.784: INFO: Pod "downwardapi-volume-deb52c51-86b7-4748-8ac6-74b6c16e1119": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023500785s STEP: Saw pod success Mar 20 22:27:12.784: INFO: Pod "downwardapi-volume-deb52c51-86b7-4748-8ac6-74b6c16e1119" satisfied condition "success or failure" Mar 20 22:27:12.787: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-deb52c51-86b7-4748-8ac6-74b6c16e1119 container client-container: STEP: delete the pod Mar 20 22:27:12.806: INFO: Waiting for pod downwardapi-volume-deb52c51-86b7-4748-8ac6-74b6c16e1119 to disappear Mar 20 22:27:12.810: INFO: Pod downwardapi-volume-deb52c51-86b7-4748-8ac6-74b6c16e1119 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:27:12.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2896" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4288,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:27:12.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-270c720a-d07b-4edf-9ba1-4b10c93e8758 Mar 20 22:27:12.908: INFO: Pod name my-hostname-basic-270c720a-d07b-4edf-9ba1-4b10c93e8758: Found 0 pods out of 1 Mar 20 22:27:17.913: INFO: Pod name my-hostname-basic-270c720a-d07b-4edf-9ba1-4b10c93e8758: Found 1 pods out of 1 Mar 20 22:27:17.913: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-270c720a-d07b-4edf-9ba1-4b10c93e8758" are running Mar 20 22:27:17.915: INFO: Pod "my-hostname-basic-270c720a-d07b-4edf-9ba1-4b10c93e8758-pltxv" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-20 22:27:12 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-20 22:27:16 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-20 22:27:16 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-20 22:27:12 +0000 UTC Reason: Message:}]) Mar 20 22:27:17.915: INFO: Trying to dial the pod Mar 20 22:27:22.926: INFO: Controller my-hostname-basic-270c720a-d07b-4edf-9ba1-4b10c93e8758: Got expected result from replica 1 [my-hostname-basic-270c720a-d07b-4edf-9ba1-4b10c93e8758-pltxv]: "my-hostname-basic-270c720a-d07b-4edf-9ba1-4b10c93e8758-pltxv", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:27:22.926: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "replication-controller-2126" for this suite. • [SLOW TEST:10.121 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":266,"skipped":4305,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:27:22.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 20 22:27:23.040: INFO: Waiting up to 5m0s for pod "downward-api-a7598de0-0317-4a80-9ec7-d0cbb55e4d76" in namespace "downward-api-2868" to be "success or failure" Mar 20 22:27:23.061: INFO: Pod "downward-api-a7598de0-0317-4a80-9ec7-d0cbb55e4d76": Phase="Pending", Reason="", readiness=false. Elapsed: 21.543106ms Mar 20 22:27:25.066: INFO: Pod "downward-api-a7598de0-0317-4a80-9ec7-d0cbb55e4d76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025893548s Mar 20 22:27:27.069: INFO: Pod "downward-api-a7598de0-0317-4a80-9ec7-d0cbb55e4d76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029246581s STEP: Saw pod success Mar 20 22:27:27.069: INFO: Pod "downward-api-a7598de0-0317-4a80-9ec7-d0cbb55e4d76" satisfied condition "success or failure" Mar 20 22:27:27.071: INFO: Trying to get logs from node jerma-worker2 pod downward-api-a7598de0-0317-4a80-9ec7-d0cbb55e4d76 container dapi-container: STEP: delete the pod Mar 20 22:27:27.090: INFO: Waiting for pod downward-api-a7598de0-0317-4a80-9ec7-d0cbb55e4d76 to disappear Mar 20 22:27:27.100: INFO: Pod downward-api-a7598de0-0317-4a80-9ec7-d0cbb55e4d76 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:27:27.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2868" for this suite. 
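The Downward API env test above maps pod metadata into environment variables via fieldRef. A minimal sketch, assuming the default namespace; names and image are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid   # metadata.name and metadata.namespace work the same way
EOF
kubectl logs downward-api-demo   # prints the pod's UID once the container has run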
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":267,"skipped":4418,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:27:27.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-9853/configmap-test-8e5920a8-f071-4f56-81d2-5e4515022ec7 STEP: Creating a pod to test consume configMaps Mar 20 22:27:27.192: INFO: Waiting up to 5m0s for pod "pod-configmaps-9b89f9ca-a221-4275-bf27-5cf842264393" in namespace "configmap-9853" to be "success or failure" Mar 20 22:27:27.219: INFO: Pod "pod-configmaps-9b89f9ca-a221-4275-bf27-5cf842264393": Phase="Pending", Reason="", readiness=false. Elapsed: 26.913827ms Mar 20 22:27:29.223: INFO: Pod "pod-configmaps-9b89f9ca-a221-4275-bf27-5cf842264393": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030424873s Mar 20 22:27:31.227: INFO: Pod "pod-configmaps-9b89f9ca-a221-4275-bf27-5cf842264393": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034440392s STEP: Saw pod success Mar 20 22:27:31.227: INFO: Pod "pod-configmaps-9b89f9ca-a221-4275-bf27-5cf842264393" satisfied condition "success or failure" Mar 20 22:27:31.230: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-9b89f9ca-a221-4275-bf27-5cf842264393 container env-test: STEP: delete the pod Mar 20 22:27:31.261: INFO: Waiting for pod pod-configmaps-9b89f9ca-a221-4275-bf27-5cf842264393 to disappear Mar 20 22:27:31.271: INFO: Pod pod-configmaps-9b89f9ca-a221-4275-bf27-5cf842264393 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:27:31.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9853" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":268,"skipped":4439,"failed":0} S ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:27:31.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 20 22:27:31.333: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:27:37.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9205" for this suite. • [SLOW TEST:5.843 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":269,"skipped":4440,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:27:37.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:27:37.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"kubelet-test-7938" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4442,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:27:37.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 20 22:27:37.519: INFO: Waiting up to 5m0s for pod "downwardapi-volume-093b3bad-da50-465a-bfe4-e3d61076c535" in namespace "downward-api-2406" to be "success or failure" Mar 20 22:27:37.524: INFO: Pod "downwardapi-volume-093b3bad-da50-465a-bfe4-e3d61076c535": Phase="Pending", Reason="", readiness=false. Elapsed: 3.985546ms Mar 20 22:27:39.529: INFO: Pod "downwardapi-volume-093b3bad-da50-465a-bfe4-e3d61076c535": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009338435s Mar 20 22:27:41.541: INFO: Pod "downwardapi-volume-093b3bad-da50-465a-bfe4-e3d61076c535": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021304969s STEP: Saw pod success Mar 20 22:27:41.541: INFO: Pod "downwardapi-volume-093b3bad-da50-465a-bfe4-e3d61076c535" satisfied condition "success or failure" Mar 20 22:27:41.544: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-093b3bad-da50-465a-bfe4-e3d61076c535 container client-container: STEP: delete the pod Mar 20 22:27:41.563: INFO: Waiting for pod downwardapi-volume-093b3bad-da50-465a-bfe4-e3d61076c535 to disappear Mar 20 22:27:41.568: INFO: Pod downwardapi-volume-093b3bad-da50-465a-bfe4-e3d61076c535 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:27:41.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2406" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4455,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:27:41.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-9203f809-dcee-4dac-a3da-bbf6acedc7bf STEP: Creating a pod to test consume secrets Mar 20 22:27:41.668: INFO: Waiting up to 5m0s for pod "pod-secrets-89d5b95b-dc6f-4e0b-8f59-33d7dcad4523" in namespace "secrets-1559" to be "success or failure" Mar 20 22:27:41.676: INFO: Pod "pod-secrets-89d5b95b-dc6f-4e0b-8f59-33d7dcad4523": Phase="Pending", Reason="", readiness=false. Elapsed: 7.40651ms Mar 20 22:27:43.679: INFO: Pod "pod-secrets-89d5b95b-dc6f-4e0b-8f59-33d7dcad4523": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010675282s Mar 20 22:27:45.683: INFO: Pod "pod-secrets-89d5b95b-dc6f-4e0b-8f59-33d7dcad4523": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01473325s STEP: Saw pod success Mar 20 22:27:45.683: INFO: Pod "pod-secrets-89d5b95b-dc6f-4e0b-8f59-33d7dcad4523" satisfied condition "success or failure" Mar 20 22:27:45.686: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-89d5b95b-dc6f-4e0b-8f59-33d7dcad4523 container secret-volume-test: STEP: delete the pod Mar 20 22:27:45.752: INFO: Waiting for pod pod-secrets-89d5b95b-dc6f-4e0b-8f59-33d7dcad4523 to disappear Mar 20 22:27:45.760: INFO: Pod pod-secrets-89d5b95b-dc6f-4e0b-8f59-33d7dcad4523 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:27:45.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1559" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":272,"skipped":4472,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:27:45.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1464 STEP: creating an pod Mar 20 22:27:45.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-5403 -- logs-generator --log-lines-total 100 --run-duration 20s' Mar 20 22:27:49.060: INFO: stderr: "" Mar 20 22:27:49.060: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. Mar 20 22:27:49.060: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Mar 20 22:27:49.060: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-5403" to be "running and ready, or succeeded" Mar 20 22:27:49.062: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.35009ms Mar 20 22:27:51.117: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057119543s Mar 20 22:27:53.125: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.065354951s Mar 20 22:27:53.125: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Mar 20 22:27:53.125: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for a matching strings Mar 20 22:27:53.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5403' Mar 20 22:27:53.241: INFO: stderr: "" Mar 20 22:27:53.241: INFO: stdout: "I0320 22:27:51.459813 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/kgq 222\nI0320 22:27:51.659939 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/b7h 529\nI0320 22:27:51.860110 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/nmm4 228\nI0320 22:27:52.060047 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/jxsc 509\nI0320 22:27:52.259981 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/54n4 222\nI0320 22:27:52.460064 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/lwf 325\nI0320 22:27:52.660035 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/mjk 287\nI0320 22:27:52.860089 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/9t2 322\nI0320 22:27:53.060026 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/42xn 598\n" STEP: limiting log lines Mar 20 22:27:53.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5403 --tail=1' Mar 20 22:27:53.357: INFO: stderr: "" Mar 20 22:27:53.357: INFO: stdout: "I0320 22:27:53.260002 1 logs_generator.go:76] 9 GET /api/v1/namespaces/default/pods/4qv6 507\n" Mar 20 22:27:53.357: INFO: got output "I0320 22:27:53.260002 1 logs_generator.go:76] 9 GET /api/v1/namespaces/default/pods/4qv6 507\n" STEP: limiting log bytes Mar 20 22:27:53.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5403 --limit-bytes=1' Mar 20 22:27:53.463: INFO: stderr: "" Mar 20 22:27:53.463: INFO: stdout: "I" Mar 20 22:27:53.463: INFO: got output "I" STEP: exposing timestamps Mar 20 22:27:53.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5403 --tail=1 --timestamps' Mar 20 22:27:53.561: INFO: stderr: "" Mar 20 22:27:53.561: INFO: stdout: "2020-03-20T22:27:53.460128998Z I0320 22:27:53.459993 1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/gmnf 418\n" Mar 20 22:27:53.561: INFO: got output "2020-03-20T22:27:53.460128998Z I0320 22:27:53.459993 1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/gmnf 418\n" STEP: restricting to a time range Mar 20 22:27:56.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5403 --since=1s' Mar 20 22:27:56.167: INFO: stderr: "" Mar 20 22:27:56.167: INFO: stdout: "I0320 22:27:55.260008 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/srrm 202\nI0320 22:27:55.460062 1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/fdqj 245\nI0320 22:27:55.660063 1 logs_generator.go:76] 21 GET /api/v1/namespaces/kube-system/pods/mklq 379\nI0320 22:27:55.860040 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/kube-system/pods/b2j 528\nI0320 22:27:56.060028 1 logs_generator.go:76] 23 GET /api/v1/namespaces/default/pods/4zhh 488\n" Mar 20 22:27:56.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5403 --since=24h' Mar 20 22:27:56.282: INFO: stderr: "" Mar 20 22:27:56.282: INFO: stdout: "I0320 22:27:51.459813 1 logs_generator.go:76] 0 PUT 
/api/v1/namespaces/ns/pods/kgq 222\nI0320 22:27:51.659939 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/b7h 529\nI0320 22:27:51.860110 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/nmm4 228\nI0320 22:27:52.060047 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/jxsc 509\nI0320 22:27:52.259981 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/54n4 222\nI0320 22:27:52.460064 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/lwf 325\nI0320 22:27:52.660035 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/mjk 287\nI0320 22:27:52.860089 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/9t2 322\nI0320 22:27:53.060026 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/42xn 598\nI0320 22:27:53.260002 1 logs_generator.go:76] 9 GET /api/v1/namespaces/default/pods/4qv6 507\nI0320 22:27:53.459993 1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/gmnf 418\nI0320 22:27:53.660065 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/m5s 342\nI0320 22:27:53.860026 1 logs_generator.go:76] 12 GET /api/v1/namespaces/ns/pods/76hb 456\nI0320 22:27:54.060018 1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/qrv 528\nI0320 22:27:54.260086 1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/hlzs 267\nI0320 22:27:54.459982 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/4m4 400\nI0320 22:27:54.660088 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/default/pods/b2s 395\nI0320 22:27:54.859963 1 logs_generator.go:76] 17 POST /api/v1/namespaces/kube-system/pods/vxl5 576\nI0320 22:27:55.059947 1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/xq4w 390\nI0320 22:27:55.260008 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/srrm 202\nI0320 22:27:55.460062 1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/fdqj 245\nI0320 22:27:55.660063 1 logs_generator.go:76] 21 GET /api/v1/namespaces/kube-system/pods/mklq 379\nI0320 22:27:55.860040 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/kube-system/pods/b2j 528\nI0320 22:27:56.060028 1 logs_generator.go:76] 23 GET /api/v1/namespaces/default/pods/4zhh 488\nI0320 22:27:56.259994 1 logs_generator.go:76] 24 POST /api/v1/namespaces/kube-system/pods/rgx 387\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1470 Mar 20 22:27:56.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-5403' Mar 20 22:28:09.225: INFO: stderr: "" Mar 20 22:28:09.225: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:28:09.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5403" for this suite. 
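For reference, the log-filtering flags the test above exercises, stripped of the e2e wrapper; the pod and namespace names are the ones from this run and would differ elsewhere:

kubectl -n kubectl-5403 logs logs-generator                       # full log
kubectl -n kubectl-5403 logs logs-generator --tail=1              # last line only
kubectl -n kubectl-5403 logs logs-generator --limit-bytes=1       # first byte only
kubectl -n kubectl-5403 logs logs-generator --tail=1 --timestamps # prefix lines with RFC3339 timestamps
kubectl -n kubectl-5403 logs logs-generator --since=1s            # only the last second of output
kubectl -n kubectl-5403 logs logs-generator --since=24h           # everything from the last day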
• [SLOW TEST:23.463 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1460 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":273,"skipped":4493,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:28:09.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 20 22:28:09.274: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 20 22:28:09.291: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 20 22:28:14.294: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 20 22:28:14.294: INFO: Creating deployment "test-rolling-update-deployment" Mar 20 22:28:14.297: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 20 22:28:14.326: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 20 22:28:16.333: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Mar 20 22:28:16.336: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720340094, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720340094, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720340094, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720340094, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 20 22:28:18.340: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 20 22:28:18.350: INFO: Deployment 
"test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-2024 /apis/apps/v1/namespaces/deployment-2024/deployments/test-rolling-update-deployment dde1fd64-0732-4ad2-bea2-c80d897fb70d 1402429 1 2020-03-20 22:28:14 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0056a4d38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-20 22:28:14 +0000 UTC,LastTransitionTime:2020-03-20 22:28:14 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-03-20 22:28:17 +0000 UTC,LastTransitionTime:2020-03-20 22:28:14 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 20 22:28:18.353: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-2024 /apis/apps/v1/namespaces/deployment-2024/replicasets/test-rolling-update-deployment-67cf4f6444 cce97562-1211-4de9-85a1-a4af81a43d35 1402418 1 2020-03-20 22:28:14 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment dde1fd64-0732-4ad2-bea2-c80d897fb70d 0xc0056a51e7 0xc0056a51e8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0056a5258 
ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 20 22:28:18.353: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 20 22:28:18.353: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-2024 /apis/apps/v1/namespaces/deployment-2024/replicasets/test-rolling-update-controller c9efdc02-a8f1-4df5-a6c4-c8bacace1971 1402428 2 2020-03-20 22:28:09 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment dde1fd64-0732-4ad2-bea2-c80d897fb70d 0xc0056a5117 0xc0056a5118}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0056a5178 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 20 22:28:18.357: INFO: Pod "test-rolling-update-deployment-67cf4f6444-grtcb" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-grtcb test-rolling-update-deployment-67cf4f6444- deployment-2024 /api/v1/namespaces/deployment-2024/pods/test-rolling-update-deployment-67cf4f6444-grtcb e81297a1-5175-4fc6-adf9-309005f76016 1402417 0 2020-03-20 22:28:14 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 cce97562-1211-4de9-85a1-a4af81a43d35 0xc0056a56c7 0xc0056a56c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c549r,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c549r,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c549r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 22:28:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 22:28:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 22:28:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-20 22:28:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.70,StartTime:2020-03-20 22:28:14 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-20 22:28:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://2c2bc0f7fb48d8bcb10faebf06eff0110afb8ab27415c75e595074d1ac025973,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.70,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:28:18.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2024" for this suite. • [SLOW TEST:9.132 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":274,"skipped":4516,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:28:18.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1382 STEP: creating the pod Mar 20 22:28:18.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6211' Mar 20 22:28:18.680: INFO: stderr: "" Mar 20 22:28:18.680: INFO: stdout: "pod/pause created\n" Mar 20 22:28:18.680: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 20 22:28:18.680: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6211" to be "running and ready" Mar 20 22:28:18.701: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 21.201425ms Mar 20 22:28:20.705: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025272126s Mar 20 22:28:22.708: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.028539661s Mar 20 22:28:22.708: INFO: Pod "pause" satisfied condition "running and ready" Mar 20 22:28:22.708: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Mar 20 22:28:22.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-6211' Mar 20 22:28:22.807: INFO: stderr: "" Mar 20 22:28:22.807: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 20 22:28:22.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6211' Mar 20 22:28:22.912: INFO: stderr: "" Mar 20 22:28:22.912: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 20 22:28:22.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-6211' Mar 20 22:28:23.004: INFO: stderr: "" Mar 20 22:28:23.004: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 20 22:28:23.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6211' Mar 20 22:28:23.090: INFO: stderr: "" Mar 20 22:28:23.090: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 STEP: using delete to clean up resources Mar 20 22:28:23.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6211' Mar 20 22:28:23.235: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 20 22:28:23.236: INFO: stdout: "pod \"pause\" force deleted\n" Mar 20 22:28:23.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-6211' Mar 20 22:28:23.331: INFO: stderr: "No resources found in kubectl-6211 namespace.\n" Mar 20 22:28:23.331: INFO: stdout: "" Mar 20 22:28:23.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-6211 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 20 22:28:23.421: INFO: stderr: "" Mar 20 22:28:23.421: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 20 22:28:23.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6211" for this suite. 
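For reference, the labeling operations the test above performs, without the e2e wrapper; the trailing '-' on a label key is what removes it:

kubectl label pod pause testing-label=testing-label-value   # add the label
kubectl get pod pause -L testing-label                      # -L shows it as a TESTING-LABEL column
kubectl label pod pause testing-label-                      # trailing '-' removes the label
kubectl get pod pause -L testing-label                      # the column is now empty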
• [SLOW TEST:5.066 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1379 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":275,"skipped":4520,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 20 22:28:23.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 20 22:28:33.719: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1530 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 20 22:28:33.719: INFO: >>> kubeConfig: /root/.kube/config I0320 22:28:33.752164 7 log.go:172] (0xc0042784d0) (0xc001a52e60) Create stream I0320 22:28:33.752243 7 log.go:172] (0xc0042784d0) (0xc001a52e60) Stream added, broadcasting: 1 I0320 22:28:33.754409 7 log.go:172] (0xc0042784d0) Reply frame received for 1 I0320 22:28:33.754442 7 log.go:172] (0xc0042784d0) (0xc000d23040) Create stream I0320 22:28:33.754455 7 log.go:172] (0xc0042784d0) (0xc000d23040) Stream added, broadcasting: 3 I0320 22:28:33.755238 7 log.go:172] (0xc0042784d0) Reply frame received for 3 I0320 22:28:33.755265 7 log.go:172] (0xc0042784d0) (0xc000d59d60) Create stream I0320 22:28:33.755274 7 log.go:172] (0xc0042784d0) (0xc000d59d60) Stream added, broadcasting: 5 I0320 22:28:33.755959 7 log.go:172] (0xc0042784d0) Reply frame received for 5 I0320 22:28:33.838699 7 log.go:172] (0xc0042784d0) Data frame received for 5 I0320 22:28:33.838751 7 log.go:172] (0xc000d59d60) (5) Data frame handling I0320 22:28:33.838784 7 log.go:172] (0xc0042784d0) Data frame received for 3 I0320 22:28:33.838798 7 log.go:172] (0xc000d23040) (3) Data frame handling I0320 22:28:33.838828 7 log.go:172] (0xc000d23040) (3) Data frame sent I0320 22:28:33.838843 7 log.go:172] (0xc0042784d0) Data frame received for 3 I0320 22:28:33.838858 7 log.go:172] (0xc000d23040) (3) Data frame handling I0320 22:28:33.840518 7 log.go:172] (0xc0042784d0) Data frame received for 1 I0320 22:28:33.840546 7 log.go:172] (0xc001a52e60) (1) Data frame handling I0320 22:28:33.840566 7 log.go:172] (0xc001a52e60) (1) Data frame sent I0320 22:28:33.840642 7 log.go:172] (0xc0042784d0) (0xc001a52e60) Stream removed, broadcasting: 1 I0320 
Mar 20 22:28:33.840: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1530 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 20 22:28:33.840: INFO: >>> kubeConfig: /root/.kube/config
I0320 22:28:33.840891 7 log.go:172] (0xc0042784d0) Go away received
I0320 22:28:33.872750 7 log.go:172] (0xc001b582c0) (0xc000d23cc0) Create stream
I0320 22:28:33.872777 7 log.go:172] (0xc001b582c0) (0xc000d23cc0) Stream added, broadcasting: 1
I0320 22:28:33.874623 7 log.go:172] (0xc001b582c0) Reply frame received for 1
I0320 22:28:33.874673 7 log.go:172] (0xc001b582c0) (0xc000d59f40) Create stream
I0320 22:28:33.874681 7 log.go:172] (0xc001b582c0) (0xc000d59f40) Stream added, broadcasting: 3
I0320 22:28:33.875519 7 log.go:172] (0xc001b582c0) Reply frame received for 3
I0320 22:28:33.875552 7 log.go:172] (0xc001b582c0) (0xc001a53040) Create stream
I0320 22:28:33.875564 7 log.go:172] (0xc001b582c0) (0xc001a53040) Stream added, broadcasting: 5
I0320 22:28:33.876391 7 log.go:172] (0xc001b582c0) Reply frame received for 5
I0320 22:28:33.927795 7 log.go:172] (0xc001b582c0) Data frame received for 5
I0320 22:28:33.927840 7 log.go:172] (0xc001a53040) (5) Data frame handling
I0320 22:28:33.927876 7 log.go:172] (0xc001b582c0) Data frame received for 3
I0320 22:28:33.927906 7 log.go:172] (0xc000d59f40) (3) Data frame handling
I0320 22:28:33.927932 7 log.go:172] (0xc000d59f40) (3) Data frame sent
I0320 22:28:33.927948 7 log.go:172] (0xc001b582c0) Data frame received for 3
I0320 22:28:33.927963 7 log.go:172] (0xc000d59f40) (3) Data frame handling
I0320 22:28:33.929607 7 log.go:172] (0xc001b582c0) Data frame received for 1
I0320 22:28:33.929645 7 log.go:172] (0xc000d23cc0) (1) Data frame handling
I0320 22:28:33.929674 7 log.go:172] (0xc000d23cc0) (1) Data frame sent
I0320 22:28:33.929697 7 log.go:172] (0xc001b582c0) (0xc000d23cc0) Stream removed, broadcasting: 1
I0320 22:28:33.929716 7 log.go:172] (0xc001b582c0) Go away received
I0320 22:28:33.929870 7 log.go:172] (0xc001b582c0) (0xc000d23cc0) Stream removed, broadcasting: 1
I0320 22:28:33.929903 7 log.go:172] (0xc001b582c0) (0xc000d59f40) Stream removed, broadcasting: 3
I0320 22:28:33.929929 7 log.go:172] (0xc001b582c0) (0xc001a53040) Stream removed, broadcasting: 5
Mar 20 22:28:33.929: INFO: Exec stderr: ""
Mar 20 22:28:33.929: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1530 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 20 22:28:33.930: INFO: >>> kubeConfig: /root/.kube/config
I0320 22:28:33.959858 7 log.go:172] (0xc001b589a0) (0xc00167c780) Create stream
I0320 22:28:33.959888 7 log.go:172] (0xc001b589a0) (0xc00167c780) Stream added, broadcasting: 1
I0320 22:28:33.961993 7 log.go:172] (0xc001b589a0) Reply frame received for 1
I0320 22:28:33.962029 7 log.go:172] (0xc001b589a0) (0xc001be4000) Create stream
I0320 22:28:33.962041 7 log.go:172] (0xc001b589a0) (0xc001be4000) Stream added, broadcasting: 3
I0320 22:28:33.963164 7 log.go:172] (0xc001b589a0) Reply frame received for 3
I0320 22:28:33.963205 7 log.go:172] (0xc001b589a0) (0xc000e600a0) Create stream
I0320 22:28:33.963223 7 log.go:172] (0xc001b589a0) (0xc000e600a0) Stream added, broadcasting: 5
I0320 22:28:33.964180 7 log.go:172] (0xc001b589a0) Reply frame received for 5
I0320 22:28:34.029580 7 log.go:172] (0xc001b589a0) Data frame received for 3
I0320 22:28:34.029610 7 log.go:172] (0xc001be4000) (3) Data frame handling
I0320 22:28:34.029624 7 log.go:172] (0xc001be4000) (3) Data frame sent
I0320 22:28:34.029629 7 log.go:172] (0xc001b589a0) Data frame received for 3
I0320 22:28:34.029651 7 log.go:172] (0xc001b589a0) Data frame received for 5
I0320 22:28:34.029690 7 log.go:172] (0xc000e600a0) (5) Data frame handling
I0320 22:28:34.029726 7 log.go:172] (0xc001be4000) (3) Data frame handling
I0320 22:28:34.031240 7 log.go:172] (0xc001b589a0) Data frame received for 1
I0320 22:28:34.031263 7 log.go:172] (0xc00167c780) (1) Data frame handling
I0320 22:28:34.031275 7 log.go:172] (0xc00167c780) (1) Data frame sent
I0320 22:28:34.031284 7 log.go:172] (0xc001b589a0) (0xc00167c780) Stream removed, broadcasting: 1
I0320 22:28:34.031362 7 log.go:172] (0xc001b589a0) Go away received
I0320 22:28:34.031411 7 log.go:172] (0xc001b589a0) (0xc00167c780) Stream removed, broadcasting: 1
I0320 22:28:34.031435 7 log.go:172] (0xc001b589a0) (0xc001be4000) Stream removed, broadcasting: 3
I0320 22:28:34.031450 7 log.go:172] (0xc001b589a0) (0xc000e600a0) Stream removed, broadcasting: 5
Mar 20 22:28:34.031: INFO: Exec stderr: ""
Mar 20 22:28:34.031: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1530 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 20 22:28:34.031: INFO: >>> kubeConfig: /root/.kube/config
I0320 22:28:34.068122 7 log.go:172] (0xc001b58fd0) (0xc00167cbe0) Create stream
I0320 22:28:34.068157 7 log.go:172] (0xc001b58fd0) (0xc00167cbe0) Stream added, broadcasting: 1
I0320 22:28:34.070308 7 log.go:172] (0xc001b58fd0) Reply frame received for 1
I0320 22:28:34.070373 7 log.go:172] (0xc001b58fd0) (0xc001be4140) Create stream
I0320 22:28:34.070387 7 log.go:172] (0xc001b58fd0) (0xc001be4140) Stream added, broadcasting: 3
I0320 22:28:34.071519 7 log.go:172] (0xc001b58fd0) Reply frame received for 3
I0320 22:28:34.071563 7 log.go:172] (0xc001b58fd0) (0xc001be4460) Create stream
I0320 22:28:34.071579 7 log.go:172] (0xc001b58fd0) (0xc001be4460) Stream added, broadcasting: 5
I0320 22:28:34.072504 7 log.go:172] (0xc001b58fd0) Reply frame received for 5
I0320 22:28:34.130658 7 log.go:172] (0xc001b58fd0) Data frame received for 5
I0320 22:28:34.130701 7 log.go:172] (0xc001be4460) (5) Data frame handling
I0320 22:28:34.130728 7 log.go:172] (0xc001b58fd0) Data frame received for 3
I0320 22:28:34.130758 7 log.go:172] (0xc001be4140) (3) Data frame handling
I0320 22:28:34.130780 7 log.go:172] (0xc001be4140) (3) Data frame sent
I0320 22:28:34.130839 7 log.go:172] (0xc001b58fd0) Data frame received for 3
I0320 22:28:34.130863 7 log.go:172] (0xc001be4140) (3) Data frame handling
I0320 22:28:34.132011 7 log.go:172] (0xc001b58fd0) Data frame received for 1
I0320 22:28:34.132041 7 log.go:172] (0xc00167cbe0) (1) Data frame handling
I0320 22:28:34.132069 7 log.go:172] (0xc00167cbe0) (1) Data frame sent
I0320 22:28:34.132085 7 log.go:172] (0xc001b58fd0) (0xc00167cbe0) Stream removed, broadcasting: 1
I0320 22:28:34.132102 7 log.go:172] (0xc001b58fd0) Go away received
I0320 22:28:34.132230 7 log.go:172] (0xc001b58fd0) (0xc00167cbe0) Stream removed, broadcasting: 1
I0320 22:28:34.132251 7 log.go:172] (0xc001b58fd0) (0xc001be4140) Stream removed, broadcasting: 3
I0320 22:28:34.132269 7 log.go:172] (0xc001b58fd0) (0xc001be4460) Stream removed, broadcasting: 5
Mar 20 22:28:34.132: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Mar 20 22:28:34.132: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1530 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 20 22:28:34.132: INFO: >>> kubeConfig: /root/.kube/config
I0320 22:28:34.168392 7 log.go:172] (0xc001b59600) (0xc00167cf00) Create stream
I0320 22:28:34.168428 7 log.go:172] (0xc001b59600) (0xc00167cf00) Stream added, broadcasting: 1
I0320 22:28:34.170968 7 log.go:172] (0xc001b59600) Reply frame received for 1
I0320 22:28:34.171008 7 log.go:172] (0xc001b59600) (0xc00167cfa0) Create stream
I0320 22:28:34.171026 7 log.go:172] (0xc001b59600) (0xc00167cfa0) Stream added, broadcasting: 3
I0320 22:28:34.172119 7 log.go:172] (0xc001b59600) Reply frame received for 3
I0320 22:28:34.172156 7 log.go:172] (0xc001b59600) (0xc00167d0e0) Create stream
I0320 22:28:34.172171 7 log.go:172] (0xc001b59600) (0xc00167d0e0) Stream added, broadcasting: 5
I0320 22:28:34.173623 7 log.go:172] (0xc001b59600) Reply frame received for 5
I0320 22:28:34.241721 7 log.go:172] (0xc001b59600) Data frame received for 5
I0320 22:28:34.241763 7 log.go:172] (0xc00167d0e0) (5) Data frame handling
I0320 22:28:34.241788 7 log.go:172] (0xc001b59600) Data frame received for 3
I0320 22:28:34.241801 7 log.go:172] (0xc00167cfa0) (3) Data frame handling
I0320 22:28:34.241817 7 log.go:172] (0xc00167cfa0) (3) Data frame sent
I0320 22:28:34.241830 7 log.go:172] (0xc001b59600) Data frame received for 3
I0320 22:28:34.241843 7 log.go:172] (0xc00167cfa0) (3) Data frame handling
I0320 22:28:34.243191 7 log.go:172] (0xc001b59600) Data frame received for 1
I0320 22:28:34.243217 7 log.go:172] (0xc00167cf00) (1) Data frame handling
I0320 22:28:34.243228 7 log.go:172] (0xc00167cf00) (1) Data frame sent
I0320 22:28:34.243262 7 log.go:172] (0xc001b59600) (0xc00167cf00) Stream removed, broadcasting: 1
I0320 22:28:34.243285 7 log.go:172] (0xc001b59600) Go away received
I0320 22:28:34.243473 7 log.go:172] (0xc001b59600) (0xc00167cf00) Stream removed, broadcasting: 1
I0320 22:28:34.243514 7 log.go:172] (0xc001b59600) (0xc00167cfa0) Stream removed, broadcasting: 3
I0320 22:28:34.243539 7 log.go:172] (0xc001b59600) (0xc00167d0e0) Stream removed, broadcasting: 5
Mar 20 22:28:34.243: INFO: Exec stderr: ""
Mar 20 22:28:34.243: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1530 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 20 22:28:34.243: INFO: >>> kubeConfig: /root/.kube/config
I0320 22:28:34.278129 7 log.go:172] (0xc002a18420) (0xc000e60820) Create stream
I0320 22:28:34.278164 7 log.go:172] (0xc002a18420) (0xc000e60820) Stream added, broadcasting: 1
I0320 22:28:34.282292 7 log.go:172] (0xc002a18420) Reply frame received for 1
I0320 22:28:34.282382 7 log.go:172] (0xc002a18420) (0xc00167d180) Create stream
I0320 22:28:34.282442 7 log.go:172] (0xc002a18420) (0xc00167d180) Stream added, broadcasting: 3
I0320 22:28:34.284167 7 log.go:172] (0xc002a18420) Reply frame received for 3
I0320 22:28:34.284198 7 log.go:172] (0xc002a18420) (0xc001be4640) Create stream
I0320 22:28:34.284208 7 log.go:172] (0xc002a18420) (0xc001be4640) Stream added, broadcasting: 5
I0320 22:28:34.285372 7 log.go:172] (0xc002a18420) Reply frame received for 5
I0320 22:28:34.341609 7 log.go:172] (0xc002a18420) Data frame received for 3
I0320 22:28:34.341663 7 log.go:172] (0xc002a18420) Data frame received for 5
I0320 22:28:34.341703 7 log.go:172] (0xc001be4640) (5) Data frame handling
I0320 22:28:34.341742 7 log.go:172] (0xc00167d180) (3) Data frame handling
I0320 22:28:34.341768 7 log.go:172] (0xc00167d180) (3) Data frame sent
I0320 22:28:34.341789 7 log.go:172] (0xc002a18420) Data frame received for 3
I0320 22:28:34.341813 7 log.go:172] (0xc00167d180) (3) Data frame handling
I0320 22:28:34.343655 7 log.go:172] (0xc002a18420) Data frame received for 1
I0320 22:28:34.343697 7 log.go:172] (0xc000e60820) (1) Data frame handling
I0320 22:28:34.343722 7 log.go:172] (0xc000e60820) (1) Data frame sent
I0320 22:28:34.343741 7 log.go:172] (0xc002a18420) (0xc000e60820) Stream removed, broadcasting: 1
I0320 22:28:34.343760 7 log.go:172] (0xc002a18420) Go away received
I0320 22:28:34.343914 7 log.go:172] (0xc002a18420) (0xc000e60820) Stream removed, broadcasting: 1
I0320 22:28:34.343943 7 log.go:172] (0xc002a18420) (0xc00167d180) Stream removed, broadcasting: 3
I0320 22:28:34.343966 7 log.go:172] (0xc002a18420) (0xc001be4640) Stream removed, broadcasting: 5
Mar 20 22:28:34.343: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Mar 20 22:28:34.344: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1530 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 20 22:28:34.344: INFO: >>> kubeConfig: /root/.kube/config
I0320 22:28:34.374827 7 log.go:172] (0xc002770630) (0xc001be4aa0) Create stream
I0320 22:28:34.374856 7 log.go:172] (0xc002770630) (0xc001be4aa0) Stream added, broadcasting: 1
I0320 22:28:34.376980 7 log.go:172] (0xc002770630) Reply frame received for 1
I0320 22:28:34.377018 7 log.go:172] (0xc002770630) (0xc001be4be0) Create stream
I0320 22:28:34.377032 7 log.go:172] (0xc002770630) (0xc001be4be0) Stream added, broadcasting: 3
I0320 22:28:34.378324 7 log.go:172] (0xc002770630) Reply frame received for 3
I0320 22:28:34.378357 7 log.go:172] (0xc002770630) (0xc001725ae0) Create stream
I0320 22:28:34.378369 7 log.go:172] (0xc002770630) (0xc001725ae0) Stream added, broadcasting: 5
I0320 22:28:34.379263 7 log.go:172] (0xc002770630) Reply frame received for 5
I0320 22:28:34.446321 7 log.go:172] (0xc002770630) Data frame received for 5
I0320 22:28:34.446376 7 log.go:172] (0xc001725ae0) (5) Data frame handling
I0320 22:28:34.446419 7 log.go:172] (0xc002770630) Data frame received for 3
I0320 22:28:34.446445 7 log.go:172] (0xc001be4be0) (3) Data frame handling
I0320 22:28:34.446488 7 log.go:172] (0xc001be4be0) (3) Data frame sent
I0320 22:28:34.446514 7 log.go:172] (0xc002770630) Data frame received for 3
I0320 22:28:34.446534 7 log.go:172] (0xc001be4be0) (3) Data frame handling
I0320 22:28:34.448427 7 log.go:172] (0xc002770630) Data frame received for 1
I0320 22:28:34.448450 7 log.go:172] (0xc001be4aa0) (1) Data frame handling
I0320 22:28:34.448461 7 log.go:172] (0xc001be4aa0) (1) Data frame sent
I0320 22:28:34.448473 7 log.go:172] (0xc002770630) (0xc001be4aa0) Stream removed, broadcasting: 1
I0320 22:28:34.448483 7 log.go:172] (0xc002770630) Go away received
I0320 22:28:34.448664 7 log.go:172] (0xc002770630) (0xc001be4aa0) Stream removed, broadcasting: 1
I0320 22:28:34.448709 7 log.go:172] (0xc002770630) (0xc001be4be0) Stream removed, broadcasting: 3
I0320 22:28:34.448743 7 log.go:172] (0xc002770630) (0xc001725ae0) Stream removed, broadcasting: 5
Mar 20 22:28:34.448: INFO: Exec stderr: ""
Mar 20 22:28:34.448: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1530 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 20 22:28:34.448: INFO: >>> kubeConfig: /root/.kube/config
I0320 22:28:34.480331 7 log.go:172] (0xc001d6e370) (0xc001504640) Create stream
I0320 22:28:34.480364 7 log.go:172] (0xc001d6e370) (0xc001504640) Stream added, broadcasting: 1
I0320 22:28:34.486474 7 log.go:172] (0xc001d6e370) Reply frame received for 1
I0320 22:28:34.486508 7 log.go:172] (0xc001d6e370) (0xc00167d220) Create stream
I0320 22:28:34.486516 7 log.go:172] (0xc001d6e370) (0xc00167d220) Stream added, broadcasting: 3
I0320 22:28:34.488334 7 log.go:172] (0xc001d6e370) Reply frame received for 3
I0320 22:28:34.488368 7 log.go:172] (0xc001d6e370) (0xc000e60a00) Create stream
I0320 22:28:34.488385 7 log.go:172] (0xc001d6e370) (0xc000e60a00) Stream added, broadcasting: 5
I0320 22:28:34.489105 7 log.go:172] (0xc001d6e370) Reply frame received for 5
I0320 22:28:34.556673 7 log.go:172] (0xc001d6e370) Data frame received for 5
I0320 22:28:34.556724 7 log.go:172] (0xc000e60a00) (5) Data frame handling
I0320 22:28:34.556765 7 log.go:172] (0xc001d6e370) Data frame received for 3
I0320 22:28:34.556788 7 log.go:172] (0xc00167d220) (3) Data frame handling
I0320 22:28:34.556820 7 log.go:172] (0xc00167d220) (3) Data frame sent
I0320 22:28:34.556845 7 log.go:172] (0xc001d6e370) Data frame received for 3
I0320 22:28:34.556861 7 log.go:172] (0xc00167d220) (3) Data frame handling
I0320 22:28:34.558588 7 log.go:172] (0xc001d6e370) Data frame received for 1
I0320 22:28:34.558620 7 log.go:172] (0xc001504640) (1) Data frame handling
I0320 22:28:34.558635 7 log.go:172] (0xc001504640) (1) Data frame sent
I0320 22:28:34.558773 7 log.go:172] (0xc001d6e370) (0xc001504640) Stream removed, broadcasting: 1
I0320 22:28:34.558862 7 log.go:172] (0xc001d6e370) (0xc001504640) Stream removed, broadcasting: 1
I0320 22:28:34.558877 7 log.go:172] (0xc001d6e370) (0xc00167d220) Stream removed, broadcasting: 3
I0320 22:28:34.558888 7 log.go:172] (0xc001d6e370) (0xc000e60a00) Stream removed, broadcasting: 5
Mar 20 22:28:34.558: INFO: Exec stderr: ""
Mar 20 22:28:34.558: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1530 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 20 22:28:34.558: INFO: >>> kubeConfig: /root/.kube/config
I0320 22:28:34.560834 7 log.go:172] (0xc001d6e370) Go away received
I0320 22:28:34.596679 7 log.go:172] (0xc001d6e6e0) (0xc001504f00) Create stream
I0320 22:28:34.596732 7 log.go:172] (0xc001d6e6e0) (0xc001504f00) Stream added, broadcasting: 1
I0320 22:28:34.599308 7 log.go:172] (0xc001d6e6e0) Reply frame received for 1
I0320 22:28:34.599347 7 log.go:172] (0xc001d6e6e0) (0xc000e60aa0) Create stream
I0320 22:28:34.599362 7 log.go:172] (0xc001d6e6e0) (0xc000e60aa0) Stream added, broadcasting: 3
I0320 22:28:34.600271 7 log.go:172] (0xc001d6e6e0) Reply frame received for 3
I0320 22:28:34.600314 7 log.go:172] (0xc001d6e6e0) (0xc00167d400) Create stream
I0320 22:28:34.600341 7 log.go:172] (0xc001d6e6e0) (0xc00167d400) Stream added, broadcasting: 5
I0320 22:28:34.601484 7 log.go:172] (0xc001d6e6e0) Reply frame received for 5
I0320 22:28:34.666736 7 log.go:172] (0xc001d6e6e0) Data frame received for 5
I0320 22:28:34.666779 7 log.go:172] (0xc00167d400) (5) Data frame handling
I0320 22:28:34.666799 7 log.go:172] (0xc001d6e6e0) Data frame received for 3
I0320 22:28:34.666837 7 log.go:172] (0xc000e60aa0) (3) Data frame handling
I0320 22:28:34.666891 7 log.go:172] (0xc000e60aa0) (3) Data frame sent
I0320 22:28:34.666924 7 log.go:172] (0xc001d6e6e0) Data frame received for 3
I0320 22:28:34.666941 7 log.go:172] (0xc000e60aa0) (3) Data frame handling
I0320 22:28:34.675046 7 log.go:172] (0xc001d6e6e0) Data frame received for 1
I0320 22:28:34.675082 7 log.go:172] (0xc001504f00) (1) Data frame handling
I0320 22:28:34.675130 7 log.go:172] (0xc001504f00) (1) Data frame sent
I0320 22:28:34.675335 7 log.go:172] (0xc001d6e6e0) (0xc001504f00) Stream removed, broadcasting: 1
I0320 22:28:34.675405 7 log.go:172] (0xc001d6e6e0) (0xc001504f00) Stream removed, broadcasting: 1
I0320 22:28:34.675423 7 log.go:172] (0xc001d6e6e0) (0xc000e60aa0) Stream removed, broadcasting: 3
I0320 22:28:34.675433 7 log.go:172] (0xc001d6e6e0) (0xc00167d400) Stream removed, broadcasting: 5
Mar 20 22:28:34.675: INFO: Exec stderr: ""
Mar 20 22:28:34.675: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1530 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 20 22:28:34.675: INFO: >>> kubeConfig: /root/.kube/config
I0320 22:28:34.678059 7 log.go:172] (0xc001d6e6e0) Go away received
I0320 22:28:34.708867 7 log.go:172] (0xc001b59970) (0xc00167d900) Create stream
I0320 22:28:34.708895 7 log.go:172] (0xc001b59970) (0xc00167d900) Stream added, broadcasting: 1
I0320 22:28:34.710834 7 log.go:172] (0xc001b59970) Reply frame received for 1
I0320 22:28:34.710871 7 log.go:172] (0xc001b59970) (0xc001a530e0) Create stream
I0320 22:28:34.710885 7 log.go:172] (0xc001b59970) (0xc001a530e0) Stream added, broadcasting: 3
I0320 22:28:34.711886 7 log.go:172] (0xc001b59970) Reply frame received for 3
I0320 22:28:34.711923 7 log.go:172] (0xc001b59970) (0xc000e60b40) Create stream
I0320 22:28:34.711937 7 log.go:172] (0xc001b59970) (0xc000e60b40) Stream added, broadcasting: 5
I0320 22:28:34.712749 7 log.go:172] (0xc001b59970) Reply frame received for 5
I0320 22:28:34.765501 7 log.go:172] (0xc001b59970) Data frame received for 3
I0320 22:28:34.765520 7 log.go:172] (0xc001a530e0) (3) Data frame handling
I0320 22:28:34.765538 7 log.go:172] (0xc001a530e0) (3) Data frame sent
I0320 22:28:34.765699 7 log.go:172] (0xc001b59970) Data frame received for 5
I0320 22:28:34.765714 7 log.go:172] (0xc000e60b40) (5) Data frame handling
I0320 22:28:34.765979 7 log.go:172] (0xc001b59970) Data frame received for 3
I0320 22:28:34.766006 7 log.go:172] (0xc001a530e0) (3) Data frame handling
I0320 22:28:34.767604 7 log.go:172] (0xc001b59970) Data frame received for 1
I0320 22:28:34.767658 7 log.go:172] (0xc00167d900) (1) Data frame handling
I0320 22:28:34.767681 7 log.go:172] (0xc00167d900) (1) Data frame sent
I0320 22:28:34.767702 7 log.go:172] (0xc001b59970) (0xc00167d900) Stream removed, broadcasting: 1
I0320 22:28:34.767790 7 log.go:172] (0xc001b59970) Go away received
I0320 22:28:34.767815 7 log.go:172] (0xc001b59970) (0xc00167d900) Stream removed, broadcasting: 1
I0320 22:28:34.767835 7 log.go:172] (0xc001b59970) (0xc001a530e0) Stream removed, broadcasting: 3
I0320 22:28:34.767849 7 log.go:172] (0xc001b59970) (0xc000e60b40) Stream removed, broadcasting: 5
Mar 20 22:28:34.767: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 20 22:28:34.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-1530" for this suite.
• [SLOW TEST:11.345 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":276,"skipped":4530,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
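Editor's note: the pods this spec creates are not shown in the log, but their shape explains the per-container verification above: busybox-1 and busybox-2 get the kubelet-managed /etc/hosts, while busybox-3 mounts its own file at /etc/hosts, which the kubelet must leave alone. A hypothetical reconstruction (image, names, and the omission of busybox-2 are simplifications, not the test's actual fixture), using the v1.17-era Create signature:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // etcHostsTestPod sketches the hostNetwork=false pod: busybox-3
    // brings its own /etc/hosts mount, so the kubelet leaves that
    // container's file unmanaged while still managing busybox-1's
    // (busybox-2, identical to busybox-1, is omitted for brevity).
    func etcHostsTestPod(ns string) *corev1.Pod {
        hostsVolume := corev1.Volume{
            Name: "host-etc-hosts",
            VolumeSource: corev1.VolumeSource{
                HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts"},
            },
        }
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "test-pod", Namespace: ns},
            Spec: corev1.PodSpec{
                Volumes: []corev1.Volume{hostsVolume},
                Containers: []corev1.Container{
                    {
                        Name:    "busybox-1", // /etc/hosts managed by the kubelet
                        Image:   "busybox",
                        Command: []string{"sleep", "3600"},
                    },
                    {
                        Name:    "busybox-3", // explicit mount: not kubelet-managed
                        Image:   "busybox",
                        Command: []string{"sleep", "3600"},
                        VolumeMounts: []corev1.VolumeMount{{
                            Name:      "host-etc-hosts",
                            MountPath: "/etc/hosts",
                        }},
                    },
                },
            },
        }
    }

    // createPod shows the v1.17-era Create call; the hostNetwork=true
    // variant differs only by setting pod.Spec.HostNetwork = true.
    func createPod(cs kubernetes.Interface, ns string) error {
        _, err := cs.CoreV1().Pods(ns).Create(etcHostsTestPod(ns))
        return err
    }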
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 20 22:28:34.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Mar 20 22:28:39.409: INFO: Successfully updated pod "annotationupdatecb846e61-9f5b-4a09-a79a-0cd0c80989d7"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 20 22:28:41.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1724" for this suite.
• [SLOW TEST:6.690 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":277,"skipped":4554,"failed":0}
S
------------------------------
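Editor's note: the "annotationupdate..." pod above follows the standard downward-API-volume pattern: the pod's annotations are projected into a file, and the kubelet rewrites that file in place when the annotations change, which is what the spec observes before tearing down. A minimal sketch of such a pod (hypothetical names, image, and command; not the test's actual fixture):

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // annotationPod projects metadata.annotations into
    // /etc/podinfo/annotations via a downwardAPI volume. Updating the
    // annotation afterwards (e.g. via Update or a merge patch) causes
    // the kubelet to rewrite the file inside the running container.
    func annotationPod(name string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:        name,
                Annotations: map[string]string{"builder": "bar"},
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "podinfo",
                        MountPath: "/etc/podinfo",
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "annotations",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
                            }},
                        },
                    },
                }},
            },
        }
    }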
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 20 22:28:41.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Mar 20 22:28:41.556: INFO: Waiting up to 5m0s for pod "downward-api-19db7eec-72d1-4116-ba04-a576d013199b" in namespace "downward-api-8397" to be "success or failure"
Mar 20 22:28:41.561: INFO: Pod "downward-api-19db7eec-72d1-4116-ba04-a576d013199b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.849802ms
Mar 20 22:28:43.565: INFO: Pod "downward-api-19db7eec-72d1-4116-ba04-a576d013199b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009096187s
Mar 20 22:28:45.570: INFO: Pod "downward-api-19db7eec-72d1-4116-ba04-a576d013199b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013404853s
STEP: Saw pod success
Mar 20 22:28:45.570: INFO: Pod "downward-api-19db7eec-72d1-4116-ba04-a576d013199b" satisfied condition "success or failure"
Mar 20 22:28:45.572: INFO: Trying to get logs from node jerma-worker pod downward-api-19db7eec-72d1-4116-ba04-a576d013199b container dapi-container: 
STEP: delete the pod
Mar 20 22:28:45.607: INFO: Waiting for pod downward-api-19db7eec-72d1-4116-ba04-a576d013199b to disappear
Mar 20 22:28:45.614: INFO: Pod downward-api-19db7eec-72d1-4116-ba04-a576d013199b no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 20 22:28:45.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8397" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":278,"skipped":4555,"failed":0}
SSSSSSSSSS
Mar 20 22:28:45.620: INFO: Running AfterSuite actions on all nodes
Mar 20 22:28:45.620: INFO: Running AfterSuite actions on node 1
Mar 20 22:28:45.620: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4565,"failed":0}
Ran 278 of 4843 Specs in 4913.513 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4565 Skipped
PASS
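Editor's note on the final spec: when a container declares no resource limits, a resourceFieldRef on limits.cpu or limits.memory is defaulted to the node's allocatable capacity, which is what the dapi-container's env output is checked against. A hypothetical pod of that shape (names, image, and command are illustrative, not the test's actual fixture):

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // defaultLimitsPod exposes limits.cpu and limits.memory as env vars.
    // Because the container sets no resources, the kubelet fills these
    // in from the node's allocatable CPU and memory.
    func defaultLimitsPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downward-api-defaults"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "dapi-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "env"},
                    Env: []corev1.EnvVar{
                        {
                            Name: "CPU_LIMIT",
                            ValueFrom: &corev1.EnvVarSource{
                                ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
                            },
                        },
                        {
                            Name: "MEMORY_LIMIT",
                            ValueFrom: &corev1.EnvVarSource{
                                ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
                            },
                        },
                    },
                }},
            },
        }
    }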