I0512 10:15:21.923876 7 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0512 10:15:21.924183 7 e2e.go:129] Starting e2e run "919723fc-fe63-4f23-8afc-842e4e80785e" on Ginkgo node 1
{"msg":"Test Suite starting","total":288,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1589278520 - Will randomize all specs
Will run 288 of 5095 specs

May 12 10:15:21.992: INFO: >>> kubeConfig: /root/.kube/config
May 12 10:15:21.994: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 12 10:15:22.022: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 12 10:15:22.075: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 12 10:15:22.075: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 12 10:15:22.075: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 12 10:15:22.081: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 12 10:15:22.081: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 12 10:15:22.081: INFO: e2e test version: v1.19.0-alpha.3.35+3416442e4b7eeb
May 12 10:15:22.082: INFO: kube-apiserver version: v1.18.2
May 12 10:15:22.082: INFO: >>> kubeConfig: /root/.kube/config
May 12 10:15:22.087: INFO: Cluster IP family: ipv4
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 12 10:15:22.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
May 12 10:15:22.155: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-upd-dd4310d6-ac7f-4a3c-bef5-977965738ce7
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-dd4310d6-ac7f-4a3c-bef5-977965738ce7
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 12 10:15:32.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5286" for this suite.
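The behavior this spec checks, a ConfigMap edit propagating into an already-mounted volume, can be reproduced by hand. A minimal kubectl sketch with hypothetical names (cm-demo, cm-demo-map, cm-demo-pod); the test's own manifest is generated by the framework and not shown in the log:

# Create a ConfigMap and a pod that mounts it as a volume.
kubectl create namespace cm-demo
kubectl create configmap cm-demo-map -n cm-demo --from-literal=data-1=value-1
kubectl apply -n cm-demo -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-demo-pod
spec:
  containers:
  - name: view
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/cm/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: cm-demo-map
EOF
# Update the ConfigMap; the mounted file changes after the kubelet's next sync,
# which is what "waiting to observe update in volume" above polls for.
kubectl patch configmap cm-demo-map -n cm-demo -p '{"data":{"data-1":"value-2"}}'
kubectl logs -n cm-demo cm-demo-pod -f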
• [SLOW TEST:10.160 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":1,"skipped":10,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 12 10:15:32.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3346.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3346.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3346.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3346.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 12 10:15:43.016: INFO: DNS probes using dns-test-f2fa04d0-3407-4e9e-a0b8-ebfcf6491af4 succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3346.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3346.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3346.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3346.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 12 10:15:56.084: INFO: File wheezy_udp@dns-test-service-3.dns-3346.svc.cluster.local from pod dns-3346/dns-test-86d6968c-ca1d-4563-bfd5-42c7e5659eb5 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 12 10:15:56.087: INFO: File jessie_udp@dns-test-service-3.dns-3346.svc.cluster.local from pod dns-3346/dns-test-86d6968c-ca1d-4563-bfd5-42c7e5659eb5 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 12 10:15:56.087: INFO: Lookups using dns-3346/dns-test-86d6968c-ca1d-4563-bfd5-42c7e5659eb5 failed for: [wheezy_udp@dns-test-service-3.dns-3346.svc.cluster.local jessie_udp@dns-test-service-3.dns-3346.svc.cluster.local]
May 12 10:16:01.110: INFO: File wheezy_udp@dns-test-service-3.dns-3346.svc.cluster.local from pod dns-3346/dns-test-86d6968c-ca1d-4563-bfd5-42c7e5659eb5 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 12 10:16:01.114: INFO: File jessie_udp@dns-test-service-3.dns-3346.svc.cluster.local from pod dns-3346/dns-test-86d6968c-ca1d-4563-bfd5-42c7e5659eb5 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 12 10:16:01.114: INFO: Lookups using dns-3346/dns-test-86d6968c-ca1d-4563-bfd5-42c7e5659eb5 failed for: [wheezy_udp@dns-test-service-3.dns-3346.svc.cluster.local jessie_udp@dns-test-service-3.dns-3346.svc.cluster.local]
May 12 10:16:06.090: INFO: File wheezy_udp@dns-test-service-3.dns-3346.svc.cluster.local from pod dns-3346/dns-test-86d6968c-ca1d-4563-bfd5-42c7e5659eb5 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 12 10:16:06.092: INFO: File jessie_udp@dns-test-service-3.dns-3346.svc.cluster.local from pod dns-3346/dns-test-86d6968c-ca1d-4563-bfd5-42c7e5659eb5 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 12 10:16:06.092: INFO: Lookups using dns-3346/dns-test-86d6968c-ca1d-4563-bfd5-42c7e5659eb5 failed for: [wheezy_udp@dns-test-service-3.dns-3346.svc.cluster.local jessie_udp@dns-test-service-3.dns-3346.svc.cluster.local]
May 12 10:16:11.097: INFO: File jessie_udp@dns-test-service-3.dns-3346.svc.cluster.local from pod dns-3346/dns-test-86d6968c-ca1d-4563-bfd5-42c7e5659eb5 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 12 10:16:11.097: INFO: Lookups using dns-3346/dns-test-86d6968c-ca1d-4563-bfd5-42c7e5659eb5 failed for: [jessie_udp@dns-test-service-3.dns-3346.svc.cluster.local]
May 12 10:16:16.130: INFO: DNS probes using dns-test-86d6968c-ca1d-4563-bfd5-42c7e5659eb5 succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3346.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3346.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3346.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3346.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 12 10:16:29.408: INFO: DNS probes using dns-test-12d416d8-0258-4903-a608-efdecd142cda succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 12 10:16:30.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3346" for this suite.
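The three phases above (a CNAME for an ExternalName service, a new CNAME after editing spec.externalName, an A record after converting to ClusterIP) can be checked interactively. A sketch with hypothetical names; the probe image is an assumption, any image shipping dig works:

kubectl create service externalname dns-demo --external-name foo.example.com
kubectl run dns-probe --image=tutum/dnsutils --restart=Never -- sleep 3600
kubectl exec dns-probe -- dig +short dns-demo.default.svc.cluster.local CNAME
# expected: foo.example.com.
kubectl patch service dns-demo -p '{"spec":{"externalName":"bar.example.com"}}'
# as the retries above show, cached answers can persist briefly, so poll:
kubectl exec dns-probe -- dig +short dns-demo.default.svc.cluster.local CNAME
# expected eventually: bar.example.com.
# (the test's third phase then switches spec.type to ClusterIP and probes for an A record)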
• [SLOW TEST:58.596 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":288,"completed":2,"skipped":20,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 12 10:16:30.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 12 10:16:32.259: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 12 10:16:34.524: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724875392, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724875392, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724875392, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724875391, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 10:16:36.775: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724875392, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724875392, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724875392, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724875391, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 12 10:16:39.557: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
May 12 10:16:39.579: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 12 10:16:39.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7356" for this suite.
STEP: Destroying namespace "webhook-7356-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:9.024 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":288,"completed":3,"skipped":39,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 12 10:16:39.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-4787
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-4787
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4787
May 12 10:16:40.313: INFO: Found 0 stateful pods, waiting for 1
May 12 10:16:50.361: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
May 12 10:16:50.364: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4787 ss-0 -- /bin/sh -x -c mv -v
/usr/local/apache2/htdocs/index.html /tmp/ || true' May 12 10:17:04.968: INFO: stderr: "I0512 10:17:04.748870 30 log.go:172] (0xc00003a6e0) (0xc0006bc640) Create stream\nI0512 10:17:04.748925 30 log.go:172] (0xc00003a6e0) (0xc0006bc640) Stream added, broadcasting: 1\nI0512 10:17:04.751561 30 log.go:172] (0xc00003a6e0) Reply frame received for 1\nI0512 10:17:04.751594 30 log.go:172] (0xc00003a6e0) (0xc0006b2640) Create stream\nI0512 10:17:04.751606 30 log.go:172] (0xc00003a6e0) (0xc0006b2640) Stream added, broadcasting: 3\nI0512 10:17:04.752440 30 log.go:172] (0xc00003a6e0) Reply frame received for 3\nI0512 10:17:04.752470 30 log.go:172] (0xc00003a6e0) (0xc0006bcf00) Create stream\nI0512 10:17:04.752484 30 log.go:172] (0xc00003a6e0) (0xc0006bcf00) Stream added, broadcasting: 5\nI0512 10:17:04.753708 30 log.go:172] (0xc00003a6e0) Reply frame received for 5\nI0512 10:17:04.935784 30 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0512 10:17:04.935818 30 log.go:172] (0xc0006bcf00) (5) Data frame handling\nI0512 10:17:04.935839 30 log.go:172] (0xc0006bcf00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0512 10:17:04.963400 30 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 10:17:04.963431 30 log.go:172] (0xc0006b2640) (3) Data frame handling\nI0512 10:17:04.963455 30 log.go:172] (0xc0006b2640) (3) Data frame sent\nI0512 10:17:04.963469 30 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 10:17:04.963481 30 log.go:172] (0xc0006b2640) (3) Data frame handling\nI0512 10:17:04.963759 30 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0512 10:17:04.963771 30 log.go:172] (0xc0006bcf00) (5) Data frame handling\nI0512 10:17:04.964933 30 log.go:172] (0xc00003a6e0) Data frame received for 1\nI0512 10:17:04.964952 30 log.go:172] (0xc0006bc640) (1) Data frame handling\nI0512 10:17:04.964968 30 log.go:172] (0xc0006bc640) (1) Data frame sent\nI0512 10:17:04.965001 30 log.go:172] (0xc00003a6e0) (0xc0006bc640) Stream removed, broadcasting: 1\nI0512 10:17:04.965066 30 log.go:172] (0xc00003a6e0) Go away received\nI0512 10:17:04.965385 30 log.go:172] (0xc00003a6e0) (0xc0006bc640) Stream removed, broadcasting: 1\nI0512 10:17:04.965403 30 log.go:172] (0xc00003a6e0) (0xc0006b2640) Stream removed, broadcasting: 3\nI0512 10:17:04.965413 30 log.go:172] (0xc00003a6e0) (0xc0006bcf00) Stream removed, broadcasting: 5\n" May 12 10:17:04.968: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 12 10:17:04.968: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 12 10:17:04.973: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 12 10:17:15.052: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 12 10:17:15.052: INFO: Waiting for statefulset status.replicas updated to 0 May 12 10:17:15.182: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999933s May 12 10:17:16.186: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.899711959s May 12 10:17:17.202: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.89576961s May 12 10:17:18.565: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.879920265s May 12 10:17:19.655: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.516490648s May 12 10:17:20.727: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.426731684s May 12 
10:17:21.745: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.354742725s May 12 10:17:22.753: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.337043894s May 12 10:17:23.758: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.328657774s May 12 10:17:24.761: INFO: Verifying statefulset ss doesn't scale past 1 for another 324.145234ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4787 May 12 10:17:25.765: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 10:17:25.933: INFO: stderr: "I0512 10:17:25.880391 63 log.go:172] (0xc000abba20) (0xc000a54460) Create stream\nI0512 10:17:25.880440 63 log.go:172] (0xc000abba20) (0xc000a54460) Stream added, broadcasting: 1\nI0512 10:17:25.883547 63 log.go:172] (0xc000abba20) Reply frame received for 1\nI0512 10:17:25.883587 63 log.go:172] (0xc000abba20) (0xc0008246e0) Create stream\nI0512 10:17:25.883597 63 log.go:172] (0xc000abba20) (0xc0008246e0) Stream added, broadcasting: 3\nI0512 10:17:25.884285 63 log.go:172] (0xc000abba20) Reply frame received for 3\nI0512 10:17:25.884310 63 log.go:172] (0xc000abba20) (0xc000584f00) Create stream\nI0512 10:17:25.884319 63 log.go:172] (0xc000abba20) (0xc000584f00) Stream added, broadcasting: 5\nI0512 10:17:25.884827 63 log.go:172] (0xc000abba20) Reply frame received for 5\nI0512 10:17:25.926931 63 log.go:172] (0xc000abba20) Data frame received for 3\nI0512 10:17:25.926976 63 log.go:172] (0xc0008246e0) (3) Data frame handling\nI0512 10:17:25.927009 63 log.go:172] (0xc0008246e0) (3) Data frame sent\nI0512 10:17:25.927030 63 log.go:172] (0xc000abba20) Data frame received for 5\nI0512 10:17:25.927043 63 log.go:172] (0xc000584f00) (5) Data frame handling\nI0512 10:17:25.927054 63 log.go:172] (0xc000584f00) (5) Data frame sent\nI0512 10:17:25.927065 63 log.go:172] (0xc000abba20) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0512 10:17:25.927074 63 log.go:172] (0xc000584f00) (5) Data frame handling\nI0512 10:17:25.927092 63 log.go:172] (0xc000abba20) Data frame received for 3\nI0512 10:17:25.927099 63 log.go:172] (0xc0008246e0) (3) Data frame handling\nI0512 10:17:25.928339 63 log.go:172] (0xc000abba20) Data frame received for 1\nI0512 10:17:25.928360 63 log.go:172] (0xc000a54460) (1) Data frame handling\nI0512 10:17:25.928383 63 log.go:172] (0xc000a54460) (1) Data frame sent\nI0512 10:17:25.928399 63 log.go:172] (0xc000abba20) (0xc000a54460) Stream removed, broadcasting: 1\nI0512 10:17:25.928417 63 log.go:172] (0xc000abba20) Go away received\nI0512 10:17:25.928908 63 log.go:172] (0xc000abba20) (0xc000a54460) Stream removed, broadcasting: 1\nI0512 10:17:25.928944 63 log.go:172] (0xc000abba20) (0xc0008246e0) Stream removed, broadcasting: 3\nI0512 10:17:25.928957 63 log.go:172] (0xc000abba20) (0xc000584f00) Stream removed, broadcasting: 5\n" May 12 10:17:25.933: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 12 10:17:25.933: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 12 10:17:25.936: INFO: Found 1 stateful pods, waiting for 3 May 12 10:17:35.942: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 12 10:17:35.942: INFO: 
Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 12 10:17:35.942: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false May 12 10:17:45.967: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 12 10:17:45.967: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 12 10:17:45.967: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 12 10:17:45.976: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4787 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 12 10:17:46.174: INFO: stderr: "I0512 10:17:46.105032 83 log.go:172] (0xc000a156b0) (0xc0006e9b80) Create stream\nI0512 10:17:46.105091 83 log.go:172] (0xc000a156b0) (0xc0006e9b80) Stream added, broadcasting: 1\nI0512 10:17:46.108043 83 log.go:172] (0xc000a156b0) Reply frame received for 1\nI0512 10:17:46.108080 83 log.go:172] (0xc000a156b0) (0xc0008025a0) Create stream\nI0512 10:17:46.108092 83 log.go:172] (0xc000a156b0) (0xc0008025a0) Stream added, broadcasting: 3\nI0512 10:17:46.109708 83 log.go:172] (0xc000a156b0) Reply frame received for 3\nI0512 10:17:46.109741 83 log.go:172] (0xc000a156b0) (0xc00080ce60) Create stream\nI0512 10:17:46.109750 83 log.go:172] (0xc000a156b0) (0xc00080ce60) Stream added, broadcasting: 5\nI0512 10:17:46.110583 83 log.go:172] (0xc000a156b0) Reply frame received for 5\nI0512 10:17:46.170298 83 log.go:172] (0xc000a156b0) Data frame received for 5\nI0512 10:17:46.170326 83 log.go:172] (0xc00080ce60) (5) Data frame handling\nI0512 10:17:46.170336 83 log.go:172] (0xc00080ce60) (5) Data frame sent\nI0512 10:17:46.170344 83 log.go:172] (0xc000a156b0) Data frame received for 5\nI0512 10:17:46.170351 83 log.go:172] (0xc00080ce60) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0512 10:17:46.170371 83 log.go:172] (0xc000a156b0) Data frame received for 3\nI0512 10:17:46.170378 83 log.go:172] (0xc0008025a0) (3) Data frame handling\nI0512 10:17:46.170391 83 log.go:172] (0xc0008025a0) (3) Data frame sent\nI0512 10:17:46.170410 83 log.go:172] (0xc000a156b0) Data frame received for 3\nI0512 10:17:46.170434 83 log.go:172] (0xc0008025a0) (3) Data frame handling\nI0512 10:17:46.171344 83 log.go:172] (0xc000a156b0) Data frame received for 1\nI0512 10:17:46.171359 83 log.go:172] (0xc0006e9b80) (1) Data frame handling\nI0512 10:17:46.171368 83 log.go:172] (0xc0006e9b80) (1) Data frame sent\nI0512 10:17:46.171378 83 log.go:172] (0xc000a156b0) (0xc0006e9b80) Stream removed, broadcasting: 1\nI0512 10:17:46.171395 83 log.go:172] (0xc000a156b0) Go away received\nI0512 10:17:46.171690 83 log.go:172] (0xc000a156b0) (0xc0006e9b80) Stream removed, broadcasting: 1\nI0512 10:17:46.171710 83 log.go:172] (0xc000a156b0) (0xc0008025a0) Stream removed, broadcasting: 3\nI0512 10:17:46.171722 83 log.go:172] (0xc000a156b0) (0xc00080ce60) Stream removed, broadcasting: 5\n" May 12 10:17:46.174: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 12 10:17:46.174: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 12 10:17:46.174: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4787 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 12 10:17:46.422: INFO: stderr: "I0512 10:17:46.322214 103 log.go:172] (0xc000ac7810) (0xc0009b4780) Create stream\nI0512 10:17:46.322282 103 log.go:172] (0xc000ac7810) (0xc0009b4780) Stream added, broadcasting: 1\nI0512 10:17:46.328062 103 log.go:172] (0xc000ac7810) Reply frame received for 1\nI0512 10:17:46.328104 103 log.go:172] (0xc000ac7810) (0xc0006f4460) Create stream\nI0512 10:17:46.328117 103 log.go:172] (0xc000ac7810) (0xc0006f4460) Stream added, broadcasting: 3\nI0512 10:17:46.328988 103 log.go:172] (0xc000ac7810) Reply frame received for 3\nI0512 10:17:46.329047 103 log.go:172] (0xc000ac7810) (0xc0006f4d20) Create stream\nI0512 10:17:46.329068 103 log.go:172] (0xc000ac7810) (0xc0006f4d20) Stream added, broadcasting: 5\nI0512 10:17:46.330071 103 log.go:172] (0xc000ac7810) Reply frame received for 5\nI0512 10:17:46.380888 103 log.go:172] (0xc000ac7810) Data frame received for 5\nI0512 10:17:46.380914 103 log.go:172] (0xc0006f4d20) (5) Data frame handling\nI0512 10:17:46.380932 103 log.go:172] (0xc0006f4d20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0512 10:17:46.415705 103 log.go:172] (0xc000ac7810) Data frame received for 3\nI0512 10:17:46.415736 103 log.go:172] (0xc0006f4460) (3) Data frame handling\nI0512 10:17:46.415753 103 log.go:172] (0xc0006f4460) (3) Data frame sent\nI0512 10:17:46.415913 103 log.go:172] (0xc000ac7810) Data frame received for 5\nI0512 10:17:46.415960 103 log.go:172] (0xc0006f4d20) (5) Data frame handling\nI0512 10:17:46.415999 103 log.go:172] (0xc000ac7810) Data frame received for 3\nI0512 10:17:46.416029 103 log.go:172] (0xc0006f4460) (3) Data frame handling\nI0512 10:17:46.417932 103 log.go:172] (0xc000ac7810) Data frame received for 1\nI0512 10:17:46.417942 103 log.go:172] (0xc0009b4780) (1) Data frame handling\nI0512 10:17:46.417948 103 log.go:172] (0xc0009b4780) (1) Data frame sent\nI0512 10:17:46.417957 103 log.go:172] (0xc000ac7810) (0xc0009b4780) Stream removed, broadcasting: 1\nI0512 10:17:46.417966 103 log.go:172] (0xc000ac7810) Go away received\nI0512 10:17:46.418420 103 log.go:172] (0xc000ac7810) (0xc0009b4780) Stream removed, broadcasting: 1\nI0512 10:17:46.418453 103 log.go:172] (0xc000ac7810) (0xc0006f4460) Stream removed, broadcasting: 3\nI0512 10:17:46.418474 103 log.go:172] (0xc000ac7810) (0xc0006f4d20) Stream removed, broadcasting: 5\n" May 12 10:17:46.422: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 12 10:17:46.422: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 12 10:17:46.422: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4787 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 12 10:17:46.631: INFO: stderr: "I0512 10:17:46.547875 122 log.go:172] (0xc000b31340) (0xc000bec460) Create stream\nI0512 10:17:46.547940 122 log.go:172] (0xc000b31340) (0xc000bec460) Stream added, broadcasting: 1\nI0512 10:17:46.552902 122 log.go:172] (0xc000b31340) Reply frame received for 1\nI0512 10:17:46.552974 122 log.go:172] (0xc000b31340) (0xc00082cb40) Create stream\nI0512 10:17:46.552989 122 log.go:172] (0xc000b31340) (0xc00082cb40) Stream added, broadcasting: 3\nI0512 10:17:46.554482 
122 log.go:172] (0xc000b31340) Reply frame received for 3\nI0512 10:17:46.554540 122 log.go:172] (0xc000b31340) (0xc00083c000) Create stream\nI0512 10:17:46.554557 122 log.go:172] (0xc000b31340) (0xc00083c000) Stream added, broadcasting: 5\nI0512 10:17:46.555517 122 log.go:172] (0xc000b31340) Reply frame received for 5\nI0512 10:17:46.603270 122 log.go:172] (0xc000b31340) Data frame received for 5\nI0512 10:17:46.603287 122 log.go:172] (0xc00083c000) (5) Data frame handling\nI0512 10:17:46.603297 122 log.go:172] (0xc00083c000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0512 10:17:46.623709 122 log.go:172] (0xc000b31340) Data frame received for 3\nI0512 10:17:46.623723 122 log.go:172] (0xc00082cb40) (3) Data frame handling\nI0512 10:17:46.623730 122 log.go:172] (0xc00082cb40) (3) Data frame sent\nI0512 10:17:46.624226 122 log.go:172] (0xc000b31340) Data frame received for 5\nI0512 10:17:46.624243 122 log.go:172] (0xc00083c000) (5) Data frame handling\nI0512 10:17:46.624265 122 log.go:172] (0xc000b31340) Data frame received for 3\nI0512 10:17:46.624275 122 log.go:172] (0xc00082cb40) (3) Data frame handling\nI0512 10:17:46.626254 122 log.go:172] (0xc000b31340) Data frame received for 1\nI0512 10:17:46.626288 122 log.go:172] (0xc000bec460) (1) Data frame handling\nI0512 10:17:46.626318 122 log.go:172] (0xc000bec460) (1) Data frame sent\nI0512 10:17:46.626346 122 log.go:172] (0xc000b31340) (0xc000bec460) Stream removed, broadcasting: 1\nI0512 10:17:46.626371 122 log.go:172] (0xc000b31340) Go away received\nI0512 10:17:46.626810 122 log.go:172] (0xc000b31340) (0xc000bec460) Stream removed, broadcasting: 1\nI0512 10:17:46.626845 122 log.go:172] (0xc000b31340) (0xc00082cb40) Stream removed, broadcasting: 3\nI0512 10:17:46.626864 122 log.go:172] (0xc000b31340) (0xc00083c000) Stream removed, broadcasting: 5\n" May 12 10:17:46.631: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 12 10:17:46.631: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 12 10:17:46.631: INFO: Waiting for statefulset status.replicas updated to 0 May 12 10:17:46.650: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 12 10:17:56.997: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 12 10:17:56.997: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 12 10:17:56.997: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 12 10:17:57.284: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999364s May 12 10:17:58.560: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.719392293s May 12 10:17:59.565: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.44373101s May 12 10:18:00.569: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.438808653s May 12 10:18:01.575: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.434980558s May 12 10:18:02.579: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.429233705s May 12 10:18:03.583: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.425092208s May 12 10:18:04.607: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.420648535s May 12 10:18:05.656: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.396290555s May 12 10:18:06.660: INFO: 
Verifying statefulset ss doesn't scale past 3 for another 348.104244ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-4787 May 12 10:18:07.666: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 10:18:07.864: INFO: stderr: "I0512 10:18:07.796337 143 log.go:172] (0xc000a5fe40) (0xc000804b40) Create stream\nI0512 10:18:07.796402 143 log.go:172] (0xc000a5fe40) (0xc000804b40) Stream added, broadcasting: 1\nI0512 10:18:07.798666 143 log.go:172] (0xc000a5fe40) Reply frame received for 1\nI0512 10:18:07.798751 143 log.go:172] (0xc000a5fe40) (0xc000805040) Create stream\nI0512 10:18:07.798792 143 log.go:172] (0xc000a5fe40) (0xc000805040) Stream added, broadcasting: 3\nI0512 10:18:07.799858 143 log.go:172] (0xc000a5fe40) Reply frame received for 3\nI0512 10:18:07.799899 143 log.go:172] (0xc000a5fe40) (0xc00084c6e0) Create stream\nI0512 10:18:07.799923 143 log.go:172] (0xc000a5fe40) (0xc00084c6e0) Stream added, broadcasting: 5\nI0512 10:18:07.801647 143 log.go:172] (0xc000a5fe40) Reply frame received for 5\nI0512 10:18:07.856693 143 log.go:172] (0xc000a5fe40) Data frame received for 3\nI0512 10:18:07.856713 143 log.go:172] (0xc000805040) (3) Data frame handling\nI0512 10:18:07.856741 143 log.go:172] (0xc000a5fe40) Data frame received for 5\nI0512 10:18:07.856774 143 log.go:172] (0xc00084c6e0) (5) Data frame handling\nI0512 10:18:07.856792 143 log.go:172] (0xc00084c6e0) (5) Data frame sent\nI0512 10:18:07.856804 143 log.go:172] (0xc000a5fe40) Data frame received for 5\nI0512 10:18:07.856833 143 log.go:172] (0xc00084c6e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0512 10:18:07.856864 143 log.go:172] (0xc000805040) (3) Data frame sent\nI0512 10:18:07.856889 143 log.go:172] (0xc000a5fe40) Data frame received for 3\nI0512 10:18:07.856899 143 log.go:172] (0xc000805040) (3) Data frame handling\nI0512 10:18:07.858547 143 log.go:172] (0xc000a5fe40) Data frame received for 1\nI0512 10:18:07.858585 143 log.go:172] (0xc000804b40) (1) Data frame handling\nI0512 10:18:07.858612 143 log.go:172] (0xc000804b40) (1) Data frame sent\nI0512 10:18:07.858798 143 log.go:172] (0xc000a5fe40) (0xc000804b40) Stream removed, broadcasting: 1\nI0512 10:18:07.858825 143 log.go:172] (0xc000a5fe40) Go away received\nI0512 10:18:07.859212 143 log.go:172] (0xc000a5fe40) (0xc000804b40) Stream removed, broadcasting: 1\nI0512 10:18:07.859229 143 log.go:172] (0xc000a5fe40) (0xc000805040) Stream removed, broadcasting: 3\nI0512 10:18:07.859238 143 log.go:172] (0xc000a5fe40) (0xc00084c6e0) Stream removed, broadcasting: 5\n" May 12 10:18:07.864: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 12 10:18:07.864: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 12 10:18:07.864: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4787 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 10:18:08.054: INFO: stderr: "I0512 10:18:07.983588 163 log.go:172] (0xc000d02c60) (0xc0003c01e0) Create stream\nI0512 10:18:07.983648 163 log.go:172] (0xc000d02c60) (0xc0003c01e0) Stream added, broadcasting: 1\nI0512 10:18:07.986575 163 log.go:172] 
(0xc000d02c60) Reply frame received for 1\nI0512 10:18:07.986617 163 log.go:172] (0xc000d02c60) (0xc0002900a0) Create stream\nI0512 10:18:07.986640 163 log.go:172] (0xc000d02c60) (0xc0002900a0) Stream added, broadcasting: 3\nI0512 10:18:07.987537 163 log.go:172] (0xc000d02c60) Reply frame received for 3\nI0512 10:18:07.987578 163 log.go:172] (0xc000d02c60) (0xc0003c0820) Create stream\nI0512 10:18:07.987589 163 log.go:172] (0xc000d02c60) (0xc0003c0820) Stream added, broadcasting: 5\nI0512 10:18:07.988551 163 log.go:172] (0xc000d02c60) Reply frame received for 5\nI0512 10:18:08.048347 163 log.go:172] (0xc000d02c60) Data frame received for 5\nI0512 10:18:08.048398 163 log.go:172] (0xc000d02c60) Data frame received for 3\nI0512 10:18:08.048445 163 log.go:172] (0xc0002900a0) (3) Data frame handling\nI0512 10:18:08.048464 163 log.go:172] (0xc0002900a0) (3) Data frame sent\nI0512 10:18:08.048478 163 log.go:172] (0xc0003c0820) (5) Data frame handling\nI0512 10:18:08.048520 163 log.go:172] (0xc0003c0820) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0512 10:18:08.048539 163 log.go:172] (0xc000d02c60) Data frame received for 5\nI0512 10:18:08.048549 163 log.go:172] (0xc0003c0820) (5) Data frame handling\nI0512 10:18:08.048570 163 log.go:172] (0xc000d02c60) Data frame received for 3\nI0512 10:18:08.048583 163 log.go:172] (0xc0002900a0) (3) Data frame handling\nI0512 10:18:08.049994 163 log.go:172] (0xc000d02c60) Data frame received for 1\nI0512 10:18:08.050047 163 log.go:172] (0xc0003c01e0) (1) Data frame handling\nI0512 10:18:08.050065 163 log.go:172] (0xc0003c01e0) (1) Data frame sent\nI0512 10:18:08.050076 163 log.go:172] (0xc000d02c60) (0xc0003c01e0) Stream removed, broadcasting: 1\nI0512 10:18:08.050087 163 log.go:172] (0xc000d02c60) Go away received\nI0512 10:18:08.050499 163 log.go:172] (0xc000d02c60) (0xc0003c01e0) Stream removed, broadcasting: 1\nI0512 10:18:08.050518 163 log.go:172] (0xc000d02c60) (0xc0002900a0) Stream removed, broadcasting: 3\nI0512 10:18:08.050528 163 log.go:172] (0xc000d02c60) (0xc0003c0820) Stream removed, broadcasting: 5\n" May 12 10:18:08.055: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 12 10:18:08.055: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 12 10:18:08.055: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4787 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 10:18:08.348: INFO: stderr: "I0512 10:18:08.268965 183 log.go:172] (0xc000afb550) (0xc000700e60) Create stream\nI0512 10:18:08.269021 183 log.go:172] (0xc000afb550) (0xc000700e60) Stream added, broadcasting: 1\nI0512 10:18:08.278896 183 log.go:172] (0xc000afb550) Reply frame received for 1\nI0512 10:18:08.278967 183 log.go:172] (0xc000afb550) (0xc000842500) Create stream\nI0512 10:18:08.278987 183 log.go:172] (0xc000afb550) (0xc000842500) Stream added, broadcasting: 3\nI0512 10:18:08.280049 183 log.go:172] (0xc000afb550) Reply frame received for 3\nI0512 10:18:08.280077 183 log.go:172] (0xc000afb550) (0xc000698e60) Create stream\nI0512 10:18:08.280086 183 log.go:172] (0xc000afb550) (0xc000698e60) Stream added, broadcasting: 5\nI0512 10:18:08.280806 183 log.go:172] (0xc000afb550) Reply frame received for 5\nI0512 10:18:08.340753 183 log.go:172] (0xc000afb550) Data frame received for 5\nI0512 10:18:08.340792 183 
log.go:172] (0xc000698e60) (5) Data frame handling\nI0512 10:18:08.340808 183 log.go:172] (0xc000698e60) (5) Data frame sent\nI0512 10:18:08.340821 183 log.go:172] (0xc000afb550) Data frame received for 5\nI0512 10:18:08.340831 183 log.go:172] (0xc000698e60) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0512 10:18:08.340866 183 log.go:172] (0xc000afb550) Data frame received for 3\nI0512 10:18:08.340878 183 log.go:172] (0xc000842500) (3) Data frame handling\nI0512 10:18:08.340897 183 log.go:172] (0xc000842500) (3) Data frame sent\nI0512 10:18:08.340909 183 log.go:172] (0xc000afb550) Data frame received for 3\nI0512 10:18:08.340920 183 log.go:172] (0xc000842500) (3) Data frame handling\nI0512 10:18:08.342158 183 log.go:172] (0xc000afb550) Data frame received for 1\nI0512 10:18:08.342181 183 log.go:172] (0xc000700e60) (1) Data frame handling\nI0512 10:18:08.342198 183 log.go:172] (0xc000700e60) (1) Data frame sent\nI0512 10:18:08.342216 183 log.go:172] (0xc000afb550) (0xc000700e60) Stream removed, broadcasting: 1\nI0512 10:18:08.342251 183 log.go:172] (0xc000afb550) Go away received\nI0512 10:18:08.344767 183 log.go:172] (0xc000afb550) (0xc000700e60) Stream removed, broadcasting: 1\nI0512 10:18:08.345016 183 log.go:172] (0xc000afb550) (0xc000842500) Stream removed, broadcasting: 3\nI0512 10:18:08.345034 183 log.go:172] (0xc000afb550) (0xc000698e60) Stream removed, broadcasting: 5\n" May 12 10:18:08.349: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 12 10:18:08.349: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 12 10:18:08.349: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 12 10:18:38.383: INFO: Deleting all statefulset in ns statefulset-4787 May 12 10:18:38.386: INFO: Scaling statefulset ss to 0 May 12 10:18:38.396: INFO: Waiting for statefulset status.replicas updated to 0 May 12 10:18:38.399: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:18:38.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4787" for this suite. 
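The ordering guarantees exercised above are visible on any StatefulSet with a readiness probe. A minimal sketch with a hypothetical set named web; httpd:2.4 matches the htdocs paths in the exec commands above, but this is not the test's own manifest:

kubectl create service clusterip web --clusterip=None   # headless service for the set
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: httpd
        image: httpd:2.4
        readinessProbe:
          httpGet:
            path: /index.html
            port: 80
EOF
kubectl scale statefulset web --replicas=3
kubectl get pods -l app=web -w   # web-0, web-1, web-2 come up strictly in ordinal order
# Making a pod unready (the test moves index.html aside, failing the probe) halts
# further scaling; scaling down then proceeds in reverse ordinal order, web-2 first:
kubectl exec web-0 -- mv /usr/local/apache2/htdocs/index.html /tmp/
kubectl scale statefulset web --replicas=0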
• [SLOW TEST:118.651 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":288,"completed":4,"skipped":56,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 12 10:18:38.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-downwardapi-n42g
STEP: Creating a pod to test atomic-volume-subpath
May 12 10:18:38.695: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-n42g" in namespace "subpath-8941" to be "Succeeded or Failed"
May 12 10:18:38.713: INFO: Pod "pod-subpath-test-downwardapi-n42g": Phase="Pending", Reason="", readiness=false. Elapsed: 18.073966ms
May 12 10:18:40.733: INFO: Pod "pod-subpath-test-downwardapi-n42g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038113693s
May 12 10:18:42.736: INFO: Pod "pod-subpath-test-downwardapi-n42g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041025101s
May 12 10:18:44.740: INFO: Pod "pod-subpath-test-downwardapi-n42g": Phase="Running", Reason="", readiness=true. Elapsed: 6.044538363s
May 12 10:18:46.743: INFO: Pod "pod-subpath-test-downwardapi-n42g": Phase="Running", Reason="", readiness=true. Elapsed: 8.048420452s
May 12 10:18:48.895: INFO: Pod "pod-subpath-test-downwardapi-n42g": Phase="Running", Reason="", readiness=true. Elapsed: 10.19991758s
May 12 10:18:50.899: INFO: Pod "pod-subpath-test-downwardapi-n42g": Phase="Running", Reason="", readiness=true. Elapsed: 12.204383513s
May 12 10:18:52.903: INFO: Pod "pod-subpath-test-downwardapi-n42g": Phase="Running", Reason="", readiness=true. Elapsed: 14.208350961s
May 12 10:18:54.906: INFO: Pod "pod-subpath-test-downwardapi-n42g": Phase="Running", Reason="", readiness=true. Elapsed: 16.211371578s
May 12 10:18:56.997: INFO: Pod "pod-subpath-test-downwardapi-n42g": Phase="Running", Reason="", readiness=true. Elapsed: 18.302172673s
May 12 10:18:59.178: INFO: Pod "pod-subpath-test-downwardapi-n42g": Phase="Running", Reason="", readiness=true. Elapsed: 20.483076556s
May 12 10:19:01.184: INFO: Pod "pod-subpath-test-downwardapi-n42g": Phase="Running", Reason="", readiness=true. Elapsed: 22.4894365s
May 12 10:19:03.262: INFO: Pod "pod-subpath-test-downwardapi-n42g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.567028497s
STEP: Saw pod success
May 12 10:19:03.262: INFO: Pod "pod-subpath-test-downwardapi-n42g" satisfied condition "Succeeded or Failed"
May 12 10:19:03.264: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-downwardapi-n42g container test-container-subpath-downwardapi-n42g:
STEP: delete the pod
May 12 10:19:03.979: INFO: Waiting for pod pod-subpath-test-downwardapi-n42g to disappear
May 12 10:19:03.981: INFO: Pod pod-subpath-test-downwardapi-n42g no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-n42g
May 12 10:19:03.981: INFO: Deleting pod "pod-subpath-test-downwardapi-n42g" in namespace "subpath-8941"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 12 10:19:04.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8941" for this suite.
• [SLOW TEST:25.984 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":288,"completed":5,"skipped":83,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 12 10:19:04.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 12 10:19:06.101: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 12 10:19:07.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-87" for this suite.
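The create/get/delete round-trip this spec performs through the Go client can be sketched with kubectl; the noxu group and kind below are illustrative stand-ins, not taken from this run's output:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: noxus.mygroup.example.com
spec:
  group: mygroup.example.com
  scope: Namespaced
  names:
    plural: noxus
    singular: noxu
    kind: Noxu
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF
kubectl get crd noxus.mygroup.example.com      # becomes Established once names are accepted
kubectl delete crd noxus.mygroup.example.com   # deleting the CRD also removes its custom resources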
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":288,"completed":6,"skipped":95,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:19:07.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 12 10:19:08.672: INFO: PodSpec: initContainers in spec.initContainers May 12 10:20:07.876: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-cbf66cf1-fa24-4a69-b5f3-be1eaf86f74f", GenerateName:"", Namespace:"init-container-550", SelfLink:"/api/v1/namespaces/init-container-550/pods/pod-init-cbf66cf1-fa24-4a69-b5f3-be1eaf86f74f", UID:"a49ee6f3-d34e-4d8b-ae48-56dffb8c4c09", ResourceVersion:"3774164", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724875548, loc:(*time.Location)(0x7c342a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"672142151"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001f5a340), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001f5a3c0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001f5a3e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001f5a460)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-ps4cf", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001580380), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), 
Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ps4cf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ps4cf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ps4cf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002bf80a8), 
ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002d64000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002bf8290)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002bf82e0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002bf82e8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002bf82ec), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724875549, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724875549, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724875549, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724875548, loc:(*time.Location)(0x7c342a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.13", PodIP:"10.244.1.169", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.169"}}, StartTime:(*v1.Time)(0xc001f5a4a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001f5a580), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002d640e0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://38331e9caa67540e83888410000e0e3dd0f784775c58ea68b353a344dae09621", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001f5a5a0), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001f5a540), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc002bf840f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:20:07.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-550" for this suite. • [SLOW TEST:60.454 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":288,"completed":7,"skipped":129,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:20:08.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-8b2c9c37-7557-4368-8bd8-48b506ba9270 STEP: Creating a pod to test consume configMaps May 12 10:20:08.472: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e98da579-1390-41d8-8cfe-c15b5b41ecca" in namespace "projected-1217" to be "Succeeded or Failed" May 12 10:20:08.489: INFO: Pod "pod-projected-configmaps-e98da579-1390-41d8-8cfe-c15b5b41ecca": Phase="Pending", Reason="", readiness=false. Elapsed: 16.958199ms May 12 10:20:10.512: INFO: Pod "pod-projected-configmaps-e98da579-1390-41d8-8cfe-c15b5b41ecca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039739904s May 12 10:20:12.597: INFO: Pod "pod-projected-configmaps-e98da579-1390-41d8-8cfe-c15b5b41ecca": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.124574597s May 12 10:20:14.637: INFO: Pod "pod-projected-configmaps-e98da579-1390-41d8-8cfe-c15b5b41ecca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.164543185s STEP: Saw pod success May 12 10:20:14.637: INFO: Pod "pod-projected-configmaps-e98da579-1390-41d8-8cfe-c15b5b41ecca" satisfied condition "Succeeded or Failed" May 12 10:20:14.645: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-e98da579-1390-41d8-8cfe-c15b5b41ecca container projected-configmap-volume-test: STEP: delete the pod May 12 10:20:15.782: INFO: Waiting for pod pod-projected-configmaps-e98da579-1390-41d8-8cfe-c15b5b41ecca to disappear May 12 10:20:15.785: INFO: Pod pod-projected-configmaps-e98da579-1390-41d8-8cfe-c15b5b41ecca no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:20:15.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1217" for this suite. • [SLOW TEST:7.843 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":8,"skipped":157,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:20:15.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition May 12 10:20:16.178: INFO: Waiting up to 5m0s for pod "var-expansion-8a51b2bc-d4bd-4d1a-877d-8d6d58a00b3a" in namespace "var-expansion-2673" to be "Succeeded or Failed" May 12 10:20:16.242: INFO: Pod "var-expansion-8a51b2bc-d4bd-4d1a-877d-8d6d58a00b3a": Phase="Pending", Reason="", readiness=false. Elapsed: 63.758105ms May 12 10:20:18.246: INFO: Pod "var-expansion-8a51b2bc-d4bd-4d1a-877d-8d6d58a00b3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068396735s May 12 10:20:20.355: INFO: Pod "var-expansion-8a51b2bc-d4bd-4d1a-877d-8d6d58a00b3a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.176825481s May 12 10:20:22.597: INFO: Pod "var-expansion-8a51b2bc-d4bd-4d1a-877d-8d6d58a00b3a": Phase="Running", Reason="", readiness=true. Elapsed: 6.418526351s May 12 10:20:24.610: INFO: Pod "var-expansion-8a51b2bc-d4bd-4d1a-877d-8d6d58a00b3a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.431684574s STEP: Saw pod success May 12 10:20:24.610: INFO: Pod "var-expansion-8a51b2bc-d4bd-4d1a-877d-8d6d58a00b3a" satisfied condition "Succeeded or Failed" May 12 10:20:24.612: INFO: Trying to get logs from node latest-worker2 pod var-expansion-8a51b2bc-d4bd-4d1a-877d-8d6d58a00b3a container dapi-container: STEP: delete the pod May 12 10:20:24.723: INFO: Waiting for pod var-expansion-8a51b2bc-d4bd-4d1a-877d-8d6d58a00b3a to disappear May 12 10:20:24.813: INFO: Pod var-expansion-8a51b2bc-d4bd-4d1a-877d-8d6d58a00b3a no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:20:24.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2673" for this suite. • [SLOW TEST:8.950 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":288,"completed":9,"skipped":168,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:20:24.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 12 10:20:25.152: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 12 10:20:27.228: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9395 create -f -' May 12 10:20:37.905: INFO: stderr: "" May 12 10:20:37.905: INFO: stdout: "e2e-test-crd-publish-openapi-754-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 12 10:20:37.905: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9395 delete e2e-test-crd-publish-openapi-754-crds test-cr' May 12 10:20:38.278: INFO: stderr: "" May 12 10:20:38.278: INFO: stdout: "e2e-test-crd-publish-openapi-754-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 12 10:20:38.278: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9395 apply -f -' May 12 10:20:39.225: INFO: stderr: "" May 12 10:20:39.225: INFO: stdout: "e2e-test-crd-publish-openapi-754-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 12 10:20:39.225: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9395 delete e2e-test-crd-publish-openapi-754-crds test-cr' May 12 10:20:39.338: INFO: stderr: "" May 12 10:20:39.338: INFO: stdout: "e2e-test-crd-publish-openapi-754-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema May 12 10:20:39.338: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-754-crds' May 12 10:20:39.585: INFO: stderr: "" May 12 10:20:39.585: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-754-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:20:42.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9395" for this suite. • [SLOW TEST:17.739 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":288,"completed":10,"skipped":185,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:20:42.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 12 10:20:42.603: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 12 10:20:42.621: INFO: Waiting for terminating namespaces to be deleted... 
May 12 10:20:42.623: INFO: Logging pods the apiserver thinks are on node latest-worker before test May 12 10:20:42.627: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container status recorded) May 12 10:20:42.627: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 12 10:20:42.627: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container status recorded) May 12 10:20:42.627: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 12 10:20:42.627: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 12 10:20:42.627: INFO: Container kindnet-cni ready: true, restart count 0 May 12 10:20:42.627: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 12 10:20:42.627: INFO: Container kube-proxy ready: true, restart count 0 May 12 10:20:42.627: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test May 12 10:20:42.631: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container status recorded) May 12 10:20:42.631: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 12 10:20:42.631: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container status recorded) May 12 10:20:42.631: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 12 10:20:42.631: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 12 10:20:42.631: INFO: Container kindnet-cni ready: true, restart count 0 May 12 10:20:42.631: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 12 10:20:42.631: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-7cd0d37e-e518-496b-abdb-fcdf19f85b2f 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-7cd0d37e-e518-496b-abdb-fcdf19f85b2f off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-7cd0d37e-e518-496b-abdb-fcdf19f85b2f [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:20:54.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5973" for this suite.
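The scheduling sequence above (pick a node, label it, relaunch the pod with a matching nodeSelector) can be reproduced by hand. A minimal sketch against any reachable cluster; the label key/value and pod name are illustrative stand-ins, not the random ones this run generated:

# Label the node the test happened to pick (latest-worker2 in this run).
kubectl label node latest-worker2 example.com/e2e-demo=42
# Request that label via nodeSelector; the scheduler may only place the pod there.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo
spec:
  nodeSelector:
    example.com/e2e-demo: "42"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
EOF
kubectl get pod nodeselector-demo -o wide   # NODE column should show latest-worker2
# Clean up the pod and remove the label, mirroring the test's AfterEach.
kubectl delete pod nodeselector-demo
kubectl label node latest-worker2 example.com/e2e-demo-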
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:12.013 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":288,"completed":11,"skipped":213,"failed":0} S ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:20:54.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 12 10:20:55.800: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2114 /api/v1/namespaces/watch-2114/configmaps/e2e-watch-test-configmap-a 9d42f212-6776-4687-b787-52a209af68ee 3774407 0 2020-05-12 10:20:55 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-12 10:20:55 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 12 10:20:55.800: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2114 /api/v1/namespaces/watch-2114/configmaps/e2e-watch-test-configmap-a 9d42f212-6776-4687-b787-52a209af68ee 3774407 0 2020-05-12 10:20:55 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-12 10:20:55 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 12 10:21:05.822: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2114 /api/v1/namespaces/watch-2114/configmaps/e2e-watch-test-configmap-a 9d42f212-6776-4687-b787-52a209af68ee 3774451 0 2020-05-12 10:20:55 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-12 10:21:05 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 12 10:21:05.822: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2114 
/api/v1/namespaces/watch-2114/configmaps/e2e-watch-test-configmap-a 9d42f212-6776-4687-b787-52a209af68ee 3774451 0 2020-05-12 10:20:55 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-12 10:21:05 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 12 10:21:15.830: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2114 /api/v1/namespaces/watch-2114/configmaps/e2e-watch-test-configmap-a 9d42f212-6776-4687-b787-52a209af68ee 3774480 0 2020-05-12 10:20:55 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-12 10:21:15 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 12 10:21:15.830: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2114 /api/v1/namespaces/watch-2114/configmaps/e2e-watch-test-configmap-a 9d42f212-6776-4687-b787-52a209af68ee 3774480 0 2020-05-12 10:20:55 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-12 10:21:15 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 12 10:21:25.834: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2114 /api/v1/namespaces/watch-2114/configmaps/e2e-watch-test-configmap-a 9d42f212-6776-4687-b787-52a209af68ee 3774508 0 2020-05-12 10:20:55 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-12 10:21:15 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 12 10:21:25.834: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2114 /api/v1/namespaces/watch-2114/configmaps/e2e-watch-test-configmap-a 9d42f212-6776-4687-b787-52a209af68ee 3774508 0 2020-05-12 10:20:55 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-12 10:21:15 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 12 10:21:35.840: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2114 /api/v1/namespaces/watch-2114/configmaps/e2e-watch-test-configmap-b f0b12063-0c83-4c13-8001-0e7a03ae5cef 3774536 0 2020-05-12 10:21:35 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-12 10:21:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 12 10:21:35.841: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2114 
/api/v1/namespaces/watch-2114/configmaps/e2e-watch-test-configmap-b f0b12063-0c83-4c13-8001-0e7a03ae5cef 3774536 0 2020-05-12 10:21:35 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-12 10:21:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 12 10:21:45.846: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2114 /api/v1/namespaces/watch-2114/configmaps/e2e-watch-test-configmap-b f0b12063-0c83-4c13-8001-0e7a03ae5cef 3774562 0 2020-05-12 10:21:35 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-12 10:21:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 12 10:21:45.846: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2114 /api/v1/namespaces/watch-2114/configmaps/e2e-watch-test-configmap-b f0b12063-0c83-4c13-8001-0e7a03ae5cef 3774562 0 2020-05-12 10:21:35 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-12 10:21:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:21:55.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2114" for this suite. • [SLOW TEST:61.282 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":288,"completed":12,"skipped":214,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:21:55.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 12 10:21:56.296: INFO: Waiting up to 5m0s for pod "busybox-user-65534-56518095-1f96-4420-a683-4fc9ac25ca20" in namespace "security-context-test-8261" to be "Succeeded or Failed" May 12 10:21:56.346: INFO: Pod 
"busybox-user-65534-56518095-1f96-4420-a683-4fc9ac25ca20": Phase="Pending", Reason="", readiness=false. Elapsed: 50.408469ms May 12 10:21:58.350: INFO: Pod "busybox-user-65534-56518095-1f96-4420-a683-4fc9ac25ca20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053530052s May 12 10:22:00.354: INFO: Pod "busybox-user-65534-56518095-1f96-4420-a683-4fc9ac25ca20": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05769196s May 12 10:22:02.627: INFO: Pod "busybox-user-65534-56518095-1f96-4420-a683-4fc9ac25ca20": Phase="Pending", Reason="", readiness=false. Elapsed: 6.330747025s May 12 10:22:05.555: INFO: Pod "busybox-user-65534-56518095-1f96-4420-a683-4fc9ac25ca20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.25922403s May 12 10:22:05.555: INFO: Pod "busybox-user-65534-56518095-1f96-4420-a683-4fc9ac25ca20" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:22:05.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8261" for this suite. • [SLOW TEST:9.707 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a container with runAsUser /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":13,"skipped":234,"failed":0} [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:22:05.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 12 10:22:06.216: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:22:11.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2166" for this suite. 
• [SLOW TEST:5.527 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":288,"completed":14,"skipped":234,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:22:11.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 12 10:22:17.787: INFO: Waiting up to 5m0s for pod "client-envvars-7d0321cb-332c-4061-b5ec-2382e55b9268" in namespace "pods-138" to be "Succeeded or Failed" May 12 10:22:17.862: INFO: Pod "client-envvars-7d0321cb-332c-4061-b5ec-2382e55b9268": Phase="Pending", Reason="", readiness=false. Elapsed: 74.481639ms May 12 10:22:19.865: INFO: Pod "client-envvars-7d0321cb-332c-4061-b5ec-2382e55b9268": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077351475s May 12 10:22:21.868: INFO: Pod "client-envvars-7d0321cb-332c-4061-b5ec-2382e55b9268": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080466337s May 12 10:22:23.914: INFO: Pod "client-envvars-7d0321cb-332c-4061-b5ec-2382e55b9268": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.126920718s STEP: Saw pod success May 12 10:22:23.914: INFO: Pod "client-envvars-7d0321cb-332c-4061-b5ec-2382e55b9268" satisfied condition "Succeeded or Failed" May 12 10:22:23.917: INFO: Trying to get logs from node latest-worker2 pod client-envvars-7d0321cb-332c-4061-b5ec-2382e55b9268 container env3cont: STEP: delete the pod May 12 10:22:24.132: INFO: Waiting for pod client-envvars-7d0321cb-332c-4061-b5ec-2382e55b9268 to disappear May 12 10:22:24.156: INFO: Pod client-envvars-7d0321cb-332c-4061-b5ec-2382e55b9268 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:22:24.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-138" for this suite. 
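What this spec checks is the kubelet's legacy service discovery: any service that exists in the namespace when a pod starts is injected into the pod's environment as <NAME>_SERVICE_HOST / <NAME>_SERVICE_PORT variables. A minimal sketch with illustrative names (the service must be created first, since the variables are only computed at container start):

kubectl create service clusterip envvar-demo --tcp=80:80
kubectl run envvar-client --image=docker.io/library/busybox:1.29 --restart=Never -- \
  sh -c 'env | grep ENVVAR_DEMO'
# After the pod completes, its log should list ENVVAR_DEMO_SERVICE_HOST and ENVVAR_DEMO_SERVICE_PORT.
kubectl logs envvar-client
kubectl delete pod/envvar-client service/envvar-demo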
• [SLOW TEST:13.075 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":288,"completed":15,"skipped":244,"failed":0} [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:22:24.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 12 10:22:24.487: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:22:39.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7434" for this suite. • [SLOW TEST:16.195 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":288,"completed":16,"skipped":244,"failed":0} S ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:22:40.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
May 12 10:22:40.952: INFO: Created pod &Pod{ObjectMeta:{dns-7604 dns-7604 /api/v1/namespaces/dns-7604/pods/dns-7604 1e2282b6-4fbd-4cf3-ad94-e373585ff996 3774838 0 2020-05-12 10:22:40 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-05-12 10:22:40 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4bjcb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4bjcb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4bjcb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]C
ontainerStatus{},},} May 12 10:22:41.288: INFO: The status of Pod dns-7604 is Pending, waiting for it to be Running (with Ready = true) May 12 10:22:43.290: INFO: The status of Pod dns-7604 is Pending, waiting for it to be Running (with Ready = true) May 12 10:22:45.358: INFO: The status of Pod dns-7604 is Pending, waiting for it to be Running (with Ready = true) May 12 10:22:47.873: INFO: The status of Pod dns-7604 is Pending, waiting for it to be Running (with Ready = true) May 12 10:22:49.747: INFO: The status of Pod dns-7604 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... May 12 10:22:49.747: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-7604 PodName:dns-7604 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:22:49.747: INFO: >>> kubeConfig: /root/.kube/config I0512 10:22:50.165419 7 log.go:172] (0xc00092c000) (0xc001c1c780) Create stream I0512 10:22:50.165452 7 log.go:172] (0xc00092c000) (0xc001c1c780) Stream added, broadcasting: 1 I0512 10:22:50.166999 7 log.go:172] (0xc00092c000) Reply frame received for 1 I0512 10:22:50.167028 7 log.go:172] (0xc00092c000) (0xc00209a6e0) Create stream I0512 10:22:50.167036 7 log.go:172] (0xc00092c000) (0xc00209a6e0) Stream added, broadcasting: 3 I0512 10:22:50.167861 7 log.go:172] (0xc00092c000) Reply frame received for 3 I0512 10:22:50.167909 7 log.go:172] (0xc00092c000) (0xc00213a960) Create stream I0512 10:22:50.167924 7 log.go:172] (0xc00092c000) (0xc00213a960) Stream added, broadcasting: 5 I0512 10:22:50.168690 7 log.go:172] (0xc00092c000) Reply frame received for 5 I0512 10:22:50.251928 7 log.go:172] (0xc00092c000) Data frame received for 3 I0512 10:22:50.251956 7 log.go:172] (0xc00209a6e0) (3) Data frame handling I0512 10:22:50.251973 7 log.go:172] (0xc00209a6e0) (3) Data frame sent I0512 10:22:50.255161 7 log.go:172] (0xc00092c000) Data frame received for 3 I0512 10:22:50.255225 7 log.go:172] (0xc00209a6e0) (3) Data frame handling I0512 10:22:50.255317 7 log.go:172] (0xc00092c000) Data frame received for 5 I0512 10:22:50.255335 7 log.go:172] (0xc00213a960) (5) Data frame handling I0512 10:22:50.256909 7 log.go:172] (0xc00092c000) Data frame received for 1 I0512 10:22:50.256950 7 log.go:172] (0xc001c1c780) (1) Data frame handling I0512 10:22:50.256972 7 log.go:172] (0xc001c1c780) (1) Data frame sent I0512 10:22:50.256985 7 log.go:172] (0xc00092c000) (0xc001c1c780) Stream removed, broadcasting: 1 I0512 10:22:50.257052 7 log.go:172] (0xc00092c000) Go away received I0512 10:22:50.257409 7 log.go:172] (0xc00092c000) (0xc001c1c780) Stream removed, broadcasting: 1 I0512 10:22:50.257422 7 log.go:172] (0xc00092c000) (0xc00209a6e0) Stream removed, broadcasting: 3 I0512 10:22:50.257428 7 log.go:172] (0xc00092c000) (0xc00213a960) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
May 12 10:22:50.257: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-7604 PodName:dns-7604 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:22:50.257: INFO: >>> kubeConfig: /root/.kube/config I0512 10:22:50.545654 7 log.go:172] (0xc00092c840) (0xc001c1cdc0) Create stream I0512 10:22:50.545705 7 log.go:172] (0xc00092c840) (0xc001c1cdc0) Stream added, broadcasting: 1 I0512 10:22:50.547588 7 log.go:172] (0xc00092c840) Reply frame received for 1 I0512 10:22:50.547649 7 log.go:172] (0xc00092c840) (0xc00209a780) Create stream I0512 10:22:50.547671 7 log.go:172] (0xc00092c840) (0xc00209a780) Stream added, broadcasting: 3 I0512 10:22:50.548721 7 log.go:172] (0xc00092c840) Reply frame received for 3 I0512 10:22:50.548759 7 log.go:172] (0xc00092c840) (0xc001cc2000) Create stream I0512 10:22:50.548773 7 log.go:172] (0xc00092c840) (0xc001cc2000) Stream added, broadcasting: 5 I0512 10:22:50.549897 7 log.go:172] (0xc00092c840) Reply frame received for 5 I0512 10:22:50.615820 7 log.go:172] (0xc00092c840) Data frame received for 3 I0512 10:22:50.615852 7 log.go:172] (0xc00209a780) (3) Data frame handling I0512 10:22:50.615876 7 log.go:172] (0xc00209a780) (3) Data frame sent I0512 10:22:50.616707 7 log.go:172] (0xc00092c840) Data frame received for 5 I0512 10:22:50.616739 7 log.go:172] (0xc001cc2000) (5) Data frame handling I0512 10:22:50.616856 7 log.go:172] (0xc00092c840) Data frame received for 3 I0512 10:22:50.616892 7 log.go:172] (0xc00209a780) (3) Data frame handling I0512 10:22:50.619019 7 log.go:172] (0xc00092c840) Data frame received for 1 I0512 10:22:50.619041 7 log.go:172] (0xc001c1cdc0) (1) Data frame handling I0512 10:22:50.619067 7 log.go:172] (0xc001c1cdc0) (1) Data frame sent I0512 10:22:50.619083 7 log.go:172] (0xc00092c840) (0xc001c1cdc0) Stream removed, broadcasting: 1 I0512 10:22:50.619191 7 log.go:172] (0xc00092c840) (0xc001c1cdc0) Stream removed, broadcasting: 1 I0512 10:22:50.619215 7 log.go:172] (0xc00092c840) (0xc00209a780) Stream removed, broadcasting: 3 I0512 10:22:50.619232 7 log.go:172] (0xc00092c840) (0xc001cc2000) Stream removed, broadcasting: 5 May 12 10:22:50.619: INFO: Deleting pod dns-7604... I0512 10:22:50.619283 7 log.go:172] (0xc00092c840) Go away received [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:22:51.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7604" for this suite. 
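For reference, the pod dumped at the start of this test corresponds to roughly the following manifest, reconstructed from the logged fields (dnsPolicy None, nameserver 1.1.1.1, search path resolv.conf.local). With dnsPolicy: None the kubelet writes only the dnsConfig entries into the container's /etc/resolv.conf, which is exactly what the two agnhost probes verified:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: dns-7604
spec:
  dnsPolicy: None            # ignore the cluster DNS defaults entirely
  dnsConfig:
    nameservers: ["1.1.1.1"]
    searches: ["resolv.conf.local"]
  containers:
  - name: agnhost
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
    args: ["pause"]
EOF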
• [SLOW TEST:12.007 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":288,"completed":17,"skipped":245,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:22:52.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 10:22:55.077: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 10:22:57.087: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724875775, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724875775, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724875775, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724875774, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:22:59.891: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724875775, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724875775, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724875775, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724875774, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, 
CollisionCount:(*int32)(nil)} May 12 10:23:01.137: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724875775, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724875775, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724875775, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724875774, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:23:03.382: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724875775, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724875775, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724875775, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724875774, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 10:23:07.146: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 12 10:23:07.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:23:08.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4515" for this suite. STEP: Destroying namespace "webhook-4515-markers" for this suite. 
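The registration step above ("Registering the custom resource webhook via the AdmissionRegistration API") amounts to creating a ValidatingWebhookConfiguration that routes CREATE/UPDATE/DELETE of the custom resource to the deployed service. A minimal sketch, assuming illustrative group/resource names, an illustrative path, and a placeholder CA bundle; only the service name e2e-test-webhook comes from this run:

kubectl apply -f - <<EOF
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-custom-resource-demo
webhooks:
- name: deny-customresource.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
  rules:
  - apiGroups: ["stable.example.com"]      # illustrative CRD group
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE", "DELETE"]
    resources: ["testcrds"]                # illustrative plural resource name
  clientConfig:
    service:
      namespace: default                   # this run used a dedicated webhook namespace
      name: e2e-test-webhook
      path: /custom-resource
      port: 443
    caBundle: "<base64-encoded CA>"        # placeholder; must match the serving cert
EOF

Any matching request is then denied or allowed according to the AdmissionReview response the service returns, which is the behavior the It block exercises.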
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.951 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":288,"completed":18,"skipped":256,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:23:09.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-03cd0fe5-b092-4aaf-9d53-fae3418dd481 STEP: Creating a pod to test consume secrets May 12 10:23:10.220: INFO: Waiting up to 5m0s for pod "pod-secrets-509a048c-e5f7-4615-b2a8-a65ea77a9d09" in namespace "secrets-7695" to be "Succeeded or Failed" May 12 10:23:10.223: INFO: Pod "pod-secrets-509a048c-e5f7-4615-b2a8-a65ea77a9d09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.973277ms May 12 10:23:12.227: INFO: Pod "pod-secrets-509a048c-e5f7-4615-b2a8-a65ea77a9d09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006444117s May 12 10:23:14.370: INFO: Pod "pod-secrets-509a048c-e5f7-4615-b2a8-a65ea77a9d09": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149686808s May 12 10:23:16.544: INFO: Pod "pod-secrets-509a048c-e5f7-4615-b2a8-a65ea77a9d09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.323506853s STEP: Saw pod success May 12 10:23:16.544: INFO: Pod "pod-secrets-509a048c-e5f7-4615-b2a8-a65ea77a9d09" satisfied condition "Succeeded or Failed" May 12 10:23:16.546: INFO: Trying to get logs from node latest-worker pod pod-secrets-509a048c-e5f7-4615-b2a8-a65ea77a9d09 container secret-volume-test: STEP: delete the pod May 12 10:23:16.630: INFO: Waiting for pod pod-secrets-509a048c-e5f7-4615-b2a8-a65ea77a9d09 to disappear May 12 10:23:16.723: INFO: Pod pod-secrets-509a048c-e5f7-4615-b2a8-a65ea77a9d09 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:23:16.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7695" for this suite. 
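The pattern under test (a secret projected into a pod as a volume, one file per key) reduces to a manifest like the following; the names and the key/value pair are illustrative, not taken from this run:

kubectl create secret generic secret-demo --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/secret-volume/data-1"]   # each key becomes a file
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-demo
EOF
kubectl logs pod-secrets-demo   # prints value-1 once the pod reaches Succeeded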
• [SLOW TEST:7.837 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":19,"skipped":271,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 12 10:23:17.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2155.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-2155.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2155.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 12 10:23:27.612: INFO: DNS probes using dns-2155/dns-test-fe0da456-c265-4ef9-af19-a2c7fb6dbd23 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 12 10:23:28.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2155" for this suite.
• [SLOW TEST:11.826 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":288,"completed":20,"skipped":312,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 12 10:23:28.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 12 10:23:29.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May 12 10:23:32.183: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-274 create -f -'
May 12 10:23:42.683: INFO: stderr: ""
May 12 10:23:42.683: INFO: stdout: "e2e-test-crd-publish-openapi-1294-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
May 12 10:23:42.683: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-274 delete e2e-test-crd-publish-openapi-1294-crds test-cr'
May 12 10:23:42.797: INFO: stderr: ""
May 12 10:23:42.797: INFO: stdout: "e2e-test-crd-publish-openapi-1294-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
May 12 10:23:42.797: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-274 apply -f -'
May 12 10:23:43.084: INFO: stderr: ""
May 12 10:23:43.084: INFO: stdout: "e2e-test-crd-publish-openapi-1294-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
May 12 10:23:43.084: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-274 delete e2e-test-crd-publish-openapi-1294-crds test-cr'
May 12 10:23:43.192: INFO: stderr: ""
May 12 10:23:43.192: INFO: stdout: "e2e-test-crd-publish-openapi-1294-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
May 12 10:23:43.193: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1294-crds'
May 12 10:23:43.491: INFO: stderr: ""
May 12 10:23:43.491: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1294-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 12 10:23:46.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-274" for this suite.
• [SLOW TEST:17.649 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for CRD preserving unknown fields at the schema root [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":288,"completed":21,"skipped":343,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 12 10:23:46.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-0d1ee0f6-a66e-43b1-a7a7-1e3b1b7ca000
STEP: Creating a pod to test consume configMaps
May 12 10:23:47.113: INFO: Waiting up to 5m0s for pod "pod-configmaps-b51121ba-d339-4b2f-a57f-b9250606af8b" in namespace "configmap-8643" to be "Succeeded or Failed"
May 12 10:23:47.154: INFO: Pod "pod-configmaps-b51121ba-d339-4b2f-a57f-b9250606af8b": Phase="Pending", Reason="", readiness=false. Elapsed: 40.884139ms
May 12 10:23:49.184: INFO: Pod "pod-configmaps-b51121ba-d339-4b2f-a57f-b9250606af8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071413426s
May 12 10:23:51.328: INFO: Pod "pod-configmaps-b51121ba-d339-4b2f-a57f-b9250606af8b": Phase="Running", Reason="", readiness=true. Elapsed: 4.215292833s
May 12 10:23:53.331: INFO: Pod "pod-configmaps-b51121ba-d339-4b2f-a57f-b9250606af8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.217909264s
STEP: Saw pod success
May 12 10:23:53.331: INFO: Pod "pod-configmaps-b51121ba-d339-4b2f-a57f-b9250606af8b" satisfied condition "Succeeded or Failed"
May 12 10:23:53.333: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-b51121ba-d339-4b2f-a57f-b9250606af8b container configmap-volume-test:
STEP: delete the pod
May 12 10:23:53.514: INFO: Waiting for pod pod-configmaps-b51121ba-d339-4b2f-a57f-b9250606af8b to disappear
May 12 10:23:53.609: INFO: Pod pod-configmaps-b51121ba-d339-4b2f-a57f-b9250606af8b no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 12 10:23:53.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8643" for this suite.
• [SLOW TEST:7.384 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":22,"skipped":351,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 12 10:23:54.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179
[It] should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating pod
May 12 10:24:00.704: INFO: Pod pod-hostip-76395a32-0787-48a6-bc87-3598f15370dd has hostIP: 172.17.0.12
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 12 10:24:00.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8487" for this suite.
• [SLOW TEST:6.692 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":288,"completed":23,"skipped":372,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 12 10:24:00.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
May 12 10:24:09.954: INFO: Successfully updated pod "adopt-release-dkgs8"
STEP: Checking that the Job readopts the Pod
May 12 10:24:09.955: INFO: Waiting up to 15m0s for pod "adopt-release-dkgs8" in namespace "job-8014" to be "adopted"
May 12 10:24:10.004: INFO: Pod "adopt-release-dkgs8": Phase="Running", Reason="", readiness=true. Elapsed: 49.207129ms
May 12 10:24:12.007: INFO: Pod "adopt-release-dkgs8": Phase="Running", Reason="", readiness=true. Elapsed: 2.052919548s
May 12 10:24:12.008: INFO: Pod "adopt-release-dkgs8" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
May 12 10:24:12.537: INFO: Successfully updated pod "adopt-release-dkgs8"
STEP: Checking that the Job releases the Pod
May 12 10:24:12.538: INFO: Waiting up to 15m0s for pod "adopt-release-dkgs8" in namespace "job-8014" to be "released"
May 12 10:24:12.596: INFO: Pod "adopt-release-dkgs8": Phase="Running", Reason="", readiness=true. Elapsed: 58.499391ms
May 12 10:24:14.600: INFO: Pod "adopt-release-dkgs8": Phase="Running", Reason="", readiness=true. Elapsed: 2.062891434s
May 12 10:24:14.600: INFO: Pod "adopt-release-dkgs8" satisfied condition "released"
[AfterEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 12 10:24:14.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-8014" for this suite.
• [SLOW TEST:13.900 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching orphans and release non-matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":288,"completed":24,"skipped":399,"failed":0}
SSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 12 10:24:14.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: getting the auto-created API token
May 12 10:24:15.539: INFO: created pod pod-service-account-defaultsa
May 12 10:24:15.540: INFO: pod pod-service-account-defaultsa service account token volume mount: true
May 12 10:24:15.616: INFO: created pod pod-service-account-mountsa
May 12 10:24:15.616: INFO: pod pod-service-account-mountsa service account token volume mount: true
May 12 10:24:15.643: INFO: created pod pod-service-account-nomountsa
May 12 10:24:15.643: INFO: pod pod-service-account-nomountsa service account token volume mount: false
May 12 10:24:15.804: INFO: created pod pod-service-account-defaultsa-mountspec
May 12 10:24:15.804: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
May 12 10:24:15.830: INFO: created pod pod-service-account-mountsa-mountspec
May 12 10:24:15.830: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
May 12 10:24:15.995: INFO: created pod pod-service-account-nomountsa-mountspec
May 12 10:24:15.995: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
May 12 10:24:16.010: INFO: created pod pod-service-account-defaultsa-nomountspec
May 12 10:24:16.010: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
May 12 10:24:16.090: INFO: created pod pod-service-account-mountsa-nomountspec
May 12 10:24:16.090: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
May 12 10:24:16.148: INFO: created pod pod-service-account-nomountsa-nomountspec
May 12 10:24:16.148: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 12 10:24:16.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4522" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":288,"completed":25,"skipped":408,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 12 10:24:16.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 12 10:24:38.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-960" for this suite.
• [SLOW TEST:22.460 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should verify ResourceQuota with best effort scope. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":288,"completed":26,"skipped":423,"failed":0}
S
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 12 10:24:38.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
May 12 10:24:48.005: INFO: Successfully updated pod "labelsupdateac79f619-5183-4dd5-8dbc-4400b477a7fd"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 12 10:24:50.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8329" for this suite.
• [SLOW TEST:12.119 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":27,"skipped":424,"failed":0}
SSS
------------------------------
[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 12 10:24:50.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ExternalName to NodePort [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a service externalname-service with the type=ExternalName in namespace services-6245
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-6245
I0512 10:24:51.664546 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-6245, replica count: 2
I0512 10:24:54.714922 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0512 10:24:57.715174 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0512 10:25:00.715375 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0512 10:25:03.715568 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 12 10:25:03.715: INFO: Creating new exec pod
May 12 10:25:11.883: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6245 execpodgpzqg -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
May 12 10:25:12.236: INFO: stderr: "I0512 10:25:12.156429 420 log.go:172] (0xc00065c0b0) (0xc0006597c0) Create stream\nI0512 10:25:12.156471 420 log.go:172] (0xc00065c0b0) (0xc0006597c0) Stream added, broadcasting: 1\nI0512 10:25:12.158886 420 log.go:172] (0xc00065c0b0) Reply frame received for 1\nI0512 10:25:12.158937 420 log.go:172] (0xc00065c0b0) (0xc00054e500) Create stream\nI0512 10:25:12.158947 420 log.go:172] (0xc00065c0b0) (0xc00054e500) Stream added, broadcasting: 3\nI0512 10:25:12.159640 420 log.go:172] (0xc00065c0b0) Reply frame received for 3\nI0512 10:25:12.159667 420 log.go:172] (0xc00065c0b0) (0xc000519040) Create stream\nI0512 10:25:12.159678 420 log.go:172] (0xc00065c0b0) (0xc000519040) Stream added, broadcasting: 5\nI0512 10:25:12.160281 420 log.go:172] (0xc00065c0b0) Reply frame received for 5\nI0512 10:25:12.228341 420 log.go:172] (0xc00065c0b0) Data frame received for 5\nI0512 10:25:12.228470 420 log.go:172] (0xc000519040) (5) Data frame handling\nI0512 10:25:12.228516 420 log.go:172] (0xc000519040) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0512 10:25:12.228649 420 log.go:172] (0xc00065c0b0) Data frame received for 5\nI0512 10:25:12.228657 420 log.go:172] (0xc000519040) (5) Data frame handling\nI0512 10:25:12.228662 420 log.go:172] (0xc000519040) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0512 10:25:12.229042 420 log.go:172] (0xc00065c0b0) Data frame received for 3\nI0512 10:25:12.229065 420 log.go:172] (0xc00054e500) (3) Data frame handling\nI0512 10:25:12.229090 420 log.go:172] (0xc00065c0b0) Data frame received for 5\nI0512 10:25:12.229106 420 log.go:172] (0xc000519040) (5) Data frame handling\nI0512 10:25:12.231823 420 log.go:172] (0xc00065c0b0) Data frame received for 1\nI0512 10:25:12.231839 420 log.go:172] (0xc0006597c0) (1) Data frame handling\nI0512 10:25:12.231866 420 log.go:172] (0xc0006597c0) (1) Data frame sent\nI0512 10:25:12.231878 420 log.go:172] (0xc00065c0b0) (0xc0006597c0) Stream removed, broadcasting: 1\nI0512 10:25:12.231938 420 log.go:172] (0xc00065c0b0) Go away received\nI0512 10:25:12.232336 420 log.go:172] (0xc00065c0b0) (0xc0006597c0) Stream removed, broadcasting: 1\nI0512 10:25:12.232350 420 log.go:172] (0xc00065c0b0) (0xc00054e500) Stream removed, broadcasting: 3\nI0512 10:25:12.232358 420 log.go:172] (0xc00065c0b0) (0xc000519040) Stream removed, broadcasting: 5\n"
May 12 10:25:12.236: INFO: stdout: ""
May 12 10:25:12.237: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6245 execpodgpzqg -- /bin/sh -x -c nc -zv -t -w 2 10.99.33.72 80'
May 12 10:25:12.961: INFO: stderr: "I0512 10:25:12.893192 441 log.go:172] (0xc00003a580) (0xc000530460) Create stream\nI0512 10:25:12.893243 441 log.go:172] (0xc00003a580) (0xc000530460) Stream added, broadcasting: 1\nI0512 10:25:12.894855 441 log.go:172] (0xc00003a580) Reply frame received for 1\nI0512 10:25:12.894896 441 log.go:172] (0xc00003a580) (0xc0005060a0) Create stream\nI0512 10:25:12.894913 441 log.go:172] (0xc00003a580) (0xc0005060a0) Stream added, broadcasting: 3\nI0512 10:25:12.895530 441 log.go:172] (0xc00003a580) Reply frame received for 3\nI0512 10:25:12.895561 441 log.go:172] (0xc00003a580) (0xc0004fcc80) Create stream\nI0512 10:25:12.895573 441 log.go:172] (0xc00003a580) (0xc0004fcc80) Stream added, broadcasting: 5\nI0512 10:25:12.896179 441 log.go:172] (0xc00003a580) Reply frame received for 5\nI0512 10:25:12.954450 441 log.go:172] (0xc00003a580) Data frame received for 3\nI0512 10:25:12.954598 441 log.go:172] (0xc0005060a0) (3) Data frame handling\nI0512 10:25:12.954675 441 log.go:172] (0xc00003a580) Data frame received for 5\nI0512 10:25:12.954716 441 log.go:172] (0xc0004fcc80) (5) Data frame handling\nI0512 10:25:12.954746 441 log.go:172] (0xc0004fcc80) (5) Data frame sent\nI0512 10:25:12.954765 441 log.go:172] (0xc00003a580) Data frame received for 5\nI0512 10:25:12.954777 441 log.go:172] (0xc0004fcc80) (5) Data frame handling\n+ nc -zv -t -w 2 10.99.33.72 80\nConnection to 10.99.33.72 80 port [tcp/http] succeeded!\nI0512 10:25:12.955820 441 log.go:172] (0xc00003a580) Data frame received for 1\nI0512 10:25:12.955830 441 log.go:172] (0xc000530460) (1) Data frame handling\nI0512 10:25:12.955836 441 log.go:172] (0xc000530460) (1) Data frame sent\nI0512 10:25:12.955842 441 log.go:172] (0xc00003a580) (0xc000530460) Stream removed, broadcasting: 1\nI0512 10:25:12.955850 441 log.go:172] (0xc00003a580) Go away received\nI0512 10:25:12.956270 441 log.go:172] (0xc00003a580) (0xc000530460) Stream removed, broadcasting: 1\nI0512 10:25:12.956303 441 log.go:172] (0xc00003a580) (0xc0005060a0) Stream removed, broadcasting: 3\nI0512 10:25:12.956323 441 log.go:172] (0xc00003a580) (0xc0004fcc80) Stream removed, broadcasting: 5\n"
May 12 10:25:12.961: INFO: stdout: ""
May 12 10:25:12.961: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6245 execpodgpzqg -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 32033'
May 12 10:25:13.179: INFO: stderr: "I0512 10:25:13.123794 461 log.go:172] (0xc0006f4210) (0xc00052e820) Create stream\nI0512 10:25:13.123831 461 log.go:172] (0xc0006f4210) (0xc00052e820) Stream added, broadcasting: 1\nI0512 10:25:13.125565 461 log.go:172] (0xc0006f4210) Reply frame received for 1\nI0512 10:25:13.125592 461 log.go:172] (0xc0006f4210) (0xc0000dda40) Create stream\nI0512 10:25:13.125603 461 log.go:172] (0xc0006f4210) (0xc0000dda40) Stream added, broadcasting: 3\nI0512 10:25:13.126357 461 log.go:172] (0xc0006f4210) Reply frame received for 3\nI0512 10:25:13.126386 461 log.go:172] (0xc0006f4210) (0xc00023db80) Create stream\nI0512 10:25:13.126394 461 log.go:172] (0xc0006f4210) (0xc00023db80) Stream added, broadcasting: 5\nI0512 10:25:13.127079 461 log.go:172] (0xc0006f4210) Reply frame received for 5\nI0512 10:25:13.173105 461 log.go:172] (0xc0006f4210) Data frame received for 3\nI0512 10:25:13.173418 461 log.go:172] (0xc0000dda40) (3) Data frame handling\nI0512 10:25:13.173446 461 log.go:172] (0xc0006f4210) Data frame received for 5\nI0512 10:25:13.173454 461 log.go:172] (0xc00023db80) (5) Data frame handling\nI0512 10:25:13.173463 461 log.go:172] (0xc00023db80) (5) Data frame sent\nI0512 10:25:13.173474 461 log.go:172] (0xc0006f4210) Data frame received for 5\nI0512 10:25:13.173482 461 log.go:172] (0xc00023db80) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 32033\nConnection to 172.17.0.13 32033 port [tcp/32033] succeeded!\nI0512 10:25:13.175038 461 log.go:172] (0xc0006f4210) Data frame received for 1\nI0512 10:25:13.175064 461 log.go:172] (0xc00052e820) (1) Data frame handling\nI0512 10:25:13.175086 461 log.go:172] (0xc00052e820) (1) Data frame sent\nI0512 10:25:13.175103 461 log.go:172] (0xc0006f4210) (0xc00052e820) Stream removed, broadcasting: 1\nI0512 10:25:13.175117 461 log.go:172] (0xc0006f4210) Go away received\nI0512 10:25:13.175418 461 log.go:172] (0xc0006f4210) (0xc00052e820) Stream removed, broadcasting: 1\nI0512 10:25:13.175441 461 log.go:172] (0xc0006f4210) (0xc0000dda40) Stream removed, broadcasting: 3\nI0512 10:25:13.175456 461 log.go:172] (0xc0006f4210) (0xc00023db80) Stream removed, broadcasting: 5\n"
May 12 10:25:13.179: INFO: stdout: ""
May 12 10:25:13.179: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6245 execpodgpzqg -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 32033'
May 12 10:25:13.357: INFO: stderr: "I0512 10:25:13.299510 482 log.go:172] (0xc000ae6000) (0xc0008e8be0) Create stream\nI0512 10:25:13.299553 482 log.go:172] (0xc000ae6000) (0xc0008e8be0) Stream added, broadcasting: 1\nI0512 10:25:13.300820 482 log.go:172] (0xc000ae6000) Reply frame received for 1\nI0512 10:25:13.300841 482 log.go:172] (0xc000ae6000) (0xc0008e9b80) Create stream\nI0512 10:25:13.300848 482 log.go:172] (0xc000ae6000) (0xc0008e9b80) Stream added, broadcasting: 3\nI0512 10:25:13.301963 482 log.go:172] (0xc000ae6000) Reply frame received for 3\nI0512 10:25:13.301997 482 log.go:172] (0xc000ae6000) (0xc0008de460) Create stream\nI0512 10:25:13.302010 482 log.go:172] (0xc000ae6000) (0xc0008de460) Stream added, broadcasting: 5\nI0512 10:25:13.302684 482 log.go:172] (0xc000ae6000) Reply frame received for 5\nI0512 10:25:13.352241 482 log.go:172] (0xc000ae6000) Data frame received for 5\nI0512 10:25:13.352270 482 log.go:172] (0xc0008de460) (5) Data frame handling\nI0512 10:25:13.352300 482 log.go:172] (0xc0008de460) (5) Data frame sent\nI0512 10:25:13.352313 482 log.go:172] (0xc000ae6000) Data frame received for 5\nI0512 10:25:13.352324 482 log.go:172] (0xc0008de460) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 32033\nConnection to 172.17.0.12 32033 port [tcp/32033] succeeded!\nI0512 10:25:13.352348 482 log.go:172] (0xc000ae6000) Data frame received for 3\nI0512 10:25:13.352382 482 log.go:172] (0xc0008e9b80) (3) Data frame handling\nI0512 10:25:13.352449 482 log.go:172] (0xc0008de460) (5) Data frame sent\nI0512 10:25:13.352567 482 log.go:172] (0xc000ae6000) Data frame received for 5\nI0512 10:25:13.352594 482 log.go:172] (0xc0008de460) (5) Data frame handling\nI0512 10:25:13.353935 482 log.go:172] (0xc000ae6000) Data frame received for 1\nI0512 10:25:13.353952 482 log.go:172] (0xc0008e8be0) (1) Data frame handling\nI0512 10:25:13.353974 482 log.go:172] (0xc0008e8be0) (1) Data frame sent\nI0512 10:25:13.353990 482 log.go:172] (0xc000ae6000) (0xc0008e8be0) Stream removed, broadcasting: 1\nI0512 10:25:13.354001 482 log.go:172] (0xc000ae6000) Go away received\nI0512 10:25:13.354246 482 log.go:172] (0xc000ae6000) (0xc0008e8be0) Stream removed, broadcasting: 1\nI0512 10:25:13.354269 482 log.go:172] (0xc000ae6000) (0xc0008e9b80) Stream removed, broadcasting: 3\nI0512 10:25:13.354280 482 log.go:172] (0xc000ae6000) (0xc0008de460) Stream removed, broadcasting: 5\n"
May 12 10:25:13.357: INFO: stdout: ""
May 12 10:25:13.357: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 12 10:25:14.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6245" for this suite.
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
• [SLOW TEST:23.957 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should be able to change the type from ExternalName to NodePort [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":288,"completed":28,"skipped":427,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 12 10:25:14.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 12 10:25:15.957: INFO: The status of Pod test-webserver-16408d84-46bf-4e9a-b1d8-22a428b8cf31 is Pending, waiting for it to be Running (with Ready = true)
May 12 10:25:17.962: INFO: The status of Pod test-webserver-16408d84-46bf-4e9a-b1d8-22a428b8cf31 is Pending, waiting for it to be Running (with Ready = true)
May 12 10:25:20.359: INFO: The status of Pod test-webserver-16408d84-46bf-4e9a-b1d8-22a428b8cf31 is Pending, waiting for it to be Running (with Ready = true)
May 12 10:25:22.113: INFO: The status of Pod test-webserver-16408d84-46bf-4e9a-b1d8-22a428b8cf31 is Running (Ready = false)
May 12 10:25:24.293: INFO: The status of Pod test-webserver-16408d84-46bf-4e9a-b1d8-22a428b8cf31 is Running (Ready = false)
May 12 10:25:25.961: INFO: The status of Pod test-webserver-16408d84-46bf-4e9a-b1d8-22a428b8cf31 is Running (Ready = false)
May 12 10:25:27.960: INFO: The status of Pod test-webserver-16408d84-46bf-4e9a-b1d8-22a428b8cf31 is Running (Ready = false)
May 12 10:25:29.960: INFO: The status of Pod test-webserver-16408d84-46bf-4e9a-b1d8-22a428b8cf31 is Running (Ready = false)
May 12 10:25:32.065: INFO: The status of Pod test-webserver-16408d84-46bf-4e9a-b1d8-22a428b8cf31 is Running (Ready = false)
May 12 10:25:34.126: INFO: The status of Pod test-webserver-16408d84-46bf-4e9a-b1d8-22a428b8cf31 is Running (Ready = false)
May 12 10:25:35.962: INFO: The status of Pod test-webserver-16408d84-46bf-4e9a-b1d8-22a428b8cf31 is Running (Ready = false)
May 12 10:25:37.961: INFO: The status of Pod test-webserver-16408d84-46bf-4e9a-b1d8-22a428b8cf31 is Running (Ready = false)
May 12 10:25:39.960: INFO: The status of Pod test-webserver-16408d84-46bf-4e9a-b1d8-22a428b8cf31 is Running (Ready = false)
May 12 10:25:42.029: INFO: The status of Pod test-webserver-16408d84-46bf-4e9a-b1d8-22a428b8cf31 is Running (Ready = true)
May 12 10:25:42.031: INFO: Container started at 2020-05-12 10:25:19 +0000 UTC, pod became ready at 2020-05-12 10:25:41 +0000 UTC
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 12 10:25:42.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2639" for this suite.
• [SLOW TEST:27.204 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":288,"completed":29,"skipped":447,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 12 10:25:42.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 12 10:25:52.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-5299" for this suite.
• [SLOW TEST:11.241 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":288,"completed":30,"skipped":496,"failed":0}
S
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 12 10:25:53.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0512 10:26:35.260874 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 12 10:26:35.260: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 12 10:26:35.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2970" for this suite.
• [SLOW TEST:41.985 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":288,"completed":31,"skipped":497,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 12 10:26:35.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 12 10:26:35.590: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7a1165d6-4844-41eb-a48c-616baf4fe861" in namespace "projected-8933" to be "Succeeded or Failed"
May 12 10:26:35.594: INFO: Pod "downwardapi-volume-7a1165d6-4844-41eb-a48c-616baf4fe861": Phase="Pending", Reason="", readiness=false. Elapsed: 4.12528ms
May 12 10:26:37.598: INFO: Pod "downwardapi-volume-7a1165d6-4844-41eb-a48c-616baf4fe861": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008002894s
May 12 10:26:39.602: INFO: Pod "downwardapi-volume-7a1165d6-4844-41eb-a48c-616baf4fe861": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01238759s
May 12 10:26:41.635: INFO: Pod "downwardapi-volume-7a1165d6-4844-41eb-a48c-616baf4fe861": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.045136331s
STEP: Saw pod success
May 12 10:26:41.635: INFO: Pod "downwardapi-volume-7a1165d6-4844-41eb-a48c-616baf4fe861" satisfied condition "Succeeded or Failed"
May 12 10:26:41.654: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-7a1165d6-4844-41eb-a48c-616baf4fe861 container client-container:
STEP: delete the pod
May 12 10:26:41.695: INFO: Waiting for pod downwardapi-volume-7a1165d6-4844-41eb-a48c-616baf4fe861 to disappear
May 12 10:26:41.707: INFO: Pod downwardapi-volume-7a1165d6-4844-41eb-a48c-616baf4fe861 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 12 10:26:41.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8933" for this suite.
• [SLOW TEST:6.448 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":32,"skipped":499,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 12 10:26:41.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
May 12 10:26:43.033: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
May 12 10:26:45.377: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876003, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876003, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876003, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876002, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 10:26:47.844: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876003, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876003, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876003, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876002, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 10:26:49.421: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876003, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876003, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876003, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876002, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 12 10:26:52.773: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 12 10:26:52.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 12 10:26:54.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-7882" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
• [SLOW TEST:12.593 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":288,"completed":33,"skipped":509,"failed":0}
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 12 10:26:54.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 12 10:26:54.530: INFO: Create a RollingUpdate DaemonSet
May 12 10:26:54.551: INFO: Check that daemon pods launch on every node of the cluster
May 12 10:26:54.567: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 10:26:54.589: INFO: Number of nodes with available pods: 0
May 12 10:26:54.589: INFO: Node latest-worker is running more than one daemon pod
May 12 10:26:55.976: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 10:26:55.978: INFO: Number of nodes with available pods: 0
May 12 10:26:55.978: INFO: Node latest-worker is running more than one daemon pod
May 12 10:26:56.635: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 10:26:56.637: INFO: Number of nodes with available pods: 0
May 12 10:26:56.637: INFO: Node latest-worker is running more than one daemon pod
May 12 10:26:57.753: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 10:26:57.757: INFO: Number of nodes with available pods: 0
May 12 10:26:57.757: INFO: Node latest-worker is running more than one daemon pod
May 12 10:26:58.627: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 10:26:58.638: INFO: Number of nodes with available pods: 0
May 12 10:26:58.638: INFO: Node latest-worker is running more than one daemon pod
May 12 10:26:59.636: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 10:26:59.639: INFO: Number of nodes with available pods: 2
May 12 10:26:59.639: INFO: Number of running nodes: 2, number of available pods: 2
May 12 10:26:59.639: INFO: Update the DaemonSet to trigger a rollout
May 12 10:26:59.814: INFO: Updating DaemonSet daemon-set
May 12 10:27:05.817: INFO: Roll back the DaemonSet before rollout is complete
May 12 10:27:05.880: INFO: Updating DaemonSet daemon-set
May 12 10:27:05.880: INFO: Make sure DaemonSet rollback is complete
May 12 10:27:06.042: INFO: Wrong image for pod: daemon-set-vscsl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May 12 10:27:06.042: INFO: Pod daemon-set-vscsl is not available
May 12 10:27:06.045: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 10:27:07.258: INFO: Wrong image for pod: daemon-set-vscsl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May 12 10:27:07.258: INFO: Pod daemon-set-vscsl is not available
May 12 10:27:07.261: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 10:27:08.217: INFO: Wrong image for pod: daemon-set-vscsl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May 12 10:27:08.217: INFO: Pod daemon-set-vscsl is not available
May 12 10:27:08.220: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 10:27:09.182: INFO: Wrong image for pod: daemon-set-vscsl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May 12 10:27:09.182: INFO: Pod daemon-set-vscsl is not available May 12 10:27:09.348: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:27:10.150: INFO: Pod daemon-set-ct2dq is not available May 12 10:27:10.153: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6748, will wait for the garbage collector to delete the pods May 12 10:27:10.388: INFO: Deleting DaemonSet.extensions daemon-set took: 162.100412ms May 12 10:27:11.288: INFO: Terminating DaemonSet.extensions daemon-set pods took: 900.258228ms May 12 10:27:15.192: INFO: Number of nodes with available pods: 0 May 12 10:27:15.192: INFO: Number of running nodes: 0, number of available pods: 0 May 12 10:27:15.198: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6748/daemonsets","resourceVersion":"3776524"},"items":null} May 12 10:27:15.201: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6748/pods","resourceVersion":"3776524"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:27:15.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6748" for this suite. • [SLOW TEST:20.910 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":288,"completed":34,"skipped":515,"failed":0} S ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:27:15.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:27:23.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-538" for this suite. STEP: Destroying namespace "nsdeletetest-596" for this suite. May 12 10:27:23.298: INFO: Namespace nsdeletetest-596 was already deleted STEP: Destroying namespace "nsdeletetest-8967" for this suite. • [SLOW TEST:8.082 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":288,"completed":35,"skipped":516,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:27:23.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 12 10:27:23.336: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config version' May 12 10:27:23.481: INFO: stderr: "" May 12 10:27:23.481: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.3.35+3416442e4b7eeb\", GitCommit:\"3416442e4b7eebfce360f5b7468c6818d3e882f8\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:24:24Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-04-28T05:35:31Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:27:23.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7792" for this suite. 
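------------------------------
The version check above shells out to kubectl, but the same data is served by the apiserver's /version endpoint and is reachable through client-go's discovery client. A minimal sketch (not the suite's own code), assuming client-go v0.18.x and the kubeconfig path this run logs with ">>> kubeConfig:":

package main

import (
    "fmt"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Build a client from the same kubeconfig the suite uses.
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    clientset := kubernetes.NewForConfigOrDie(config)

    // ServerVersion queries /version, the "Server Version" half of `kubectl version`.
    info, err := clientset.Discovery().ServerVersion()
    if err != nil {
        panic(err)
    }
    fmt.Printf("Server: %s (commit %s, built %s)\n", info.GitVersion, info.GitCommit, info.BuildDate)
}
------------------------------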
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":288,"completed":36,"skipped":583,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:27:23.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC May 12 10:27:23.602: INFO: namespace kubectl-3750 May 12 10:27:23.602: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3750' May 12 10:27:23.964: INFO: stderr: "" May 12 10:27:23.964: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 12 10:27:24.994: INFO: Selector matched 1 pods for map[app:agnhost] May 12 10:27:24.994: INFO: Found 0 / 1 May 12 10:27:25.968: INFO: Selector matched 1 pods for map[app:agnhost] May 12 10:27:25.968: INFO: Found 0 / 1 May 12 10:27:27.010: INFO: Selector matched 1 pods for map[app:agnhost] May 12 10:27:27.010: INFO: Found 0 / 1 May 12 10:27:27.969: INFO: Selector matched 1 pods for map[app:agnhost] May 12 10:27:27.969: INFO: Found 0 / 1 May 12 10:27:28.976: INFO: Selector matched 1 pods for map[app:agnhost] May 12 10:27:28.976: INFO: Found 1 / 1 May 12 10:27:28.976: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 12 10:27:28.978: INFO: Selector matched 1 pods for map[app:agnhost] May 12 10:27:28.978: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 12 10:27:28.978: INFO: wait on agnhost-master startup in kubectl-3750 May 12 10:27:28.978: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs agnhost-master-8sclh agnhost-master --namespace=kubectl-3750' May 12 10:27:29.119: INFO: stderr: "" May 12 10:27:29.119: INFO: stdout: "Paused\n" STEP: exposing RC May 12 10:27:29.119: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-3750' May 12 10:27:29.257: INFO: stderr: "" May 12 10:27:29.257: INFO: stdout: "service/rm2 exposed\n" May 12 10:27:29.268: INFO: Service rm2 in namespace kubectl-3750 found. STEP: exposing service May 12 10:27:31.319: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-3750' May 12 10:27:31.524: INFO: stderr: "" May 12 10:27:31.524: INFO: stdout: "service/rm3 exposed\n" May 12 10:27:31.531: INFO: Service rm3 in namespace kubectl-3750 found. 
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:27:33.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3750" for this suite. • [SLOW TEST:10.061 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1224 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":288,"completed":37,"skipped":589,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:27:33.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 12 10:27:44.064: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 12 10:27:44.114: INFO: Pod pod-with-poststart-http-hook still exists May 12 10:27:46.114: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 12 10:27:46.150: INFO: Pod pod-with-poststart-http-hook still exists May 12 10:27:48.114: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 12 10:27:48.156: INFO: Pod pod-with-poststart-http-hook still exists May 12 10:27:50.114: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 12 10:27:50.118: INFO: Pod pod-with-poststart-http-hook still exists May 12 10:27:52.114: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 12 10:27:52.132: INFO: Pod pod-with-poststart-http-hook still exists May 12 10:27:54.114: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 12 10:27:54.118: INFO: Pod pod-with-poststart-http-hook still exists May 12 10:27:56.114: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 12 10:27:56.120: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:27:56.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8906" for this suite. 
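------------------------------
The pod under test above couples a postStart HTTP hook to a separate handler pod created in the BeforeEach ("create the container to handle the HTTPGet hook request"). A minimal sketch of such a pod spec using client-go v0.18.x types (where the hook type is still v1.Handler); the image, host IP, path, and port below are illustrative stand-ins, not the suite's values:

package main

import (
    "context"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    clientset := kubernetes.NewForConfigOrDie(config)

    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
        Spec: v1.PodSpec{
            Containers: []v1.Container{{
                Name:  "pod-with-poststart-http-hook",
                Image: "k8s.gcr.io/pause:3.2", // illustrative
                Lifecycle: &v1.Lifecycle{
                    // The kubelet issues this GET right after the container
                    // starts; the hook target records the hit, which is what
                    // "check poststart hook" polls for.
                    PostStart: &v1.Handler{
                        HTTPGet: &v1.HTTPGetAction{
                            Path: "/echo?msg=poststart", // illustrative
                            Host: "10.244.2.100",        // illustrative handler-pod IP
                            Port: intstr.FromInt(8080),
                        },
                    },
                },
            }},
        },
    }
    if _, err := clientset.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}
------------------------------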
• [SLOW TEST:22.578 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":288,"completed":38,"skipped":592,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:27:56.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-3816 STEP: creating a selector STEP: Creating the service pods in kubernetes May 12 10:27:56.549: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 12 10:27:56.731: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 12 10:27:58.886: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 12 10:28:01.043: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 12 10:28:03.312: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 12 10:28:04.743: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 12 10:28:06.950: INFO: The status of Pod netserver-0 is Running (Ready = false) May 12 10:28:08.736: INFO: The status of Pod netserver-0 is Running (Ready = false) May 12 10:28:10.736: INFO: The status of Pod netserver-0 is Running (Ready = false) May 12 10:28:13.060: INFO: The status of Pod netserver-0 is Running (Ready = false) May 12 10:28:15.157: INFO: The status of Pod netserver-0 is Running (Ready = true) May 12 10:28:15.230: INFO: The status of Pod netserver-1 is Running (Ready = false) May 12 10:28:17.461: INFO: The status of Pod netserver-1 is Running (Ready = false) May 12 10:28:19.432: INFO: The status of Pod netserver-1 is Running (Ready = false) May 12 10:28:21.238: INFO: The status of Pod netserver-1 is Running (Ready = false) May 12 10:28:23.484: INFO: The status of Pod netserver-1 is Running (Ready = false) May 12 10:28:25.234: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 12 10:28:33.506: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.190 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3816 PodName:host-test-container-pod 
ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:28:33.506: INFO: >>> kubeConfig: /root/.kube/config I0512 10:28:33.537011 7 log.go:172] (0xc002f90580) (0xc001845e00) Create stream I0512 10:28:33.537050 7 log.go:172] (0xc002f90580) (0xc001845e00) Stream added, broadcasting: 1 I0512 10:28:33.538835 7 log.go:172] (0xc002f90580) Reply frame received for 1 I0512 10:28:33.538871 7 log.go:172] (0xc002f90580) (0xc00192af00) Create stream I0512 10:28:33.538885 7 log.go:172] (0xc002f90580) (0xc00192af00) Stream added, broadcasting: 3 I0512 10:28:33.539720 7 log.go:172] (0xc002f90580) Reply frame received for 3 I0512 10:28:33.539750 7 log.go:172] (0xc002f90580) (0xc001d43d60) Create stream I0512 10:28:33.539761 7 log.go:172] (0xc002f90580) (0xc001d43d60) Stream added, broadcasting: 5 I0512 10:28:33.540509 7 log.go:172] (0xc002f90580) Reply frame received for 5 I0512 10:28:34.598081 7 log.go:172] (0xc002f90580) Data frame received for 5 I0512 10:28:34.598126 7 log.go:172] (0xc001d43d60) (5) Data frame handling I0512 10:28:34.598150 7 log.go:172] (0xc002f90580) Data frame received for 3 I0512 10:28:34.598164 7 log.go:172] (0xc00192af00) (3) Data frame handling I0512 10:28:34.598192 7 log.go:172] (0xc00192af00) (3) Data frame sent I0512 10:28:34.598220 7 log.go:172] (0xc002f90580) Data frame received for 3 I0512 10:28:34.598232 7 log.go:172] (0xc00192af00) (3) Data frame handling I0512 10:28:34.599280 7 log.go:172] (0xc002f90580) Data frame received for 1 I0512 10:28:34.599291 7 log.go:172] (0xc001845e00) (1) Data frame handling I0512 10:28:34.599300 7 log.go:172] (0xc001845e00) (1) Data frame sent I0512 10:28:34.599327 7 log.go:172] (0xc002f90580) (0xc001845e00) Stream removed, broadcasting: 1 I0512 10:28:34.599364 7 log.go:172] (0xc002f90580) Go away received I0512 10:28:34.599413 7 log.go:172] (0xc002f90580) (0xc001845e00) Stream removed, broadcasting: 1 I0512 10:28:34.599430 7 log.go:172] (0xc002f90580) (0xc00192af00) Stream removed, broadcasting: 3 I0512 10:28:34.599443 7 log.go:172] (0xc002f90580) (0xc001d43d60) Stream removed, broadcasting: 5 May 12 10:28:34.599: INFO: Found all expected endpoints: [netserver-0] May 12 10:28:34.601: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.26 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3816 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:28:34.601: INFO: >>> kubeConfig: /root/.kube/config I0512 10:28:34.625296 7 log.go:172] (0xc002a7f970) (0xc0018ba780) Create stream I0512 10:28:34.625323 7 log.go:172] (0xc002a7f970) (0xc0018ba780) Stream added, broadcasting: 1 I0512 10:28:34.626714 7 log.go:172] (0xc002a7f970) Reply frame received for 1 I0512 10:28:34.626748 7 log.go:172] (0xc002a7f970) (0xc00192b0e0) Create stream I0512 10:28:34.626762 7 log.go:172] (0xc002a7f970) (0xc00192b0e0) Stream added, broadcasting: 3 I0512 10:28:34.627461 7 log.go:172] (0xc002a7f970) Reply frame received for 3 I0512 10:28:34.627486 7 log.go:172] (0xc002a7f970) (0xc0014ca0a0) Create stream I0512 10:28:34.627496 7 log.go:172] (0xc002a7f970) (0xc0014ca0a0) Stream added, broadcasting: 5 I0512 10:28:34.628064 7 log.go:172] (0xc002a7f970) Reply frame received for 5 I0512 10:28:35.684551 7 log.go:172] (0xc002a7f970) Data frame received for 3 I0512 10:28:35.684584 7 log.go:172] (0xc00192b0e0) (3) Data frame handling I0512 10:28:35.684603 7 log.go:172] (0xc00192b0e0) (3) Data frame sent I0512 10:28:35.685330 7 
log.go:172] (0xc002a7f970) Data frame received for 5 I0512 10:28:35.685364 7 log.go:172] (0xc0014ca0a0) (5) Data frame handling I0512 10:28:35.685396 7 log.go:172] (0xc002a7f970) Data frame received for 3 I0512 10:28:35.685415 7 log.go:172] (0xc00192b0e0) (3) Data frame handling I0512 10:28:35.686437 7 log.go:172] (0xc002a7f970) Data frame received for 1 I0512 10:28:35.686464 7 log.go:172] (0xc0018ba780) (1) Data frame handling I0512 10:28:35.686481 7 log.go:172] (0xc0018ba780) (1) Data frame sent I0512 10:28:35.686502 7 log.go:172] (0xc002a7f970) (0xc0018ba780) Stream removed, broadcasting: 1 I0512 10:28:35.686540 7 log.go:172] (0xc002a7f970) Go away received I0512 10:28:35.686577 7 log.go:172] (0xc002a7f970) (0xc0018ba780) Stream removed, broadcasting: 1 I0512 10:28:35.686613 7 log.go:172] (0xc002a7f970) (0xc00192b0e0) Stream removed, broadcasting: 3 I0512 10:28:35.686641 7 log.go:172] (0xc002a7f970) (0xc0014ca0a0) Stream removed, broadcasting: 5 May 12 10:28:35.686: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:28:35.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3816" for this suite. • [SLOW TEST:39.567 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":39,"skipped":607,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:28:35.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 12 10:28:37.638: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1b95c20a-0db1-4791-8236-97ebda3f2760" in namespace "projected-3396" to be "Succeeded or Failed" May 12 10:28:37.688: INFO: Pod "downwardapi-volume-1b95c20a-0db1-4791-8236-97ebda3f2760": Phase="Pending", Reason="", readiness=false. 
Elapsed: 49.162279ms May 12 10:28:39.821: INFO: Pod "downwardapi-volume-1b95c20a-0db1-4791-8236-97ebda3f2760": Phase="Pending", Reason="", readiness=false. Elapsed: 2.182631618s May 12 10:28:41.856: INFO: Pod "downwardapi-volume-1b95c20a-0db1-4791-8236-97ebda3f2760": Phase="Pending", Reason="", readiness=false. Elapsed: 4.217401923s May 12 10:28:43.931: INFO: Pod "downwardapi-volume-1b95c20a-0db1-4791-8236-97ebda3f2760": Phase="Running", Reason="", readiness=true. Elapsed: 6.29221908s May 12 10:28:46.126: INFO: Pod "downwardapi-volume-1b95c20a-0db1-4791-8236-97ebda3f2760": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.487452893s STEP: Saw pod success May 12 10:28:46.126: INFO: Pod "downwardapi-volume-1b95c20a-0db1-4791-8236-97ebda3f2760" satisfied condition "Succeeded or Failed" May 12 10:28:46.174: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-1b95c20a-0db1-4791-8236-97ebda3f2760 container client-container: STEP: delete the pod May 12 10:28:46.695: INFO: Waiting for pod downwardapi-volume-1b95c20a-0db1-4791-8236-97ebda3f2760 to disappear May 12 10:28:46.874: INFO: Pod downwardapi-volume-1b95c20a-0db1-4791-8236-97ebda3f2760 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:28:46.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3396" for this suite. • [SLOW TEST:11.264 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":40,"skipped":616,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:28:46.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 12 10:28:47.614: INFO: Waiting up to 5m0s for pod "downward-api-41d9f9a4-e67d-4c46-b12c-b10da09467cb" in namespace "downward-api-1091" to be "Succeeded or Failed" May 12 10:28:47.677: INFO: Pod "downward-api-41d9f9a4-e67d-4c46-b12c-b10da09467cb": Phase="Pending", Reason="", readiness=false. Elapsed: 62.860004ms May 12 10:28:49.682: INFO: Pod "downward-api-41d9f9a4-e67d-4c46-b12c-b10da09467cb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.067588183s May 12 10:28:51.726: INFO: Pod "downward-api-41d9f9a4-e67d-4c46-b12c-b10da09467cb": Phase="Running", Reason="", readiness=true. Elapsed: 4.111594591s May 12 10:28:53.730: INFO: Pod "downward-api-41d9f9a4-e67d-4c46-b12c-b10da09467cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.115345369s STEP: Saw pod success May 12 10:28:53.730: INFO: Pod "downward-api-41d9f9a4-e67d-4c46-b12c-b10da09467cb" satisfied condition "Succeeded or Failed" May 12 10:28:53.732: INFO: Trying to get logs from node latest-worker pod downward-api-41d9f9a4-e67d-4c46-b12c-b10da09467cb container dapi-container: STEP: delete the pod May 12 10:28:53.794: INFO: Waiting for pod downward-api-41d9f9a4-e67d-4c46-b12c-b10da09467cb to disappear May 12 10:28:53.844: INFO: Pod downward-api-41d9f9a4-e67d-4c46-b12c-b10da09467cb no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:28:53.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1091" for this suite. • [SLOW TEST:6.891 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":288,"completed":41,"skipped":634,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:28:53.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod running STEP: Getting the pod STEP: Reading file content from the nginx-container May 12 10:29:06.739: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-5805 PodName:pod-sharedvolume-026ff94e-34fc-47f9-8839-9f64e3152525 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:29:06.739: INFO: >>> kubeConfig: /root/.kube/config I0512 10:29:06.763806 7 log.go:172] (0xc005796580) (0xc00213a5a0) Create stream I0512 10:29:06.763834 7 log.go:172] (0xc005796580) (0xc00213a5a0) Stream added, broadcasting: 1 I0512 10:29:06.765814 7 log.go:172] (0xc005796580) Reply frame received for 1 I0512 10:29:06.765853 7 log.go:172] (0xc005796580) (0xc00258d9a0) Create stream I0512 10:29:06.765865 7 log.go:172] (0xc005796580) (0xc00258d9a0) Stream added, broadcasting: 3 I0512 10:29:06.766822 7 log.go:172] (0xc005796580) Reply frame received for 3 I0512 10:29:06.766857 7 log.go:172] (0xc005796580)
(0xc00258db80) Create stream I0512 10:29:06.766872 7 log.go:172] (0xc005796580) (0xc00258db80) Stream added, broadcasting: 5 I0512 10:29:06.767721 7 log.go:172] (0xc005796580) Reply frame received for 5 I0512 10:29:06.842824 7 log.go:172] (0xc005796580) Data frame received for 3 I0512 10:29:06.842855 7 log.go:172] (0xc00258d9a0) (3) Data frame handling I0512 10:29:06.842881 7 log.go:172] (0xc00258d9a0) (3) Data frame sent I0512 10:29:06.842891 7 log.go:172] (0xc005796580) Data frame received for 3 I0512 10:29:06.842897 7 log.go:172] (0xc00258d9a0) (3) Data frame handling I0512 10:29:06.842971 7 log.go:172] (0xc005796580) Data frame received for 5 I0512 10:29:06.842995 7 log.go:172] (0xc00258db80) (5) Data frame handling I0512 10:29:06.844186 7 log.go:172] (0xc005796580) Data frame received for 1 I0512 10:29:06.844206 7 log.go:172] (0xc00213a5a0) (1) Data frame handling I0512 10:29:06.844225 7 log.go:172] (0xc00213a5a0) (1) Data frame sent I0512 10:29:06.844244 7 log.go:172] (0xc005796580) (0xc00213a5a0) Stream removed, broadcasting: 1 I0512 10:29:06.844277 7 log.go:172] (0xc005796580) Go away received I0512 10:29:06.844307 7 log.go:172] (0xc005796580) (0xc00213a5a0) Stream removed, broadcasting: 1 I0512 10:29:06.844329 7 log.go:172] (0xc005796580) (0xc00258d9a0) Stream removed, broadcasting: 3 I0512 10:29:06.844351 7 log.go:172] (0xc005796580) (0xc00258db80) Stream removed, broadcasting: 5 May 12 10:29:06.844: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:29:06.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5805" for this suite. • [SLOW TEST:13.083 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":288,"completed":42,"skipped":646,"failed":0} SSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:29:06.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:29:08.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-5873" for this suite. 
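------------------------------
"lease API should be available" exercises the coordination.k8s.io/v1 group end to end. A minimal create-and-read sketch with client-go v0.18.x; the lease name, namespace, and holder identity here are illustrative, not the suite's:

package main

import (
    "context"
    "fmt"

    coordinationv1 "k8s.io/api/coordination/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/utils/pointer"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    clientset := kubernetes.NewForConfigOrDie(config)

    lease := &coordinationv1.Lease{
        ObjectMeta: metav1.ObjectMeta{Name: "demo-lease"},
        Spec: coordinationv1.LeaseSpec{
            HolderIdentity:       pointer.StringPtr("demo-holder"),
            LeaseDurationSeconds: pointer.Int32Ptr(30),
        },
    }
    created, err := clientset.CoordinationV1().Leases("default").Create(context.TODO(), lease, metav1.CreateOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Println("lease held by", *created.Spec.HolderIdentity)
}
------------------------------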
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":288,"completed":43,"skipped":650,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:29:08.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 12 10:29:08.955: INFO: Waiting up to 5m0s for pod "pod-48e43cae-fcb8-4716-97ef-4c037a8f5a97" in namespace "emptydir-814" to be "Succeeded or Failed" May 12 10:29:09.038: INFO: Pod "pod-48e43cae-fcb8-4716-97ef-4c037a8f5a97": Phase="Pending", Reason="", readiness=false. Elapsed: 83.929139ms May 12 10:29:11.175: INFO: Pod "pod-48e43cae-fcb8-4716-97ef-4c037a8f5a97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220250104s May 12 10:29:13.180: INFO: Pod "pod-48e43cae-fcb8-4716-97ef-4c037a8f5a97": Phase="Pending", Reason="", readiness=false. Elapsed: 4.225598625s May 12 10:29:15.355: INFO: Pod "pod-48e43cae-fcb8-4716-97ef-4c037a8f5a97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.400778623s STEP: Saw pod success May 12 10:29:15.355: INFO: Pod "pod-48e43cae-fcb8-4716-97ef-4c037a8f5a97" satisfied condition "Succeeded or Failed" May 12 10:29:15.358: INFO: Trying to get logs from node latest-worker2 pod pod-48e43cae-fcb8-4716-97ef-4c037a8f5a97 container test-container: STEP: delete the pod May 12 10:29:16.283: INFO: Waiting for pod pod-48e43cae-fcb8-4716-97ef-4c037a8f5a97 to disappear May 12 10:29:16.653: INFO: Pod pod-48e43cae-fcb8-4716-97ef-4c037a8f5a97 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:29:16.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-814" for this suite. 
• [SLOW TEST:8.414 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":44,"skipped":657,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:29:17.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command May 12 10:29:18.280: INFO: Waiting up to 5m0s for pod "var-expansion-a1dc0bf4-9940-410c-b912-18c70c908dbd" in namespace "var-expansion-8893" to be "Succeeded or Failed" May 12 10:29:18.384: INFO: Pod "var-expansion-a1dc0bf4-9940-410c-b912-18c70c908dbd": Phase="Pending", Reason="", readiness=false. Elapsed: 103.832518ms May 12 10:29:20.410: INFO: Pod "var-expansion-a1dc0bf4-9940-410c-b912-18c70c908dbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129734963s May 12 10:29:22.421: INFO: Pod "var-expansion-a1dc0bf4-9940-410c-b912-18c70c908dbd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.140438378s May 12 10:29:25.109: INFO: Pod "var-expansion-a1dc0bf4-9940-410c-b912-18c70c908dbd": Phase="Running", Reason="", readiness=true. Elapsed: 6.82925545s May 12 10:29:27.112: INFO: Pod "var-expansion-a1dc0bf4-9940-410c-b912-18c70c908dbd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.831890688s STEP: Saw pod success May 12 10:29:27.112: INFO: Pod "var-expansion-a1dc0bf4-9940-410c-b912-18c70c908dbd" satisfied condition "Succeeded or Failed" May 12 10:29:27.114: INFO: Trying to get logs from node latest-worker2 pod var-expansion-a1dc0bf4-9940-410c-b912-18c70c908dbd container dapi-container: STEP: delete the pod May 12 10:29:27.758: INFO: Waiting for pod var-expansion-a1dc0bf4-9940-410c-b912-18c70c908dbd to disappear May 12 10:29:27.784: INFO: Pod var-expansion-a1dc0bf4-9940-410c-b912-18c70c908dbd no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:29:27.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8893" for this suite. 
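------------------------------
The substitution verified above is kubelet-side $(VAR) expansion: references to declared env vars in a container's command and args are expanded before the process is exec'd, with no shell involved. A minimal sketch with client-go v0.18.x; names and image are illustrative:

package main

import (
    "context"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    clientset := kubernetes.NewForConfigOrDie(config)

    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
        Spec: v1.PodSpec{
            RestartPolicy: v1.RestartPolicyNever,
            Containers: []v1.Container{{
                Name:  "dapi-container",
                Image: "docker.io/library/busybox:1.29", // illustrative
                Env:   []v1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
                // The kubelet rewrites $(TEST_VAR) to "test-value" before the
                // container starts; asserting on that output is the test's point.
                Command: []string{"sh", "-c", "echo $(TEST_VAR)"},
            }},
        },
    }
    if _, err := clientset.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}
------------------------------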
• [SLOW TEST:10.573 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":288,"completed":45,"skipped":685,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:29:27.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-750, will wait for the garbage collector to delete the pods May 12 10:29:38.301: INFO: Deleting Job.batch foo took: 6.778672ms May 12 10:29:39.001: INFO: Terminating Job.batch foo pods took: 700.269952ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:30:13.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-750" for this suite. 
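------------------------------
The deletion above ("will wait for the garbage collector to delete the pods") corresponds to deleting the Job with a non-orphaning propagation policy rather than cascading client-side. A minimal sketch with client-go v0.18.x, reusing the job name and namespace from this run's log:

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    clientset := kubernetes.NewForConfigOrDie(config)

    // Foreground propagation holds the Job's final removal until the
    // garbage collector has deleted its pods, matching the wait seen above.
    policy := metav1.DeletePropagationForeground
    if err := clientset.BatchV1().Jobs("job-750").Delete(context.TODO(), "foo",
        metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
        panic(err)
    }
}
------------------------------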
• [SLOW TEST:46.049 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":288,"completed":46,"skipped":697,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:30:13.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions May 12 10:30:14.152: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config api-versions' May 12 10:30:14.430: INFO: stderr: "" May 12 10:30:14.430: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:30:14.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8682" for this suite. 
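------------------------------
The api-versions check has a direct client-go equivalent via the discovery client; "v1" shows up as the legacy core group, whose group name is empty. A minimal sketch, assuming client-go v0.18.x:

package main

import (
    "fmt"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    clientset := kubernetes.NewForConfigOrDie(config)

    // ServerGroups lists every API group the server advertises;
    // `kubectl api-versions` prints the flattened groupVersion strings.
    groups, err := clientset.Discovery().ServerGroups()
    if err != nil {
        panic(err)
    }
    for _, g := range groups.Groups {
        for _, v := range g.Versions {
            fmt.Println(v.GroupVersion) // the core group prints as plain "v1"
        }
    }
}
------------------------------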
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":288,"completed":47,"skipped":754,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:30:14.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1523 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 12 10:30:15.305: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-48' May 12 10:30:15.396: INFO: stderr: "" May 12 10:30:15.396: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1528 May 12 10:30:15.481: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-48' May 12 10:30:24.944: INFO: stderr: "" May 12 10:30:24.944: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:30:24.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-48" for this suite. 
• [SLOW TEST:10.496 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1519 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":288,"completed":48,"skipped":761,"failed":0} SSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:30:24.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 12 10:30:41.377: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6622 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:30:41.377: INFO: >>> kubeConfig: /root/.kube/config I0512 10:30:41.417377 7 log.go:172] (0xc0057964d0) (0xc00209b680) Create stream I0512 10:30:41.417404 7 log.go:172] (0xc0057964d0) (0xc00209b680) Stream added, broadcasting: 1 I0512 10:30:41.419305 7 log.go:172] (0xc0057964d0) Reply frame received for 1 I0512 10:30:41.419345 7 log.go:172] (0xc0057964d0) (0xc00213aaa0) Create stream I0512 10:30:41.419360 7 log.go:172] (0xc0057964d0) (0xc00213aaa0) Stream added, broadcasting: 3 I0512 10:30:41.420358 7 log.go:172] (0xc0057964d0) Reply frame received for 3 I0512 10:30:41.420397 7 log.go:172] (0xc0057964d0) (0xc001c1d400) Create stream I0512 10:30:41.420411 7 log.go:172] (0xc0057964d0) (0xc001c1d400) Stream added, broadcasting: 5 I0512 10:30:41.421553 7 log.go:172] (0xc0057964d0) Reply frame received for 5 I0512 10:30:41.510412 7 log.go:172] (0xc0057964d0) Data frame received for 5 I0512 10:30:41.510436 7 log.go:172] (0xc001c1d400) (5) Data frame handling I0512 10:30:41.510455 7 log.go:172] (0xc0057964d0) Data frame received for 3 I0512 10:30:41.510478 7 log.go:172] (0xc00213aaa0) (3) Data frame handling I0512 10:30:41.510501 7 log.go:172] (0xc00213aaa0) (3) Data frame sent I0512 10:30:41.510514 7 log.go:172] (0xc0057964d0) Data frame received for 3 I0512 10:30:41.510522 7 log.go:172] (0xc00213aaa0) (3) Data frame handling I0512 10:30:41.511802 7 log.go:172] (0xc0057964d0) Data frame received for 1 I0512 10:30:41.511820 7 log.go:172] (0xc00209b680) (1) Data frame handling I0512 10:30:41.511850 7 log.go:172] (0xc00209b680) (1) Data frame sent I0512 10:30:41.511867 7 log.go:172] (0xc0057964d0) (0xc00209b680) Stream 
removed, broadcasting: 1 I0512 10:30:41.511881 7 log.go:172] (0xc0057964d0) Go away received I0512 10:30:41.512037 7 log.go:172] (0xc0057964d0) (0xc00209b680) Stream removed, broadcasting: 1 I0512 10:30:41.512061 7 log.go:172] (0xc0057964d0) (0xc00213aaa0) Stream removed, broadcasting: 3 I0512 10:30:41.512074 7 log.go:172] (0xc0057964d0) (0xc001c1d400) Stream removed, broadcasting: 5 May 12 10:30:41.512: INFO: Exec stderr: "" May 12 10:30:41.512: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6622 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:30:41.512: INFO: >>> kubeConfig: /root/.kube/config I0512 10:30:41.547348 7 log.go:172] (0xc002d19ad0) (0xc0014ca460) Create stream I0512 10:30:41.547384 7 log.go:172] (0xc002d19ad0) (0xc0014ca460) Stream added, broadcasting: 1 I0512 10:30:41.548984 7 log.go:172] (0xc002d19ad0) Reply frame received for 1 I0512 10:30:41.549025 7 log.go:172] (0xc002d19ad0) (0xc001c1d680) Create stream I0512 10:30:41.549040 7 log.go:172] (0xc002d19ad0) (0xc001c1d680) Stream added, broadcasting: 3 I0512 10:30:41.550229 7 log.go:172] (0xc002d19ad0) Reply frame received for 3 I0512 10:30:41.550288 7 log.go:172] (0xc002d19ad0) (0xc0014ca820) Create stream I0512 10:30:41.550313 7 log.go:172] (0xc002d19ad0) (0xc0014ca820) Stream added, broadcasting: 5 I0512 10:30:41.551098 7 log.go:172] (0xc002d19ad0) Reply frame received for 5 I0512 10:30:41.599119 7 log.go:172] (0xc002d19ad0) Data frame received for 5 I0512 10:30:41.599152 7 log.go:172] (0xc002d19ad0) Data frame received for 3 I0512 10:30:41.599184 7 log.go:172] (0xc001c1d680) (3) Data frame handling I0512 10:30:41.599205 7 log.go:172] (0xc001c1d680) (3) Data frame sent I0512 10:30:41.599221 7 log.go:172] (0xc002d19ad0) Data frame received for 3 I0512 10:30:41.599236 7 log.go:172] (0xc001c1d680) (3) Data frame handling I0512 10:30:41.599256 7 log.go:172] (0xc0014ca820) (5) Data frame handling I0512 10:30:41.600258 7 log.go:172] (0xc002d19ad0) Data frame received for 1 I0512 10:30:41.600283 7 log.go:172] (0xc0014ca460) (1) Data frame handling I0512 10:30:41.600298 7 log.go:172] (0xc0014ca460) (1) Data frame sent I0512 10:30:41.600317 7 log.go:172] (0xc002d19ad0) (0xc0014ca460) Stream removed, broadcasting: 1 I0512 10:30:41.600341 7 log.go:172] (0xc002d19ad0) Go away received I0512 10:30:41.600435 7 log.go:172] (0xc002d19ad0) (0xc0014ca460) Stream removed, broadcasting: 1 I0512 10:30:41.600453 7 log.go:172] (0xc002d19ad0) (0xc001c1d680) Stream removed, broadcasting: 3 I0512 10:30:41.600469 7 log.go:172] (0xc002d19ad0) (0xc0014ca820) Stream removed, broadcasting: 5 May 12 10:30:41.600: INFO: Exec stderr: "" May 12 10:30:41.600: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6622 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:30:41.600: INFO: >>> kubeConfig: /root/.kube/config I0512 10:30:41.626484 7 log.go:172] (0xc001622160) (0xc0014cab40) Create stream I0512 10:30:41.626518 7 log.go:172] (0xc001622160) (0xc0014cab40) Stream added, broadcasting: 1 I0512 10:30:41.627839 7 log.go:172] (0xc001622160) Reply frame received for 1 I0512 10:30:41.627865 7 log.go:172] (0xc001622160) (0xc0021dd400) Create stream I0512 10:30:41.627874 7 log.go:172] (0xc001622160) (0xc0021dd400) Stream added, broadcasting: 3 I0512 10:30:41.628585 7 log.go:172] (0xc001622160) Reply frame received for 3 I0512 10:30:41.628623 7 
log.go:172] (0xc001622160) (0xc0014cabe0) Create stream I0512 10:30:41.628640 7 log.go:172] (0xc001622160) (0xc0014cabe0) Stream added, broadcasting: 5 I0512 10:30:41.629655 7 log.go:172] (0xc001622160) Reply frame received for 5 I0512 10:30:41.692537 7 log.go:172] (0xc001622160) Data frame received for 3 I0512 10:30:41.692577 7 log.go:172] (0xc0021dd400) (3) Data frame handling I0512 10:30:41.692591 7 log.go:172] (0xc0021dd400) (3) Data frame sent I0512 10:30:41.692609 7 log.go:172] (0xc001622160) Data frame received for 3 I0512 10:30:41.692639 7 log.go:172] (0xc0021dd400) (3) Data frame handling I0512 10:30:41.692788 7 log.go:172] (0xc001622160) Data frame received for 5 I0512 10:30:41.692816 7 log.go:172] (0xc0014cabe0) (5) Data frame handling I0512 10:30:41.694681 7 log.go:172] (0xc001622160) Data frame received for 1 I0512 10:30:41.694715 7 log.go:172] (0xc0014cab40) (1) Data frame handling I0512 10:30:41.694748 7 log.go:172] (0xc0014cab40) (1) Data frame sent I0512 10:30:41.694776 7 log.go:172] (0xc001622160) (0xc0014cab40) Stream removed, broadcasting: 1 I0512 10:30:41.694805 7 log.go:172] (0xc001622160) Go away received I0512 10:30:41.694879 7 log.go:172] (0xc001622160) (0xc0014cab40) Stream removed, broadcasting: 1 I0512 10:30:41.694892 7 log.go:172] (0xc001622160) (0xc0021dd400) Stream removed, broadcasting: 3 I0512 10:30:41.694899 7 log.go:172] (0xc001622160) (0xc0014cabe0) Stream removed, broadcasting: 5 May 12 10:30:41.694: INFO: Exec stderr: "" May 12 10:30:41.694: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6622 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:30:41.694: INFO: >>> kubeConfig: /root/.kube/config I0512 10:30:41.719405 7 log.go:172] (0xc001622790) (0xc0014cb0e0) Create stream I0512 10:30:41.719424 7 log.go:172] (0xc001622790) (0xc0014cb0e0) Stream added, broadcasting: 1 I0512 10:30:41.720489 7 log.go:172] (0xc001622790) Reply frame received for 1 I0512 10:30:41.720529 7 log.go:172] (0xc001622790) (0xc0014cb180) Create stream I0512 10:30:41.720540 7 log.go:172] (0xc001622790) (0xc0014cb180) Stream added, broadcasting: 3 I0512 10:30:41.721273 7 log.go:172] (0xc001622790) Reply frame received for 3 I0512 10:30:41.721300 7 log.go:172] (0xc001622790) (0xc0014cb360) Create stream I0512 10:30:41.721309 7 log.go:172] (0xc001622790) (0xc0014cb360) Stream added, broadcasting: 5 I0512 10:30:41.721872 7 log.go:172] (0xc001622790) Reply frame received for 5 I0512 10:30:41.767967 7 log.go:172] (0xc001622790) Data frame received for 3 I0512 10:30:41.768014 7 log.go:172] (0xc0014cb180) (3) Data frame handling I0512 10:30:41.768041 7 log.go:172] (0xc0014cb180) (3) Data frame sent I0512 10:30:41.768053 7 log.go:172] (0xc001622790) Data frame received for 3 I0512 10:30:41.768063 7 log.go:172] (0xc0014cb180) (3) Data frame handling I0512 10:30:41.768104 7 log.go:172] (0xc001622790) Data frame received for 5 I0512 10:30:41.768134 7 log.go:172] (0xc0014cb360) (5) Data frame handling I0512 10:30:41.769328 7 log.go:172] (0xc001622790) Data frame received for 1 I0512 10:30:41.769369 7 log.go:172] (0xc0014cb0e0) (1) Data frame handling I0512 10:30:41.769395 7 log.go:172] (0xc0014cb0e0) (1) Data frame sent I0512 10:30:41.769428 7 log.go:172] (0xc001622790) (0xc0014cb0e0) Stream removed, broadcasting: 1 I0512 10:30:41.769458 7 log.go:172] (0xc001622790) Go away received I0512 10:30:41.769522 7 log.go:172] (0xc001622790) (0xc0014cb0e0) Stream removed, broadcasting: 1 I0512 
10:30:41.769537 7 log.go:172] (0xc001622790) (0xc0014cb180) Stream removed, broadcasting: 3 I0512 10:30:41.769546 7 log.go:172] (0xc001622790) (0xc0014cb360) Stream removed, broadcasting: 5 May 12 10:30:41.769: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 12 10:30:41.769: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6622 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:30:41.769: INFO: >>> kubeConfig: /root/.kube/config I0512 10:30:41.796763 7 log.go:172] (0xc001622dc0) (0xc0014cbae0) Create stream I0512 10:30:41.796795 7 log.go:172] (0xc001622dc0) (0xc0014cbae0) Stream added, broadcasting: 1 I0512 10:30:41.798483 7 log.go:172] (0xc001622dc0) Reply frame received for 1 I0512 10:30:41.798512 7 log.go:172] (0xc001622dc0) (0xc0014cbb80) Create stream I0512 10:30:41.798521 7 log.go:172] (0xc001622dc0) (0xc0014cbb80) Stream added, broadcasting: 3 I0512 10:30:41.799216 7 log.go:172] (0xc001622dc0) Reply frame received for 3 I0512 10:30:41.799243 7 log.go:172] (0xc001622dc0) (0xc00209b720) Create stream I0512 10:30:41.799259 7 log.go:172] (0xc001622dc0) (0xc00209b720) Stream added, broadcasting: 5 I0512 10:30:41.799896 7 log.go:172] (0xc001622dc0) Reply frame received for 5 I0512 10:30:41.853755 7 log.go:172] (0xc001622dc0) Data frame received for 5 I0512 10:30:41.853791 7 log.go:172] (0xc001622dc0) Data frame received for 3 I0512 10:30:41.853823 7 log.go:172] (0xc0014cbb80) (3) Data frame handling I0512 10:30:41.853852 7 log.go:172] (0xc0014cbb80) (3) Data frame sent I0512 10:30:41.853862 7 log.go:172] (0xc001622dc0) Data frame received for 3 I0512 10:30:41.853869 7 log.go:172] (0xc0014cbb80) (3) Data frame handling I0512 10:30:41.853893 7 log.go:172] (0xc00209b720) (5) Data frame handling I0512 10:30:41.854960 7 log.go:172] (0xc001622dc0) Data frame received for 1 I0512 10:30:41.854974 7 log.go:172] (0xc0014cbae0) (1) Data frame handling I0512 10:30:41.854983 7 log.go:172] (0xc0014cbae0) (1) Data frame sent I0512 10:30:41.854993 7 log.go:172] (0xc001622dc0) (0xc0014cbae0) Stream removed, broadcasting: 1 I0512 10:30:41.855001 7 log.go:172] (0xc001622dc0) Go away received I0512 10:30:41.855130 7 log.go:172] (0xc001622dc0) (0xc0014cbae0) Stream removed, broadcasting: 1 I0512 10:30:41.855147 7 log.go:172] (0xc001622dc0) (0xc0014cbb80) Stream removed, broadcasting: 3 I0512 10:30:41.855160 7 log.go:172] (0xc001622dc0) (0xc00209b720) Stream removed, broadcasting: 5 May 12 10:30:41.855: INFO: Exec stderr: "" May 12 10:30:41.855: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6622 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:30:41.855: INFO: >>> kubeConfig: /root/.kube/config I0512 10:30:41.890571 7 log.go:172] (0xc001623130) (0xc0018ba0a0) Create stream I0512 10:30:41.890607 7 log.go:172] (0xc001623130) (0xc0018ba0a0) Stream added, broadcasting: 1 I0512 10:30:41.896914 7 log.go:172] (0xc001623130) Reply frame received for 1 I0512 10:30:41.896959 7 log.go:172] (0xc001623130) (0xc00209b860) Create stream I0512 10:30:41.896973 7 log.go:172] (0xc001623130) (0xc00209b860) Stream added, broadcasting: 3 I0512 10:30:41.898169 7 log.go:172] (0xc001623130) Reply frame received for 3 I0512 10:30:41.898215 7 log.go:172] (0xc001623130) (0xc00213ac80) Create stream I0512 10:30:41.898224 7 log.go:172] 
(0xc001623130) (0xc00213ac80) Stream added, broadcasting: 5 I0512 10:30:41.899153 7 log.go:172] (0xc001623130) Reply frame received for 5 I0512 10:30:41.958323 7 log.go:172] (0xc001623130) Data frame received for 5 I0512 10:30:41.958358 7 log.go:172] (0xc00213ac80) (5) Data frame handling I0512 10:30:41.958380 7 log.go:172] (0xc001623130) Data frame received for 3 I0512 10:30:41.958391 7 log.go:172] (0xc00209b860) (3) Data frame handling I0512 10:30:41.958402 7 log.go:172] (0xc00209b860) (3) Data frame sent I0512 10:30:41.958412 7 log.go:172] (0xc001623130) Data frame received for 3 I0512 10:30:41.958422 7 log.go:172] (0xc00209b860) (3) Data frame handling I0512 10:30:41.959793 7 log.go:172] (0xc001623130) Data frame received for 1 I0512 10:30:41.959839 7 log.go:172] (0xc0018ba0a0) (1) Data frame handling I0512 10:30:41.959876 7 log.go:172] (0xc0018ba0a0) (1) Data frame sent I0512 10:30:41.959917 7 log.go:172] (0xc001623130) (0xc0018ba0a0) Stream removed, broadcasting: 1 I0512 10:30:41.960011 7 log.go:172] (0xc001623130) (0xc0018ba0a0) Stream removed, broadcasting: 1 I0512 10:30:41.960038 7 log.go:172] (0xc001623130) (0xc00209b860) Stream removed, broadcasting: 3 I0512 10:30:41.960139 7 log.go:172] (0xc001623130) Go away received I0512 10:30:41.960254 7 log.go:172] (0xc001623130) (0xc00213ac80) Stream removed, broadcasting: 5 May 12 10:30:41.960: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 12 10:30:41.960: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6622 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:30:41.960: INFO: >>> kubeConfig: /root/.kube/config I0512 10:30:41.988215 7 log.go:172] (0xc00092c630) (0xc00213b040) Create stream I0512 10:30:41.988245 7 log.go:172] (0xc00092c630) (0xc00213b040) Stream added, broadcasting: 1 I0512 10:30:41.994253 7 log.go:172] (0xc00092c630) Reply frame received for 1 I0512 10:30:41.994322 7 log.go:172] (0xc00092c630) (0xc00213b0e0) Create stream I0512 10:30:41.994346 7 log.go:172] (0xc00092c630) (0xc00213b0e0) Stream added, broadcasting: 3 I0512 10:30:41.996413 7 log.go:172] (0xc00092c630) Reply frame received for 3 I0512 10:30:41.996459 7 log.go:172] (0xc00092c630) (0xc00209ba40) Create stream I0512 10:30:41.996475 7 log.go:172] (0xc00092c630) (0xc00209ba40) Stream added, broadcasting: 5 I0512 10:30:41.997436 7 log.go:172] (0xc00092c630) Reply frame received for 5 I0512 10:30:42.057915 7 log.go:172] (0xc00092c630) Data frame received for 5 I0512 10:30:42.057948 7 log.go:172] (0xc00209ba40) (5) Data frame handling I0512 10:30:42.057968 7 log.go:172] (0xc00092c630) Data frame received for 3 I0512 10:30:42.057980 7 log.go:172] (0xc00213b0e0) (3) Data frame handling I0512 10:30:42.057994 7 log.go:172] (0xc00213b0e0) (3) Data frame sent I0512 10:30:42.058014 7 log.go:172] (0xc00092c630) Data frame received for 3 I0512 10:30:42.058027 7 log.go:172] (0xc00213b0e0) (3) Data frame handling I0512 10:30:42.058990 7 log.go:172] (0xc00092c630) Data frame received for 1 I0512 10:30:42.059016 7 log.go:172] (0xc00213b040) (1) Data frame handling I0512 10:30:42.059035 7 log.go:172] (0xc00213b040) (1) Data frame sent I0512 10:30:42.059065 7 log.go:172] (0xc00092c630) (0xc00213b040) Stream removed, broadcasting: 1 I0512 10:30:42.059100 7 log.go:172] (0xc00092c630) Go away received I0512 10:30:42.059194 7 log.go:172] (0xc00092c630) (0xc00213b040) Stream removed, 
broadcasting: 1 I0512 10:30:42.059221 7 log.go:172] (0xc00092c630) (0xc00213b0e0) Stream removed, broadcasting: 3 I0512 10:30:42.059243 7 log.go:172] (0xc00092c630) (0xc00209ba40) Stream removed, broadcasting: 5 May 12 10:30:42.059: INFO: Exec stderr: "" May 12 10:30:42.059: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6622 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:30:42.059: INFO: >>> kubeConfig: /root/.kube/config I0512 10:30:42.085464 7 log.go:172] (0xc005796dc0) (0xc00209bcc0) Create stream I0512 10:30:42.085493 7 log.go:172] (0xc005796dc0) (0xc00209bcc0) Stream added, broadcasting: 1 I0512 10:30:42.086941 7 log.go:172] (0xc005796dc0) Reply frame received for 1 I0512 10:30:42.086973 7 log.go:172] (0xc005796dc0) (0xc0018ba280) Create stream I0512 10:30:42.086992 7 log.go:172] (0xc005796dc0) (0xc0018ba280) Stream added, broadcasting: 3 I0512 10:30:42.087677 7 log.go:172] (0xc005796dc0) Reply frame received for 3 I0512 10:30:42.087703 7 log.go:172] (0xc005796dc0) (0xc001c1d720) Create stream I0512 10:30:42.087715 7 log.go:172] (0xc005796dc0) (0xc001c1d720) Stream added, broadcasting: 5 I0512 10:30:42.088553 7 log.go:172] (0xc005796dc0) Reply frame received for 5 I0512 10:30:42.155065 7 log.go:172] (0xc005796dc0) Data frame received for 3 I0512 10:30:42.155085 7 log.go:172] (0xc0018ba280) (3) Data frame handling I0512 10:30:42.155099 7 log.go:172] (0xc0018ba280) (3) Data frame sent I0512 10:30:42.155418 7 log.go:172] (0xc005796dc0) Data frame received for 5 I0512 10:30:42.155439 7 log.go:172] (0xc001c1d720) (5) Data frame handling I0512 10:30:42.155476 7 log.go:172] (0xc005796dc0) Data frame received for 3 I0512 10:30:42.155497 7 log.go:172] (0xc0018ba280) (3) Data frame handling I0512 10:30:42.156399 7 log.go:172] (0xc005796dc0) Data frame received for 1 I0512 10:30:42.156423 7 log.go:172] (0xc00209bcc0) (1) Data frame handling I0512 10:30:42.156441 7 log.go:172] (0xc00209bcc0) (1) Data frame sent I0512 10:30:42.156454 7 log.go:172] (0xc005796dc0) (0xc00209bcc0) Stream removed, broadcasting: 1 I0512 10:30:42.156525 7 log.go:172] (0xc005796dc0) Go away received I0512 10:30:42.156560 7 log.go:172] (0xc005796dc0) (0xc00209bcc0) Stream removed, broadcasting: 1 I0512 10:30:42.156586 7 log.go:172] (0xc005796dc0) (0xc0018ba280) Stream removed, broadcasting: 3 I0512 10:30:42.156597 7 log.go:172] (0xc005796dc0) (0xc001c1d720) Stream removed, broadcasting: 5 May 12 10:30:42.156: INFO: Exec stderr: "" May 12 10:30:42.156: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6622 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:30:42.156: INFO: >>> kubeConfig: /root/.kube/config I0512 10:30:42.181380 7 log.go:172] (0xc005797340) (0xc00209bea0) Create stream I0512 10:30:42.181410 7 log.go:172] (0xc005797340) (0xc00209bea0) Stream added, broadcasting: 1 I0512 10:30:42.182757 7 log.go:172] (0xc005797340) Reply frame received for 1 I0512 10:30:42.182790 7 log.go:172] (0xc005797340) (0xc0021dd4a0) Create stream I0512 10:30:42.182799 7 log.go:172] (0xc005797340) (0xc0021dd4a0) Stream added, broadcasting: 3 I0512 10:30:42.183603 7 log.go:172] (0xc005797340) Reply frame received for 3 I0512 10:30:42.183628 7 log.go:172] (0xc005797340) (0xc001c1d7c0) Create stream I0512 10:30:42.183637 7 log.go:172] (0xc005797340) (0xc001c1d7c0) Stream added, broadcasting: 5 I0512 
10:30:42.184419 7 log.go:172] (0xc005797340) Reply frame received for 5 I0512 10:30:42.307407 7 log.go:172] (0xc005797340) Data frame received for 3 I0512 10:30:42.307436 7 log.go:172] (0xc0021dd4a0) (3) Data frame handling I0512 10:30:42.307455 7 log.go:172] (0xc0021dd4a0) (3) Data frame sent I0512 10:30:42.307531 7 log.go:172] (0xc005797340) Data frame received for 5 I0512 10:30:42.307550 7 log.go:172] (0xc001c1d7c0) (5) Data frame handling I0512 10:30:42.307570 7 log.go:172] (0xc005797340) Data frame received for 3 I0512 10:30:42.307590 7 log.go:172] (0xc0021dd4a0) (3) Data frame handling I0512 10:30:42.308660 7 log.go:172] (0xc005797340) Data frame received for 1 I0512 10:30:42.308675 7 log.go:172] (0xc00209bea0) (1) Data frame handling I0512 10:30:42.308686 7 log.go:172] (0xc00209bea0) (1) Data frame sent I0512 10:30:42.308695 7 log.go:172] (0xc005797340) (0xc00209bea0) Stream removed, broadcasting: 1 I0512 10:30:42.308744 7 log.go:172] (0xc005797340) (0xc00209bea0) Stream removed, broadcasting: 1 I0512 10:30:42.308753 7 log.go:172] (0xc005797340) (0xc0021dd4a0) Stream removed, broadcasting: 3 I0512 10:30:42.308761 7 log.go:172] (0xc005797340) (0xc001c1d7c0) Stream removed, broadcasting: 5 May 12 10:30:42.308: INFO: Exec stderr: "" May 12 10:30:42.308: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6622 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:30:42.308: INFO: >>> kubeConfig: /root/.kube/config I0512 10:30:42.309885 7 log.go:172] (0xc005797340) Go away received I0512 10:30:42.331408 7 log.go:172] (0xc005797760) (0xc00192a140) Create stream I0512 10:30:42.331432 7 log.go:172] (0xc005797760) (0xc00192a140) Stream added, broadcasting: 1 I0512 10:30:42.333642 7 log.go:172] (0xc005797760) Reply frame received for 1 I0512 10:30:42.333675 7 log.go:172] (0xc005797760) (0xc00213b180) Create stream I0512 10:30:42.333689 7 log.go:172] (0xc005797760) (0xc00213b180) Stream added, broadcasting: 3 I0512 10:30:42.334583 7 log.go:172] (0xc005797760) Reply frame received for 3 I0512 10:30:42.334620 7 log.go:172] (0xc005797760) (0xc001c1d860) Create stream I0512 10:30:42.334633 7 log.go:172] (0xc005797760) (0xc001c1d860) Stream added, broadcasting: 5 I0512 10:30:42.335583 7 log.go:172] (0xc005797760) Reply frame received for 5 I0512 10:30:42.389644 7 log.go:172] (0xc005797760) Data frame received for 5 I0512 10:30:42.389686 7 log.go:172] (0xc001c1d860) (5) Data frame handling I0512 10:30:42.389715 7 log.go:172] (0xc005797760) Data frame received for 3 I0512 10:30:42.389743 7 log.go:172] (0xc00213b180) (3) Data frame handling I0512 10:30:42.389767 7 log.go:172] (0xc00213b180) (3) Data frame sent I0512 10:30:42.389783 7 log.go:172] (0xc005797760) Data frame received for 3 I0512 10:30:42.389793 7 log.go:172] (0xc00213b180) (3) Data frame handling I0512 10:30:42.391418 7 log.go:172] (0xc005797760) Data frame received for 1 I0512 10:30:42.391446 7 log.go:172] (0xc00192a140) (1) Data frame handling I0512 10:30:42.391461 7 log.go:172] (0xc00192a140) (1) Data frame sent I0512 10:30:42.391487 7 log.go:172] (0xc005797760) (0xc00192a140) Stream removed, broadcasting: 1 I0512 10:30:42.391506 7 log.go:172] (0xc005797760) Go away received I0512 10:30:42.391611 7 log.go:172] (0xc005797760) (0xc00192a140) Stream removed, broadcasting: 1 I0512 10:30:42.391640 7 log.go:172] (0xc005797760) (0xc00213b180) Stream removed, broadcasting: 3 I0512 10:30:42.391656 7 log.go:172] (0xc005797760) 
(0xc001c1d860) Stream removed, broadcasting: 5
May 12 10:30:42.391: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 12 10:30:42.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-6622" for this suite.
• [SLOW TEST:18.124 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":49,"skipped":766,"failed":0}
SSSSS
------------------------------
[sig-apps] ReplicationController
should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 12 10:30:43.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 12 10:30:43.377: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
May 12 10:30:45.537: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 12 10:30:46.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6899" for this suite.
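The sequence above can be reproduced outside the suite with a pod-count quota and an oversized replication controller. A minimal sketch, assuming an illustrative namespace, label, and image rather than the suite's generated names:

# Provoke and clear an RC failure condition with a pod-count quota.
kubectl create namespace quota-demo
kubectl apply -n quota-demo -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"               # only two pods may run in this namespace
EOF
# Give the quota controller a moment to populate usage before the RC lands.
sleep 5
kubectl apply -n quota-demo -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3               # one more than the quota allows
  selector:
    app: condition-test
  template:
    metadata:
      labels:
        app: condition-test
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.2   # any small image works
EOF
# The rejected third pod should surface as a ReplicaFailure entry here:
kubectl -n quota-demo get rc condition-test -o jsonpath='{.status.conditions}'
# Scaling down to the quota should clear the condition again:
kubectl -n quota-demo scale rc condition-test --replicas=2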
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":288,"completed":50,"skipped":771,"failed":0}
S
------------------------------
[sig-storage] Projected secret
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 12 10:30:46.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-73eab1a6-b9e9-419f-b19b-0c12b0798a87
STEP: Creating a pod to test consume secrets
May 12 10:30:47.134: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1ce62072-3c59-4e5a-94a5-092468558255" in namespace "projected-8928" to be "Succeeded or Failed"
May 12 10:30:47.463: INFO: Pod "pod-projected-secrets-1ce62072-3c59-4e5a-94a5-092468558255": Phase="Pending", Reason="", readiness=false. Elapsed: 328.691382ms
May 12 10:30:49.655: INFO: Pod "pod-projected-secrets-1ce62072-3c59-4e5a-94a5-092468558255": Phase="Pending", Reason="", readiness=false. Elapsed: 2.520785729s
May 12 10:30:52.151: INFO: Pod "pod-projected-secrets-1ce62072-3c59-4e5a-94a5-092468558255": Phase="Pending", Reason="", readiness=false. Elapsed: 5.016362622s
May 12 10:30:54.199: INFO: Pod "pod-projected-secrets-1ce62072-3c59-4e5a-94a5-092468558255": Phase="Pending", Reason="", readiness=false. Elapsed: 7.06499087s
May 12 10:30:56.325: INFO: Pod "pod-projected-secrets-1ce62072-3c59-4e5a-94a5-092468558255": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.190505593s
STEP: Saw pod success
May 12 10:30:56.325: INFO: Pod "pod-projected-secrets-1ce62072-3c59-4e5a-94a5-092468558255" satisfied condition "Succeeded or Failed"
May 12 10:30:56.328: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-1ce62072-3c59-4e5a-94a5-092468558255 container projected-secret-volume-test:
STEP: delete the pod
May 12 10:30:56.971: INFO: Waiting for pod pod-projected-secrets-1ce62072-3c59-4e5a-94a5-092468558255 to disappear
May 12 10:30:57.229: INFO: Pod pod-projected-secrets-1ce62072-3c59-4e5a-94a5-092468558255 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 12 10:30:57.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8928" for this suite.
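The defaultMode behaviour this test consumes is ordinary API surface: a projected volume applies one mode to every file it projects. A minimal sketch, with illustrative names and a stock busybox image standing in for the suite's test container (projected files are symlinks, hence stat -L):

# A projected secret volume whose files get mode 0400.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-demo        # illustrative name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-pod-demo    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.31
    command: ["sh", "-c", "stat -L -c '%a' /etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0400              # applied to every projected file
      sources:
      - secret:
          name: projected-secret-demo
EOF
# Once the pod reaches Succeeded, the log should read "400":
kubectl logs projected-secret-pod-demo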
• [SLOW TEST:10.679 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":51,"skipped":772,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 12 10:30:57.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-1583
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 12 10:30:58.121: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 12 10:31:00.014: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 12 10:31:02.151: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 12 10:31:04.085: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 12 10:31:06.018: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 12 10:31:08.421: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 12 10:31:10.337: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 12 10:31:12.195: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 12 10:31:14.017: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 12 10:31:16.145: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 12 10:31:18.017: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 12 10:31:20.017: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 12 10:31:20.022: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 12 10:31:22.169: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 12 10:31:24.035: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 12 10:31:32.608: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.196:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1583 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 12 10:31:32.608: INFO: >>> kubeConfig: /root/.kube/config
I0512 10:31:32.637033 7 log.go:172] (0xc00092d1e0) (0xc001548280) Create stream
I0512 10:31:32.637062 7 log.go:172]
(0xc00092d1e0) (0xc001548280) Stream added, broadcasting: 1 I0512 10:31:32.638563 7 log.go:172] (0xc00092d1e0) Reply frame received for 1 I0512 10:31:32.638601 7 log.go:172] (0xc00092d1e0) (0xc001d42b40) Create stream I0512 10:31:32.638620 7 log.go:172] (0xc00092d1e0) (0xc001d42b40) Stream added, broadcasting: 3 I0512 10:31:32.639527 7 log.go:172] (0xc00092d1e0) Reply frame received for 3 I0512 10:31:32.639543 7 log.go:172] (0xc00092d1e0) (0xc001548320) Create stream I0512 10:31:32.639550 7 log.go:172] (0xc00092d1e0) (0xc001548320) Stream added, broadcasting: 5 I0512 10:31:32.640450 7 log.go:172] (0xc00092d1e0) Reply frame received for 5 I0512 10:31:32.710786 7 log.go:172] (0xc00092d1e0) Data frame received for 3 I0512 10:31:32.710810 7 log.go:172] (0xc001d42b40) (3) Data frame handling I0512 10:31:32.710825 7 log.go:172] (0xc001d42b40) (3) Data frame sent I0512 10:31:32.711136 7 log.go:172] (0xc00092d1e0) Data frame received for 3 I0512 10:31:32.711158 7 log.go:172] (0xc001d42b40) (3) Data frame handling I0512 10:31:32.711175 7 log.go:172] (0xc00092d1e0) Data frame received for 5 I0512 10:31:32.711184 7 log.go:172] (0xc001548320) (5) Data frame handling I0512 10:31:32.712865 7 log.go:172] (0xc00092d1e0) Data frame received for 1 I0512 10:31:32.712883 7 log.go:172] (0xc001548280) (1) Data frame handling I0512 10:31:32.712892 7 log.go:172] (0xc001548280) (1) Data frame sent I0512 10:31:32.712904 7 log.go:172] (0xc00092d1e0) (0xc001548280) Stream removed, broadcasting: 1 I0512 10:31:32.712915 7 log.go:172] (0xc00092d1e0) Go away received I0512 10:31:32.713011 7 log.go:172] (0xc00092d1e0) (0xc001548280) Stream removed, broadcasting: 1 I0512 10:31:32.713041 7 log.go:172] (0xc00092d1e0) (0xc001d42b40) Stream removed, broadcasting: 3 I0512 10:31:32.713059 7 log.go:172] (0xc00092d1e0) (0xc001548320) Stream removed, broadcasting: 5 May 12 10:31:32.713: INFO: Found all expected endpoints: [netserver-0] May 12 10:31:32.804: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.34:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1583 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:31:32.804: INFO: >>> kubeConfig: /root/.kube/config I0512 10:31:32.832074 7 log.go:172] (0xc002878000) (0xc001036320) Create stream I0512 10:31:32.832104 7 log.go:172] (0xc002878000) (0xc001036320) Stream added, broadcasting: 1 I0512 10:31:32.833905 7 log.go:172] (0xc002878000) Reply frame received for 1 I0512 10:31:32.833939 7 log.go:172] (0xc002878000) (0xc0018443c0) Create stream I0512 10:31:32.833952 7 log.go:172] (0xc002878000) (0xc0018443c0) Stream added, broadcasting: 3 I0512 10:31:32.834867 7 log.go:172] (0xc002878000) Reply frame received for 3 I0512 10:31:32.834905 7 log.go:172] (0xc002878000) (0xc001d42be0) Create stream I0512 10:31:32.834918 7 log.go:172] (0xc002878000) (0xc001d42be0) Stream added, broadcasting: 5 I0512 10:31:32.836175 7 log.go:172] (0xc002878000) Reply frame received for 5 I0512 10:31:32.902908 7 log.go:172] (0xc002878000) Data frame received for 3 I0512 10:31:32.902947 7 log.go:172] (0xc0018443c0) (3) Data frame handling I0512 10:31:32.902965 7 log.go:172] (0xc0018443c0) (3) Data frame sent I0512 10:31:32.902979 7 log.go:172] (0xc002878000) Data frame received for 3 I0512 10:31:32.902993 7 log.go:172] (0xc0018443c0) (3) Data frame handling I0512 10:31:32.903016 7 log.go:172] (0xc002878000) Data frame received for 5 I0512 10:31:32.903030 7 
log.go:172] (0xc001d42be0) (5) Data frame handling
I0512 10:31:32.904092 7 log.go:172] (0xc002878000) Data frame received for 1
I0512 10:31:32.904107 7 log.go:172] (0xc001036320) (1) Data frame handling
I0512 10:31:32.904116 7 log.go:172] (0xc001036320) (1) Data frame sent
I0512 10:31:32.904127 7 log.go:172] (0xc002878000) (0xc001036320) Stream removed, broadcasting: 1
I0512 10:31:32.904139 7 log.go:172] (0xc002878000) Go away received
I0512 10:31:32.904209 7 log.go:172] (0xc002878000) (0xc001036320) Stream removed, broadcasting: 1
I0512 10:31:32.904234 7 log.go:172] (0xc002878000) (0xc0018443c0) Stream removed, broadcasting: 3
I0512 10:31:32.904246 7 log.go:172] (0xc002878000) (0xc001d42be0) Stream removed, broadcasting: 5
May 12 10:31:32.904: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 12 10:31:32.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1583" for this suite.
• [SLOW TEST:35.671 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":52,"skipped":809,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 12 10:31:32.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 12 10:31:33.129: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 12 10:31:43.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9863" for this suite.
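Listing custom resource definitions needs nothing framework-specific; registering a definition and listing it back suffices. A sketch with a made-up group and kind (all names here are illustrative):

# Register a throwaway CRD, then list it through the ordinary machinery.
kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.demo.example.com     # must be <plural>.<group>
spec:
  group: demo.example.com            # illustrative group
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
    listKind: WidgetList
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF
# Both a full list and a get-by-name go through the same list endpoint:
kubectl get customresourcedefinitions
kubectl get crd widgets.demo.example.com -o name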
• [SLOW TEST:10.182 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
Simple CustomResourceDefinition
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
listing custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":288,"completed":53,"skipped":819,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 12 10:31:43.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 12 10:31:43.188: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4b5c4e62-30b5-4252-9216-47566697afdb" in namespace "downward-api-8975" to be "Succeeded or Failed"
May 12 10:31:43.208: INFO: Pod "downwardapi-volume-4b5c4e62-30b5-4252-9216-47566697afdb": Phase="Pending", Reason="", readiness=false. Elapsed: 19.248485ms
May 12 10:31:45.223: INFO: Pod "downwardapi-volume-4b5c4e62-30b5-4252-9216-47566697afdb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034617598s
May 12 10:31:47.226: INFO: Pod "downwardapi-volume-4b5c4e62-30b5-4252-9216-47566697afdb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037582635s
May 12 10:31:49.235: INFO: Pod "downwardapi-volume-4b5c4e62-30b5-4252-9216-47566697afdb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.046248877s
STEP: Saw pod success
May 12 10:31:49.235: INFO: Pod "downwardapi-volume-4b5c4e62-30b5-4252-9216-47566697afdb" satisfied condition "Succeeded or Failed"
May 12 10:31:49.237: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-4b5c4e62-30b5-4252-9216-47566697afdb container client-container:
STEP: delete the pod
May 12 10:31:49.466: INFO: Waiting for pod downwardapi-volume-4b5c4e62-30b5-4252-9216-47566697afdb to disappear
May 12 10:31:49.506: INFO: Pod downwardapi-volume-4b5c4e62-30b5-4252-9216-47566697afdb no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 12 10:31:49.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8975" for this suite.
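The per-item mode this test checks comes from the downwardAPI volume's items[].mode field, which overrides the volume default for that one file. A sketch with illustrative names (the projected file is a symlink, so the mode is read with stat -L):

# A downwardAPI volume item carrying an explicit 0400 mode.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo      # illustrative name
  labels:
    zone: demo                       # surfaced into the volume below
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.31
    command: ["sh", "-c", "stat -L -c '%a' /etc/podinfo/labels"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
        mode: 0400                   # per-item mode, overriding the default
EOF
# Expected log once the pod Succeeds: "400"
kubectl logs downwardapi-volume-demo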
• [SLOW TEST:6.417 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":54,"skipped":883,"failed":0}
SSS
------------------------------
[sig-network] Services
should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 12 10:31:49.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-5956
STEP: creating service affinity-clusterip-transition in namespace services-5956
STEP: creating replication controller affinity-clusterip-transition in namespace services-5956
I0512 10:31:50.234510 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-5956, replica count: 3
I0512 10:31:53.284908 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0512 10:31:56.285267 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0512 10:31:59.285539 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 12 10:31:59.448: INFO: Creating new exec pod
May 12 10:32:06.654: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5956 execpod-affinityb4zvg -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80'
May 12 10:32:06.890: INFO: stderr: "I0512 10:32:06.805353 650 log.go:172] (0xc000b0f290) (0xc000b5a5a0) Create stream\nI0512 10:32:06.805399 650 log.go:172] (0xc000b0f290) (0xc000b5a5a0) Stream added, broadcasting: 1\nI0512 10:32:06.808774 650 log.go:172] (0xc000b0f290) Reply frame received for 1\nI0512 10:32:06.808827 650 log.go:172] (0xc000b0f290) (0xc0006ec960) Create stream\nI0512 10:32:06.808844 650 log.go:172] (0xc000b0f290) (0xc0006ec960) Stream added, broadcasting: 3\nI0512 10:32:06.809651 650 log.go:172] (0xc000b0f290) Reply frame received for 3\nI0512 10:32:06.809670 650 log.go:172] (0xc000b0f290) (0xc0006debe0) Create stream\nI0512 10:32:06.809676 650 log.go:172] (0xc000b0f290) (0xc0006debe0) Stream added, broadcasting: 5\nI0512 10:32:06.810290 650 log.go:172] (0xc000b0f290) Reply frame received
for 5\nI0512 10:32:06.883036 650 log.go:172] (0xc000b0f290) Data frame received for 5\nI0512 10:32:06.883073 650 log.go:172] (0xc0006debe0) (5) Data frame handling\nI0512 10:32:06.883107 650 log.go:172] (0xc0006debe0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nI0512 10:32:06.883545 650 log.go:172] (0xc000b0f290) Data frame received for 5\nI0512 10:32:06.883565 650 log.go:172] (0xc0006debe0) (5) Data frame handling\nI0512 10:32:06.883580 650 log.go:172] (0xc0006debe0) (5) Data frame sent\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0512 10:32:06.883811 650 log.go:172] (0xc000b0f290) Data frame received for 5\nI0512 10:32:06.883836 650 log.go:172] (0xc0006debe0) (5) Data frame handling\nI0512 10:32:06.883984 650 log.go:172] (0xc000b0f290) Data frame received for 3\nI0512 10:32:06.884001 650 log.go:172] (0xc0006ec960) (3) Data frame handling\nI0512 10:32:06.885815 650 log.go:172] (0xc000b0f290) Data frame received for 1\nI0512 10:32:06.885832 650 log.go:172] (0xc000b5a5a0) (1) Data frame handling\nI0512 10:32:06.885841 650 log.go:172] (0xc000b5a5a0) (1) Data frame sent\nI0512 10:32:06.885858 650 log.go:172] (0xc000b0f290) (0xc000b5a5a0) Stream removed, broadcasting: 1\nI0512 10:32:06.885991 650 log.go:172] (0xc000b0f290) Go away received\nI0512 10:32:06.886103 650 log.go:172] (0xc000b0f290) (0xc000b5a5a0) Stream removed, broadcasting: 1\nI0512 10:32:06.886118 650 log.go:172] (0xc000b0f290) (0xc0006ec960) Stream removed, broadcasting: 3\nI0512 10:32:06.886126 650 log.go:172] (0xc000b0f290) (0xc0006debe0) Stream removed, broadcasting: 5\n" May 12 10:32:06.890: INFO: stdout: "" May 12 10:32:06.891: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5956 execpod-affinityb4zvg -- /bin/sh -x -c nc -zv -t -w 2 10.98.190.174 80' May 12 10:32:07.077: INFO: stderr: "I0512 10:32:07.019665 669 log.go:172] (0xc000c24c60) (0xc000544140) Create stream\nI0512 10:32:07.019728 669 log.go:172] (0xc000c24c60) (0xc000544140) Stream added, broadcasting: 1\nI0512 10:32:07.021809 669 log.go:172] (0xc000c24c60) Reply frame received for 1\nI0512 10:32:07.021830 669 log.go:172] (0xc000c24c60) (0xc000aded20) Create stream\nI0512 10:32:07.021835 669 log.go:172] (0xc000c24c60) (0xc000aded20) Stream added, broadcasting: 3\nI0512 10:32:07.022499 669 log.go:172] (0xc000c24c60) Reply frame received for 3\nI0512 10:32:07.022519 669 log.go:172] (0xc000c24c60) (0xc0005450e0) Create stream\nI0512 10:32:07.022527 669 log.go:172] (0xc000c24c60) (0xc0005450e0) Stream added, broadcasting: 5\nI0512 10:32:07.023173 669 log.go:172] (0xc000c24c60) Reply frame received for 5\nI0512 10:32:07.072117 669 log.go:172] (0xc000c24c60) Data frame received for 5\nI0512 10:32:07.072154 669 log.go:172] (0xc0005450e0) (5) Data frame handling\nI0512 10:32:07.072170 669 log.go:172] (0xc0005450e0) (5) Data frame sent\nI0512 10:32:07.072182 669 log.go:172] (0xc000c24c60) Data frame received for 5\nI0512 10:32:07.072191 669 log.go:172] (0xc0005450e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.98.190.174 80\nConnection to 10.98.190.174 80 port [tcp/http] succeeded!\nI0512 10:32:07.072212 669 log.go:172] (0xc000c24c60) Data frame received for 3\nI0512 10:32:07.072221 669 log.go:172] (0xc000aded20) (3) Data frame handling\nI0512 10:32:07.073241 669 log.go:172] (0xc000c24c60) Data frame received for 1\nI0512 10:32:07.073292 669 log.go:172] (0xc000544140) (1) Data frame handling\nI0512 10:32:07.073316 669 log.go:172] 
(0xc000544140) (1) Data frame sent\nI0512 10:32:07.073327 669 log.go:172] (0xc000c24c60) (0xc000544140) Stream removed, broadcasting: 1\nI0512 10:32:07.073339 669 log.go:172] (0xc000c24c60) Go away received\nI0512 10:32:07.073678 669 log.go:172] (0xc000c24c60) (0xc000544140) Stream removed, broadcasting: 1\nI0512 10:32:07.073701 669 log.go:172] (0xc000c24c60) (0xc000aded20) Stream removed, broadcasting: 3\nI0512 10:32:07.073714 669 log.go:172] (0xc000c24c60) (0xc0005450e0) Stream removed, broadcasting: 5\n" May 12 10:32:07.077: INFO: stdout: "" May 12 10:32:07.084: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5956 execpod-affinityb4zvg -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.98.190.174:80/ ; done' May 12 10:32:07.372: INFO: stderr: "I0512 10:32:07.211467 689 log.go:172] (0xc000b8ea50) (0xc0008b2320) Create stream\nI0512 10:32:07.211521 689 log.go:172] (0xc000b8ea50) (0xc0008b2320) Stream added, broadcasting: 1\nI0512 10:32:07.213369 689 log.go:172] (0xc000b8ea50) Reply frame received for 1\nI0512 10:32:07.213406 689 log.go:172] (0xc000b8ea50) (0xc0008b28c0) Create stream\nI0512 10:32:07.213415 689 log.go:172] (0xc000b8ea50) (0xc0008b28c0) Stream added, broadcasting: 3\nI0512 10:32:07.214162 689 log.go:172] (0xc000b8ea50) Reply frame received for 3\nI0512 10:32:07.214193 689 log.go:172] (0xc000b8ea50) (0xc0008a60a0) Create stream\nI0512 10:32:07.214200 689 log.go:172] (0xc000b8ea50) (0xc0008a60a0) Stream added, broadcasting: 5\nI0512 10:32:07.214862 689 log.go:172] (0xc000b8ea50) Reply frame received for 5\nI0512 10:32:07.272086 689 log.go:172] (0xc000b8ea50) Data frame received for 3\nI0512 10:32:07.272127 689 log.go:172] (0xc0008b28c0) (3) Data frame handling\nI0512 10:32:07.272139 689 log.go:172] (0xc0008b28c0) (3) Data frame sent\nI0512 10:32:07.272159 689 log.go:172] (0xc000b8ea50) Data frame received for 5\nI0512 10:32:07.272168 689 log.go:172] (0xc0008a60a0) (5) Data frame handling\nI0512 10:32:07.272177 689 log.go:172] (0xc0008a60a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.190.174:80/\nI0512 10:32:07.276040 689 log.go:172] (0xc000b8ea50) Data frame received for 3\nI0512 10:32:07.276070 689 log.go:172] (0xc0008b28c0) (3) Data frame handling\nI0512 10:32:07.276095 689 log.go:172] (0xc0008b28c0) (3) Data frame sent\nI0512 10:32:07.276452 689 log.go:172] (0xc000b8ea50) Data frame received for 5\nI0512 10:32:07.276467 689 log.go:172] (0xc0008a60a0) (5) Data frame handling\nI0512 10:32:07.276475 689 log.go:172] (0xc0008a60a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.190.174:80/\nI0512 10:32:07.276485 689 log.go:172] (0xc000b8ea50) Data frame received for 3\nI0512 10:32:07.276491 689 log.go:172] (0xc0008b28c0) (3) Data frame handling\nI0512 10:32:07.276497 689 log.go:172] (0xc0008b28c0) (3) Data frame sent\nI0512 10:32:07.282258 689 log.go:172] (0xc000b8ea50) Data frame received for 3\nI0512 10:32:07.282270 689 log.go:172] (0xc0008b28c0) (3) Data frame handling\nI0512 10:32:07.282278 689 log.go:172] (0xc0008b28c0) (3) Data frame sent\nI0512 10:32:07.282875 689 log.go:172] (0xc000b8ea50) Data frame received for 5\nI0512 10:32:07.282888 689 log.go:172] (0xc0008a60a0) (5) Data frame handling\nI0512 10:32:07.282902 689 log.go:172] (0xc0008a60a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.190.174:80/\nI0512 10:32:07.282909 689 log.go:172] 
(0xc000b8ea50) Data frame received for 3\nI0512 10:32:07.282918 689 log.go:172] (0xc0008b28c0) (3) Data frame handling\nI0512 10:32:07.282923 689 log.go:172] (0xc0008b28c0) (3) Data frame sent\nI0512 10:32:07.289925 689 log.go:172] (0xc000b8ea50) Data frame received for 3\nI0512 10:32:07.289937 689 log.go:172] (0xc0008b28c0) (3) Data frame handling\nI0512 10:32:07.289944 689 log.go:172] (0xc0008b28c0) (3) Data frame sent\nI0512 10:32:07.290671 689 log.go:172] (0xc000b8ea50) Data frame received for 5\nI0512 10:32:07.290706 689 log.go:172] (0xc0008a60a0) (5) Data frame handling\nI0512 10:32:07.290733 689 log.go:172] (0xc0008a60a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.190.174:80/\nI0512 10:32:07.290774 689 log.go:172] (0xc000b8ea50) Data frame received for 3\nI0512 10:32:07.290792 689 log.go:172] (0xc0008b28c0) (3) Data frame handling\nI0512 10:32:07.290799 689 log.go:172] (0xc0008b28c0) (3) Data frame sent\nI0512 10:32:07.295254 689 log.go:172] (0xc000b8ea50) Data frame received for 3\nI0512 10:32:07.295283 689 log.go:172] (0xc0008b28c0) (3) Data frame handling\nI0512 10:32:07.295317 689 log.go:172] (0xc0008b28c0) (3) Data frame sent\nI0512 10:32:07.295583 689 log.go:172] (0xc000b8ea50) Data frame received for 3\nI0512 10:32:07.295594 689 log.go:172] (0xc0008b28c0) (3) Data frame handling\nI0512 10:32:07.295602 689 log.go:172] (0xc0008b28c0) (3) Data frame sent\nI0512 10:32:07.295635 689 log.go:172] (0xc000b8ea50) Data frame received for 5\nI0512 10:32:07.295658 689 log.go:172] (0xc0008a60a0) (5) Data frame handling\nI0512 10:32:07.295701 689 log.go:172] (0xc0008a60a0) (5) Data frame sent\nI0512 10:32:07.295727 689 log.go:172] (0xc000b8ea50) Data frame received for 5\nI0512 10:32:07.295740 689 log.go:172] (0xc0008a60a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.190.174:80/\nI0512 10:32:07.295784 689 log.go:172] (0xc0008a60a0) (5) Data frame sent\nI0512 10:32:07.302119 689 log.go:172] (0xc000b8ea50) Data frame received for 3\nI0512 10:32:07.302133 689 log.go:172] (0xc0008b28c0) (3) Data frame handling\nI0512 10:32:07.302139 689 log.go:172] (0xc0008b28c0) (3) Data frame sent\nI0512 10:32:07.302170 689 log.go:172] (0xc000b8ea50) Data frame received for 3\nI0512 10:32:07.302197 689 log.go:172] (0xc0008b28c0) (3) Data frame handling\nI0512 10:32:07.302215 689 log.go:172] (0xc0008b28c0) (3) Data frame sent\nI0512 10:32:07.302234 689 log.go:172] (0xc000b8ea50) Data frame received for 5\nI0512 10:32:07.302246 689 log.go:172] (0xc0008a60a0) (5) Data frame handling\nI0512 10:32:07.302296 689 log.go:172] (0xc0008a60a0) (5) Data frame sent\n+ echo\n+ curlI0512 10:32:07.302323 689 log.go:172] (0xc000b8ea50) Data frame received for 5\nI0512 10:32:07.302338 689 log.go:172] (0xc0008a60a0) (5) Data frame handling\nI0512 10:32:07.302360 689 log.go:172] (0xc0008a60a0) (5) Data frame sent\n -q -s --connect-timeout 2 http://10.98.190.174:80/\nI0512 10:32:07.308249 689 log.go:172] (0xc000b8ea50) Data frame received for 3\nI0512 10:32:07.308264 689 log.go:172] (0xc0008b28c0) (3) Data frame handling\nI0512 10:32:07.308275 689 log.go:172] (0xc0008b28c0) (3) Data frame sent\nI0512 10:32:07.308838 689 log.go:172] (0xc000b8ea50) Data frame received for 3\nI0512 10:32:07.308862 689 log.go:172] (0xc0008b28c0) (3) Data frame handling\nI0512 10:32:07.308876 689 log.go:172] (0xc0008b28c0) (3) Data frame sent\nI0512 10:32:07.308899 689 log.go:172] (0xc000b8ea50) Data frame received for 5\nI0512 10:32:07.308918 689 log.go:172] (0xc0008a60a0) (5) Data 
frame handling\nI0512 10:32:07.308933 689 log.go:172] (0xc0008a60a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.190.174:80/\nI0512 10:32:07.315348 689 log.go:172] (0xc000b8ea50) Data frame received for 3\nI0512 10:32:07.315363 689 log.go:172] (0xc0008b28c0) (3) Data frame handling\nI0512 10:32:07.315374 689 log.go:172] (0xc0008b28c0) (3) Data frame sent\nI0512 10:32:07.315734 689 log.go:172] (0xc000b8ea50) Data frame received for 3\nI0512 10:32:07.315749 689 log.go:172] (0xc0008b28c0) (3) Data frame handling\nI0512 10:32:07.315775 689 log.go:172] (0xc000b8ea50) Data frame received for 5\nI0512 10:32:07.315801 689 log.go:172] (0xc0008a60a0) (5) Data frame handling\nI0512 10:32:07.315812 689 log.go:172] (0xc0008a60a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.190.174:80/\nI0512 10:32:07.315821 689 log.go:172] (0xc0008b28c0) (3) Data frame sent\nI0512 10:32:07.319279 689 log.go:172] (0xc000b8ea50) Data frame received for 3\nI0512 10:32:07.319291 689 log.go:172] (0xc0008b28c0) (3) Data frame handling\nI0512 10:32:07.319302 689 log.go:172] (0xc0008b28c0) (3) Data frame sent\nI0512 10:32:07.319612 689 log.go:172] (0xc000b8ea50) Data frame received for 5\nI0512 10:32:07.319623 689 log.go:172] (0xc0008a60a0) (5) Data frame handling\nI0512 10:32:07.319629 689 log.go:172] (0xc0008a60a0) (5) Data frame sent\nI0512 10:32:07.319636 689 log.go:172] (0xc000b8ea50) Data frame received for 3\nI0512 10:32:07.319641 689 log.go:172] (0xc0008b28c0) (3) Data frame handling\nI0512 10:32:07.319646 689 log.go:172] (0xc0008b28c0) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.190.174:80/\nI0512 10:32:07.323890 689 log.go:172] (0xc000b8ea50) Data frame received for 3\nI0512 10:32:07.323904 689 log.go:172] (0xc0008b28c0) (3) Data frame handling\nI0512 10:32:07.323914 689 log.go:172] (0xc0008b28c0) (3) Data frame sent\nI0512 10:32:07.324074 689 log.go:172] (0xc000b8ea50) Data frame received for 5\nI0512 10:32:07.324085 689 log.go:172] (0xc0008a60a0) (5) Data frame handling\nI0512 10:32:07.324097 689 log.go:172] (0xc0008a60a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.190.174:80/\nI0512 10:32:07.324142 689 log.go:172] (0xc000b8ea50) Data frame received for 3\nI0512 10:32:07.324152 689 log.go:172] (0xc0008b28c0) (3) Data frame handling\nI0512 10:32:07.324158 689 log.go:172] (0xc0008b28c0) (3) Data frame sent\nI0512 10:32:07.328667 689 log.go:172] (0xc000b8ea50) Data frame received for 3\nI0512 10:32:07.328685 689 log.go:172] (0xc0008b28c0) (3) Data frame handling\nI0512 10:32:07.328696 689 log.go:172] (0xc0008b28c0) (3) Data frame sent\nI0512 10:32:07.329364 689 log.go:172] (0xc000b8ea50) Data frame received for 3\nI0512 10:32:07.329376 689 log.go:172] (0xc0008b28c0) (3) Data frame handling\nI0512 10:32:07.329382 689 log.go:172] (0xc0008b28c0) (3) Data frame sent\nI0512 10:32:07.329396 689 log.go:172] (0xc000b8ea50) Data frame received for 5\nI0512 10:32:07.329414 689 log.go:172] (0xc0008a60a0) (5) Data frame handling\nI0512 10:32:07.329431 689 log.go:172] (0xc0008a60a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.190.174:80/\nI0512 10:32:07.334559 689 log.go:172] (0xc000b8ea50) Data frame received for 3\nI0512 10:32:07.334573 689 log.go:172] (0xc0008b28c0) (3) Data frame handling\nI0512 10:32:07.334588 689 log.go:172] (0xc0008b28c0) (3) Data frame sent\nI0512 10:32:07.335333 689 log.go:172] (0xc000b8ea50) Data frame received for 5\nI0512 10:32:07.335356 689 
log.go:172] (0xc0008a60a0) (5) Data frame handling\nI0512 10:32:07.335367 689 log.go:172] (0xc0008a60a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.190.174:80/\nI0512 10:32:07.335384 689 log.go:172] (0xc000b8ea50) Data frame received for 3\nI0512 10:32:07.335397 689 log.go:172] (0xc0008b28c0) (3) Data frame handling\nI0512 10:32:07.335413 689 log.go:172] (0xc0008b28c0) (3) Data frame sent\nI0512 10:32:07.341543 689 log.go:172] (0xc000b8ea50) Data frame received for 3\nI0512 10:32:07.341560 689 log.go:172] (0xc0008b28c0) (3) Data frame handling\nI0512 10:32:07.341576 689 log.go:172] (0xc0008b28c0) (3) Data frame sent\nI0512 10:32:07.341774 689 log.go:172] (0xc000b8ea50) Data frame received for 3\nI0512 10:32:07.341800 689 log.go:172] (0xc0008b28c0) (3) Data frame handling\nI0512 10:32:07.341811 689 log.go:172] (0xc0008b28c0) (3) Data frame sent\nI0512 10:32:07.341825 689 log.go:172] (0xc000b8ea50) Data frame received for 5\nI0512 10:32:07.341834 689 log.go:172] (0xc0008a60a0) (5) Data frame handling\nI0512 10:32:07.341843 689 log.go:172] (0xc0008a60a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.190.174:80/\nI0512 10:32:07.347849 689 log.go:172] (0xc000b8ea50) Data frame received for 3\nI0512 10:32:07.347864 689 log.go:172] (0xc0008b28c0) (3) Data frame handling\nI0512 10:32:07.347876 689 log.go:172] (0xc0008b28c0) (3) Data frame sent\nI0512 10:32:07.348179 689 log.go:172] (0xc000b8ea50) Data frame received for 5\nI0512 10:32:07.348195 689 log.go:172] (0xc0008a60a0) (5) Data frame handling\nI0512 10:32:07.348212 689 log.go:172] (0xc0008a60a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.190.174:80/\nI0512 10:32:07.348246 689 log.go:172] (0xc000b8ea50) Data frame received for 3\nI0512 10:32:07.348257 689 log.go:172] (0xc0008b28c0) (3) Data frame handling\nI0512 10:32:07.348265 689 log.go:172] (0xc0008b28c0) (3) Data frame sent\nI0512 10:32:07.352518 689 log.go:172] (0xc000b8ea50) Data frame received for 3\nI0512 10:32:07.352541 689 log.go:172] (0xc0008b28c0) (3) Data frame handling\nI0512 10:32:07.352563 689 log.go:172] (0xc0008b28c0) (3) Data frame sent\nI0512 10:32:07.353100 689 log.go:172] (0xc000b8ea50) Data frame received for 5\nI0512 10:32:07.353261 689 log.go:172] (0xc0008a60a0) (5) Data frame handling\nI0512 10:32:07.353272 689 log.go:172] (0xc0008a60a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.190.174:80/\nI0512 10:32:07.353284 689 log.go:172] (0xc000b8ea50) Data frame received for 3\nI0512 10:32:07.353291 689 log.go:172] (0xc0008b28c0) (3) Data frame handling\nI0512 10:32:07.353298 689 log.go:172] (0xc0008b28c0) (3) Data frame sent\nI0512 10:32:07.357868 689 log.go:172] (0xc000b8ea50) Data frame received for 3\nI0512 10:32:07.357882 689 log.go:172] (0xc0008b28c0) (3) Data frame handling\nI0512 10:32:07.357892 689 log.go:172] (0xc0008b28c0) (3) Data frame sent\nI0512 10:32:07.358587 689 log.go:172] (0xc000b8ea50) Data frame received for 5\nI0512 10:32:07.358610 689 log.go:172] (0xc0008a60a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.190.174:80/\nI0512 10:32:07.358626 689 log.go:172] (0xc000b8ea50) Data frame received for 3\nI0512 10:32:07.358640 689 log.go:172] (0xc0008b28c0) (3) Data frame handling\nI0512 10:32:07.358664 689 log.go:172] (0xc0008a60a0) (5) Data frame sent\nI0512 10:32:07.358695 689 log.go:172] (0xc0008b28c0) (3) Data frame sent\nI0512 10:32:07.365025 689 log.go:172] (0xc000b8ea50) Data frame received for 
3\nI0512 10:32:07.365050 689 log.go:172] (0xc0008b28c0) (3) Data frame handling\nI0512 10:32:07.365080 689 log.go:172] (0xc0008b28c0) (3) Data frame sent\nI0512 10:32:07.365905 689 log.go:172] (0xc000b8ea50) Data frame received for 3\nI0512 10:32:07.365957 689 log.go:172] (0xc0008b28c0) (3) Data frame handling\nI0512 10:32:07.365977 689 log.go:172] (0xc000b8ea50) Data frame received for 5\nI0512 10:32:07.365991 689 log.go:172] (0xc0008a60a0) (5) Data frame handling\nI0512 10:32:07.367538 689 log.go:172] (0xc000b8ea50) Data frame received for 1\nI0512 10:32:07.367557 689 log.go:172] (0xc0008b2320) (1) Data frame handling\nI0512 10:32:07.367571 689 log.go:172] (0xc0008b2320) (1) Data frame sent\nI0512 10:32:07.367584 689 log.go:172] (0xc000b8ea50) (0xc0008b2320) Stream removed, broadcasting: 1\nI0512 10:32:07.367703 689 log.go:172] (0xc000b8ea50) Go away received\nI0512 10:32:07.367827 689 log.go:172] (0xc000b8ea50) (0xc0008b2320) Stream removed, broadcasting: 1\nI0512 10:32:07.367839 689 log.go:172] (0xc000b8ea50) (0xc0008b28c0) Stream removed, broadcasting: 3\nI0512 10:32:07.367847 689 log.go:172] (0xc000b8ea50) (0xc0008a60a0) Stream removed, broadcasting: 5\n"
May 12 10:32:07.373: INFO: stdout: "\naffinity-clusterip-transition-kcbz5\naffinity-clusterip-transition-hhqm5\naffinity-clusterip-transition-hhqm5\naffinity-clusterip-transition-hhqm5\naffinity-clusterip-transition-hhqm5\naffinity-clusterip-transition-gv65s\naffinity-clusterip-transition-gv65s\naffinity-clusterip-transition-gv65s\naffinity-clusterip-transition-gv65s\naffinity-clusterip-transition-hhqm5\naffinity-clusterip-transition-gv65s\naffinity-clusterip-transition-kcbz5\naffinity-clusterip-transition-gv65s\naffinity-clusterip-transition-hhqm5\naffinity-clusterip-transition-gv65s\naffinity-clusterip-transition-kcbz5"
May 12 10:32:07.373: INFO: Received response from host:
May 12 10:32:07.373: INFO: Received response from host: affinity-clusterip-transition-kcbz5
May 12 10:32:07.373: INFO: Received response from host: affinity-clusterip-transition-hhqm5
May 12 10:32:07.373: INFO: Received response from host: affinity-clusterip-transition-hhqm5
May 12 10:32:07.373: INFO: Received response from host: affinity-clusterip-transition-hhqm5
May 12 10:32:07.373: INFO: Received response from host: affinity-clusterip-transition-hhqm5
May 12 10:32:07.373: INFO: Received response from host: affinity-clusterip-transition-gv65s
May 12 10:32:07.373: INFO: Received response from host: affinity-clusterip-transition-gv65s
May 12 10:32:07.373: INFO: Received response from host: affinity-clusterip-transition-gv65s
May 12 10:32:07.373: INFO: Received response from host: affinity-clusterip-transition-gv65s
May 12 10:32:07.373: INFO: Received response from host: affinity-clusterip-transition-hhqm5
May 12 10:32:07.373: INFO: Received response from host: affinity-clusterip-transition-gv65s
May 12 10:32:07.373: INFO: Received response from host: affinity-clusterip-transition-kcbz5
May 12 10:32:07.373: INFO: Received response from host: affinity-clusterip-transition-gv65s
May 12 10:32:07.373: INFO: Received response from host: affinity-clusterip-transition-hhqm5
May 12 10:32:07.373: INFO: Received response from host: affinity-clusterip-transition-gv65s
May 12 10:32:07.373: INFO: Received response from host: affinity-clusterip-transition-kcbz5
May 12 10:32:07.380: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5956 execpod-affinityb4zvg -- /bin/sh -x -c for i in $(seq 0
15); do echo; curl -q -s --connect-timeout 2 http://10.98.190.174:80/ ; done' May 12 10:32:07.669: INFO: stderr: "I0512 10:32:07.519022 705 log.go:172] (0xc00003a6e0) (0xc0004e9860) Create stream\nI0512 10:32:07.519088 705 log.go:172] (0xc00003a6e0) (0xc0004e9860) Stream added, broadcasting: 1\nI0512 10:32:07.521746 705 log.go:172] (0xc00003a6e0) Reply frame received for 1\nI0512 10:32:07.521777 705 log.go:172] (0xc00003a6e0) (0xc0004dcaa0) Create stream\nI0512 10:32:07.521788 705 log.go:172] (0xc00003a6e0) (0xc0004dcaa0) Stream added, broadcasting: 3\nI0512 10:32:07.522828 705 log.go:172] (0xc00003a6e0) Reply frame received for 3\nI0512 10:32:07.522856 705 log.go:172] (0xc00003a6e0) (0xc0004e9cc0) Create stream\nI0512 10:32:07.522865 705 log.go:172] (0xc00003a6e0) (0xc0004e9cc0) Stream added, broadcasting: 5\nI0512 10:32:07.523702 705 log.go:172] (0xc00003a6e0) Reply frame received for 5\nI0512 10:32:07.581835 705 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0512 10:32:07.581859 705 log.go:172] (0xc0004e9cc0) (5) Data frame handling\nI0512 10:32:07.581870 705 log.go:172] (0xc0004e9cc0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.190.174:80/\nI0512 10:32:07.581890 705 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 10:32:07.581900 705 log.go:172] (0xc0004dcaa0) (3) Data frame handling\nI0512 10:32:07.581908 705 log.go:172] (0xc0004dcaa0) (3) Data frame sent\nI0512 10:32:07.584461 705 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 10:32:07.584485 705 log.go:172] (0xc0004dcaa0) (3) Data frame handling\nI0512 10:32:07.584510 705 log.go:172] (0xc0004dcaa0) (3) Data frame sent\nI0512 10:32:07.584847 705 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 10:32:07.584876 705 log.go:172] (0xc0004dcaa0) (3) Data frame handling\nI0512 10:32:07.584888 705 log.go:172] (0xc0004dcaa0) (3) Data frame sent\nI0512 10:32:07.584910 705 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0512 10:32:07.584918 705 log.go:172] (0xc0004e9cc0) (5) Data frame handling\nI0512 10:32:07.584929 705 log.go:172] (0xc0004e9cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.190.174:80/\nI0512 10:32:07.588262 705 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 10:32:07.588293 705 log.go:172] (0xc0004dcaa0) (3) Data frame handling\nI0512 10:32:07.588323 705 log.go:172] (0xc0004dcaa0) (3) Data frame sent\nI0512 10:32:07.588504 705 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 10:32:07.588519 705 log.go:172] (0xc0004dcaa0) (3) Data frame handling\nI0512 10:32:07.588537 705 log.go:172] (0xc0004dcaa0) (3) Data frame sent\nI0512 10:32:07.588548 705 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0512 10:32:07.588556 705 log.go:172] (0xc0004e9cc0) (5) Data frame handling\nI0512 10:32:07.588582 705 log.go:172] (0xc0004e9cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.190.174:80/\nI0512 10:32:07.593350 705 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 10:32:07.593379 705 log.go:172] (0xc0004dcaa0) (3) Data frame handling\nI0512 10:32:07.593399 705 log.go:172] (0xc0004dcaa0) (3) Data frame sent\nI0512 10:32:07.593647 705 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 10:32:07.593681 705 log.go:172] (0xc0004dcaa0) (3) Data frame handling\nI0512 10:32:07.593699 705 log.go:172] (0xc0004dcaa0) (3) Data frame sent\nI0512 10:32:07.593717 705 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0512 10:32:07.593727 705 log.go:172] 
(0xc0004e9cc0) (5) Data frame handling\nI0512 10:32:07.593737 705 log.go:172] (0xc0004e9cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.190.174:80/\nI0512 10:32:07.596498 705 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 10:32:07.596513 705 log.go:172] (0xc0004dcaa0) (3) Data frame handling\nI0512 10:32:07.596521 705 log.go:172] (0xc0004dcaa0) (3) Data frame sent\nI0512 10:32:07.597517 705 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0512 10:32:07.597533 705 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 10:32:07.597542 705 log.go:172] (0xc0004dcaa0) (3) Data frame handling\nI0512 10:32:07.597547 705 log.go:172] (0xc0004dcaa0) (3) Data frame sent\nI0512 10:32:07.597597 705 log.go:172] (0xc0004e9cc0) (5) Data frame handling\nI0512 10:32:07.597634 705 log.go:172] (0xc0004e9cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.190.174:80/\nI0512 10:32:07.604086 705 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 10:32:07.604110 705 log.go:172] (0xc0004dcaa0) (3) Data frame handling\nI0512 10:32:07.604128 705 log.go:172] (0xc0004dcaa0) (3) Data frame sent\nI0512 10:32:07.604775 705 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 10:32:07.604809 705 log.go:172] (0xc0004dcaa0) (3) Data frame handling\nI0512 10:32:07.604825 705 log.go:172] (0xc0004dcaa0) (3) Data frame sent\nI0512 10:32:07.604845 705 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0512 10:32:07.604861 705 log.go:172] (0xc0004e9cc0) (5) Data frame handling\nI0512 10:32:07.604879 705 log.go:172] (0xc0004e9cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.190.174:80/\nI0512 10:32:07.609096 705 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 10:32:07.609215 705 log.go:172] (0xc0004dcaa0) (3) Data frame handling\nI0512 10:32:07.609230 705 log.go:172] (0xc0004dcaa0) (3) Data frame sent\nI0512 10:32:07.609802 705 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 10:32:07.609812 705 log.go:172] (0xc0004dcaa0) (3) Data frame handling\nI0512 10:32:07.609829 705 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0512 10:32:07.609853 705 log.go:172] (0xc0004e9cc0) (5) Data frame handling\nI0512 10:32:07.609866 705 log.go:172] (0xc0004e9cc0) (5) Data frame sent\nI0512 10:32:07.609876 705 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0512 10:32:07.609886 705 log.go:172] (0xc0004e9cc0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.190.174:80/\nI0512 10:32:07.609906 705 log.go:172] (0xc0004e9cc0) (5) Data frame sent\nI0512 10:32:07.609923 705 log.go:172] (0xc0004dcaa0) (3) Data frame sent\nI0512 10:32:07.614916 705 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 10:32:07.614940 705 log.go:172] (0xc0004dcaa0) (3) Data frame handling\nI0512 10:32:07.614976 705 log.go:172] (0xc0004dcaa0) (3) Data frame sent\nI0512 10:32:07.615335 705 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 10:32:07.615362 705 log.go:172] (0xc0004dcaa0) (3) Data frame handling\nI0512 10:32:07.615378 705 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0512 10:32:07.615399 705 log.go:172] (0xc0004e9cc0) (5) Data frame handling\nI0512 10:32:07.615423 705 log.go:172] (0xc0004e9cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.190.174:80/\nI0512 10:32:07.615445 705 log.go:172] (0xc0004dcaa0) (3) Data frame sent\nI0512 10:32:07.619697 705 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 
10:32:07.619731 705 log.go:172] (0xc0004dcaa0) (3) Data frame handling\nI0512 10:32:07.619764 705 log.go:172] (0xc0004dcaa0) (3) Data frame sent\nI0512 10:32:07.620131 705 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 10:32:07.620159 705 log.go:172] (0xc0004dcaa0) (3) Data frame handling\nI0512 10:32:07.620173 705 log.go:172] (0xc0004dcaa0) (3) Data frame sent\nI0512 10:32:07.620192 705 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0512 10:32:07.620214 705 log.go:172] (0xc0004e9cc0) (5) Data frame handling\nI0512 10:32:07.620243 705 log.go:172] (0xc0004e9cc0) (5) Data frame sent\nI0512 10:32:07.620257 705 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0512 10:32:07.620268 705 log.go:172] (0xc0004e9cc0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.190.174:80/\nI0512 10:32:07.620292 705 log.go:172] (0xc0004e9cc0) (5) Data frame sent\nI0512 10:32:07.624180 705 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 10:32:07.624209 705 log.go:172] (0xc0004dcaa0) (3) Data frame handling\nI0512 10:32:07.624234 705 log.go:172] (0xc0004dcaa0) (3) Data frame sent\nI0512 10:32:07.625007 705 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0512 10:32:07.625040 705 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 10:32:07.625070 705 log.go:172] (0xc0004dcaa0) (3) Data frame handling\nI0512 10:32:07.625088 705 log.go:172] (0xc0004dcaa0) (3) Data frame sent\nI0512 10:32:07.625305 705 log.go:172] (0xc0004e9cc0) (5) Data frame handling\nI0512 10:32:07.625353 705 log.go:172] (0xc0004e9cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.190.174:80/\nI0512 10:32:07.629499 705 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 10:32:07.629534 705 log.go:172] (0xc0004dcaa0) (3) Data frame handling\nI0512 10:32:07.629553 705 log.go:172] (0xc0004dcaa0) (3) Data frame sent\nI0512 10:32:07.629676 705 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0512 10:32:07.629694 705 log.go:172] (0xc0004e9cc0) (5) Data frame handling\nI0512 10:32:07.629703 705 log.go:172] (0xc0004e9cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.190.174:80/\nI0512 10:32:07.629886 705 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 10:32:07.629899 705 log.go:172] (0xc0004dcaa0) (3) Data frame handling\nI0512 10:32:07.629908 705 log.go:172] (0xc0004dcaa0) (3) Data frame sent\nI0512 10:32:07.633632 705 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 10:32:07.633662 705 log.go:172] (0xc0004dcaa0) (3) Data frame handling\nI0512 10:32:07.633718 705 log.go:172] (0xc0004dcaa0) (3) Data frame sent\nI0512 10:32:07.634031 705 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 10:32:07.634063 705 log.go:172] (0xc0004dcaa0) (3) Data frame handling\nI0512 10:32:07.634074 705 log.go:172] (0xc0004dcaa0) (3) Data frame sent\nI0512 10:32:07.634088 705 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0512 10:32:07.634096 705 log.go:172] (0xc0004e9cc0) (5) Data frame handling\nI0512 10:32:07.634104 705 log.go:172] (0xc0004e9cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.190.174:80/\nI0512 10:32:07.638056 705 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 10:32:07.638074 705 log.go:172] (0xc0004dcaa0) (3) Data frame handling\nI0512 10:32:07.638089 705 log.go:172] (0xc0004dcaa0) (3) Data frame sent\nI0512 10:32:07.638635 705 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0512 10:32:07.638651 705 log.go:172] 
(0xc0004e9cc0) (5) Data frame handling\nI0512 10:32:07.638666 705 log.go:172] (0xc0004e9cc0) (5) Data frame sent\nI0512 10:32:07.638676 705 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0512 10:32:07.638685 705 log.go:172] (0xc0004e9cc0) (5) Data frame handling\nI0512 10:32:07.638695 705 log.go:172] (0xc00003a6e0) Data frame received for 3\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.190.174:80/\nI0512 10:32:07.638707 705 log.go:172] (0xc0004dcaa0) (3) Data frame handling\nI0512 10:32:07.638764 705 log.go:172] (0xc0004dcaa0) (3) Data frame sent\nI0512 10:32:07.638787 705 log.go:172] (0xc0004e9cc0) (5) Data frame sent\nI0512 10:32:07.644767 705 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 10:32:07.644780 705 log.go:172] (0xc0004dcaa0) (3) Data frame handling\nI0512 10:32:07.644787 705 log.go:172] (0xc0004dcaa0) (3) Data frame sent\nI0512 10:32:07.645572 705 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 10:32:07.645582 705 log.go:172] (0xc0004dcaa0) (3) Data frame handling\nI0512 10:32:07.645598 705 log.go:172] (0xc0004dcaa0) (3) Data frame sent\nI0512 10:32:07.645656 705 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0512 10:32:07.645675 705 log.go:172] (0xc0004e9cc0) (5) Data frame handling\nI0512 10:32:07.645686 705 log.go:172] (0xc0004e9cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.190.174:80/\nI0512 10:32:07.650298 705 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 10:32:07.650316 705 log.go:172] (0xc0004dcaa0) (3) Data frame handling\nI0512 10:32:07.650332 705 log.go:172] (0xc0004dcaa0) (3) Data frame sent\nI0512 10:32:07.650923 705 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0512 10:32:07.650935 705 log.go:172] (0xc0004e9cc0) (5) Data frame handling\nI0512 10:32:07.650948 705 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 10:32:07.650964 705 log.go:172] (0xc0004dcaa0) (3) Data frame handling\nI0512 10:32:07.650980 705 log.go:172] (0xc0004dcaa0) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.190.174:80/\nI0512 10:32:07.650999 705 log.go:172] (0xc0004e9cc0) (5) Data frame sent\nI0512 10:32:07.656389 705 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 10:32:07.656413 705 log.go:172] (0xc0004dcaa0) (3) Data frame handling\nI0512 10:32:07.656445 705 log.go:172] (0xc0004dcaa0) (3) Data frame sent\nI0512 10:32:07.656975 705 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 10:32:07.656991 705 log.go:172] (0xc0004dcaa0) (3) Data frame handling\nI0512 10:32:07.657000 705 log.go:172] (0xc0004dcaa0) (3) Data frame sent\nI0512 10:32:07.657077 705 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0512 10:32:07.657102 705 log.go:172] (0xc0004e9cc0) (5) Data frame handling\nI0512 10:32:07.657335 705 log.go:172] (0xc0004e9cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.190.174:80/\nI0512 10:32:07.662343 705 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 10:32:07.662386 705 log.go:172] (0xc0004dcaa0) (3) Data frame handling\nI0512 10:32:07.662402 705 log.go:172] (0xc0004dcaa0) (3) Data frame sent\nI0512 10:32:07.663016 705 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0512 10:32:07.663047 705 log.go:172] (0xc0004dcaa0) (3) Data frame handling\nI0512 10:32:07.663835 705 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0512 10:32:07.663857 705 log.go:172] (0xc0004e9cc0) (5) Data frame handling\nI0512 10:32:07.664862 705 log.go:172] (0xc00003a6e0) Data frame received 
for 1\nI0512 10:32:07.664897 705 log.go:172] (0xc0004e9860) (1) Data frame handling\nI0512 10:32:07.664915 705 log.go:172] (0xc0004e9860) (1) Data frame sent\nI0512 10:32:07.664934 705 log.go:172] (0xc00003a6e0) (0xc0004e9860) Stream removed, broadcasting: 1\nI0512 10:32:07.665104 705 log.go:172] (0xc00003a6e0) Go away received\nI0512 10:32:07.665544 705 log.go:172] (0xc00003a6e0) (0xc0004e9860) Stream removed, broadcasting: 1\nI0512 10:32:07.665575 705 log.go:172] (0xc00003a6e0) (0xc0004dcaa0) Stream removed, broadcasting: 3\nI0512 10:32:07.665592 705 log.go:172] (0xc00003a6e0) (0xc0004e9cc0) Stream removed, broadcasting: 5\n" May 12 10:32:07.670: INFO: stdout: "\naffinity-clusterip-transition-kcbz5\naffinity-clusterip-transition-kcbz5\naffinity-clusterip-transition-kcbz5\naffinity-clusterip-transition-kcbz5\naffinity-clusterip-transition-kcbz5\naffinity-clusterip-transition-kcbz5\naffinity-clusterip-transition-kcbz5\naffinity-clusterip-transition-kcbz5\naffinity-clusterip-transition-kcbz5\naffinity-clusterip-transition-kcbz5\naffinity-clusterip-transition-kcbz5\naffinity-clusterip-transition-kcbz5\naffinity-clusterip-transition-kcbz5\naffinity-clusterip-transition-kcbz5\naffinity-clusterip-transition-kcbz5\naffinity-clusterip-transition-kcbz5" May 12 10:32:07.670: INFO: Received response from host: May 12 10:32:07.670: INFO: Received response from host: affinity-clusterip-transition-kcbz5 May 12 10:32:07.670: INFO: Received response from host: affinity-clusterip-transition-kcbz5 May 12 10:32:07.670: INFO: Received response from host: affinity-clusterip-transition-kcbz5 May 12 10:32:07.670: INFO: Received response from host: affinity-clusterip-transition-kcbz5 May 12 10:32:07.670: INFO: Received response from host: affinity-clusterip-transition-kcbz5 May 12 10:32:07.670: INFO: Received response from host: affinity-clusterip-transition-kcbz5 May 12 10:32:07.670: INFO: Received response from host: affinity-clusterip-transition-kcbz5 May 12 10:32:07.670: INFO: Received response from host: affinity-clusterip-transition-kcbz5 May 12 10:32:07.670: INFO: Received response from host: affinity-clusterip-transition-kcbz5 May 12 10:32:07.670: INFO: Received response from host: affinity-clusterip-transition-kcbz5 May 12 10:32:07.670: INFO: Received response from host: affinity-clusterip-transition-kcbz5 May 12 10:32:07.670: INFO: Received response from host: affinity-clusterip-transition-kcbz5 May 12 10:32:07.670: INFO: Received response from host: affinity-clusterip-transition-kcbz5 May 12 10:32:07.670: INFO: Received response from host: affinity-clusterip-transition-kcbz5 May 12 10:32:07.670: INFO: Received response from host: affinity-clusterip-transition-kcbz5 May 12 10:32:07.670: INFO: Received response from host: affinity-clusterip-transition-kcbz5 May 12 10:32:07.670: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-5956, will wait for the garbage collector to delete the pods May 12 10:32:07.796: INFO: Deleting ReplicationController affinity-clusterip-transition took: 5.083976ms May 12 10:32:08.296: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 500.186981ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:32:28.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5956" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:39.338 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":55,"skipped":886,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:32:28.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 12 10:32:29.202: INFO: Waiting up to 5m0s for pod "downward-api-f5d47e3b-384f-44c5-859c-a21699418d45" in namespace "downward-api-5040" to be "Succeeded or Failed" May 12 10:32:29.268: INFO: Pod "downward-api-f5d47e3b-384f-44c5-859c-a21699418d45": Phase="Pending", Reason="", readiness=false. Elapsed: 65.487657ms May 12 10:32:31.661: INFO: Pod "downward-api-f5d47e3b-384f-44c5-859c-a21699418d45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.458437558s May 12 10:32:33.666: INFO: Pod "downward-api-f5d47e3b-384f-44c5-859c-a21699418d45": Phase="Pending", Reason="", readiness=false. Elapsed: 4.463218622s May 12 10:32:35.859: INFO: Pod "downward-api-f5d47e3b-384f-44c5-859c-a21699418d45": Phase="Pending", Reason="", readiness=false. Elapsed: 6.656664036s May 12 10:32:37.921: INFO: Pod "downward-api-f5d47e3b-384f-44c5-859c-a21699418d45": Phase="Pending", Reason="", readiness=false. Elapsed: 8.719112528s May 12 10:32:40.273: INFO: Pod "downward-api-f5d47e3b-384f-44c5-859c-a21699418d45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.07070117s STEP: Saw pod success May 12 10:32:40.273: INFO: Pod "downward-api-f5d47e3b-384f-44c5-859c-a21699418d45" satisfied condition "Succeeded or Failed" May 12 10:32:40.303: INFO: Trying to get logs from node latest-worker pod downward-api-f5d47e3b-384f-44c5-859c-a21699418d45 container dapi-container: STEP: delete the pod May 12 10:32:41.005: INFO: Waiting for pod downward-api-f5d47e3b-384f-44c5-859c-a21699418d45 to disappear May 12 10:32:41.101: INFO: Pod downward-api-f5d47e3b-384f-44c5-859c-a21699418d45 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:32:41.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5040" for this suite. 
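The Downward API test above injects the pod's own name, namespace, and IP into the container environment and then reads them back from the container log once the pod reaches Succeeded. A sketch of an equivalent pod, assuming kubectl against a running cluster; the pod name downward-demo and the MY_POD_* variable names are illustrative, not the ones the framework generated:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep ^MY_POD_"]
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: MY_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
EOF
# Once the pod completes, the injected values appear in its log:
kubectl logs downward-demo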
• [SLOW TEST:12.361 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":288,"completed":56,"skipped":952,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:32:41.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 12 10:32:41.653: INFO: >>> kubeConfig: /root/.kube/config May 12 10:32:44.642: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:32:59.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1245" for this suite. 
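The CustomResourcePublishOpenAPI test above registers CRDs in two different API groups and verifies that both schemas show up in the apiserver's aggregated OpenAPI document. A sketch of one such CRD, assuming the apiextensions.k8s.io/v1 API; the group demo.example.com and kind Foo are hypothetical:

kubectl apply -f - <<EOF
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.demo.example.com
spec:
  group: demo.example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
EOF
# The published definition should then be visible in the OpenAPI spec:
kubectl get --raw /openapi/v2 | grep -o demo.example.com | head -1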
• [SLOW TEST:18.468 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":288,"completed":57,"skipped":991,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:32:59.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-01c3092e-0957-41dd-99be-048dd059508c STEP: Creating a pod to test consume configMaps May 12 10:32:59.743: INFO: Waiting up to 5m0s for pod "pod-configmaps-8994aa94-3b4f-451f-8b9a-b75fa30955f0" in namespace "configmap-6562" to be "Succeeded or Failed" May 12 10:32:59.760: INFO: Pod "pod-configmaps-8994aa94-3b4f-451f-8b9a-b75fa30955f0": Phase="Pending", Reason="", readiness=false. Elapsed: 16.350457ms May 12 10:33:01.763: INFO: Pod "pod-configmaps-8994aa94-3b4f-451f-8b9a-b75fa30955f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019537905s May 12 10:33:03.766: INFO: Pod "pod-configmaps-8994aa94-3b4f-451f-8b9a-b75fa30955f0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023091163s May 12 10:33:05.883: INFO: Pod "pod-configmaps-8994aa94-3b4f-451f-8b9a-b75fa30955f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.140067935s STEP: Saw pod success May 12 10:33:05.883: INFO: Pod "pod-configmaps-8994aa94-3b4f-451f-8b9a-b75fa30955f0" satisfied condition "Succeeded or Failed" May 12 10:33:05.887: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-8994aa94-3b4f-451f-8b9a-b75fa30955f0 container configmap-volume-test: STEP: delete the pod May 12 10:33:06.090: INFO: Waiting for pod pod-configmaps-8994aa94-3b4f-451f-8b9a-b75fa30955f0 to disappear May 12 10:33:06.119: INFO: Pod pod-configmaps-8994aa94-3b4f-451f-8b9a-b75fa30955f0 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:33:06.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6562" for this suite. 
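The ConfigMap test above mounts a ConfigMap as a volume with an explicit defaultMode and has the container report the resulting file permissions before the pod completes. A sketch of the same setup, assuming kubectl; the names, mount path, and the 0400 mode are illustrative:

kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-mode-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: cm-mode-pod
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -lL /etc/cm-volume && cat /etc/cm-volume/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm-volume
  volumes:
  - name: cm
    configMap:
      name: cm-mode-demo
      defaultMode: 0400
EOF
# The log should list data-1 with mode 0400 (r--------) and print value-1:
kubectl logs cm-mode-pod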
• [SLOW TEST:6.461 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":58,"skipped":995,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:33:06.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 12 10:33:10.290: INFO: &Pod{ObjectMeta:{send-events-61b3b96d-6964-42b4-a008-1111cd424f86 events-3271 /api/v1/namespaces/events-3271/pods/send-events-61b3b96d-6964-42b4-a008-1111cd424f86 2a858e1c-ae42-42b3-89c1-957565ddcc5b 3778380 0 2020-05-12 10:33:06 +0000 UTC map[name:foo time:251426728] map[] [] [] [{e2e.test Update v1 2020-05-12 10:33:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-12 10:33:10 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.38\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s4fb8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s4fb8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s4fb8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 10:33:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 10:33:10 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 10:33:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 10:33:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.38,StartTime:2020-05-12 10:33:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-12 10:33:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://cd85a740f2fed102c00e477f28a315406a52cc92de34798a22167b80398e571e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.38,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 12 10:33:12.296: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 12 10:33:14.300: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:33:14.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-3271" for this suite. • [SLOW TEST:8.244 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":288,"completed":59,"skipped":1005,"failed":0} SS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:33:14.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-2408 STEP: creating service affinity-nodeport-transition in namespace services-2408 STEP: creating replication controller affinity-nodeport-transition in namespace services-2408 I0512 10:33:15.107764 7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: 
services-2408, replica count: 3 I0512 10:33:18.158123 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 10:33:21.158282 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 12 10:33:21.170: INFO: Creating new exec pod May 12 10:33:26.382: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2408 execpod-affinityghptz -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' May 12 10:33:26.595: INFO: stderr: "I0512 10:33:26.507481 720 log.go:172] (0xc000afe210) (0xc0004b14a0) Create stream\nI0512 10:33:26.507529 720 log.go:172] (0xc000afe210) (0xc0004b14a0) Stream added, broadcasting: 1\nI0512 10:33:26.510120 720 log.go:172] (0xc000afe210) Reply frame received for 1\nI0512 10:33:26.510159 720 log.go:172] (0xc000afe210) (0xc0003c2460) Create stream\nI0512 10:33:26.510176 720 log.go:172] (0xc000afe210) (0xc0003c2460) Stream added, broadcasting: 3\nI0512 10:33:26.510985 720 log.go:172] (0xc000afe210) Reply frame received for 3\nI0512 10:33:26.511013 720 log.go:172] (0xc000afe210) (0xc0003c2dc0) Create stream\nI0512 10:33:26.511023 720 log.go:172] (0xc000afe210) (0xc0003c2dc0) Stream added, broadcasting: 5\nI0512 10:33:26.511866 720 log.go:172] (0xc000afe210) Reply frame received for 5\nI0512 10:33:26.586340 720 log.go:172] (0xc000afe210) Data frame received for 5\nI0512 10:33:26.586393 720 log.go:172] (0xc0003c2dc0) (5) Data frame handling\nI0512 10:33:26.586423 720 log.go:172] (0xc0003c2dc0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nI0512 10:33:26.586726 720 log.go:172] (0xc000afe210) Data frame received for 5\nI0512 10:33:26.586765 720 log.go:172] (0xc0003c2dc0) (5) Data frame handling\nI0512 10:33:26.586787 720 log.go:172] (0xc0003c2dc0) (5) Data frame sent\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0512 10:33:26.587041 720 log.go:172] (0xc000afe210) Data frame received for 3\nI0512 10:33:26.587070 720 log.go:172] (0xc0003c2460) (3) Data frame handling\nI0512 10:33:26.587290 720 log.go:172] (0xc000afe210) Data frame received for 5\nI0512 10:33:26.587301 720 log.go:172] (0xc0003c2dc0) (5) Data frame handling\nI0512 10:33:26.589414 720 log.go:172] (0xc000afe210) Data frame received for 1\nI0512 10:33:26.589452 720 log.go:172] (0xc0004b14a0) (1) Data frame handling\nI0512 10:33:26.589489 720 log.go:172] (0xc0004b14a0) (1) Data frame sent\nI0512 10:33:26.589514 720 log.go:172] (0xc000afe210) (0xc0004b14a0) Stream removed, broadcasting: 1\nI0512 10:33:26.589545 720 log.go:172] (0xc000afe210) Go away received\nI0512 10:33:26.589994 720 log.go:172] (0xc000afe210) (0xc0004b14a0) Stream removed, broadcasting: 1\nI0512 10:33:26.590016 720 log.go:172] (0xc000afe210) (0xc0003c2460) Stream removed, broadcasting: 3\nI0512 10:33:26.590031 720 log.go:172] (0xc000afe210) (0xc0003c2dc0) Stream removed, broadcasting: 5\n" May 12 10:33:26.595: INFO: stdout: "" May 12 10:33:26.596: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2408 execpod-affinityghptz -- /bin/sh -x -c nc -zv -t -w 2 10.100.223.232 80' May 12 10:33:26.820: INFO: stderr: "I0512 10:33:26.733775 740 log.go:172] (0xc000aa1970) (0xc00084c500) Create stream\nI0512 10:33:26.733831 740 
log.go:172] (0xc000aa1970) (0xc00084c500) Stream added, broadcasting: 1\nI0512 10:33:26.736301 740 log.go:172] (0xc000aa1970) Reply frame received for 1\nI0512 10:33:26.736356 740 log.go:172] (0xc000aa1970) (0xc00043fcc0) Create stream\nI0512 10:33:26.736382 740 log.go:172] (0xc000aa1970) (0xc00043fcc0) Stream added, broadcasting: 3\nI0512 10:33:26.739257 740 log.go:172] (0xc000aa1970) Reply frame received for 3\nI0512 10:33:26.739345 740 log.go:172] (0xc000aa1970) (0xc00015dc20) Create stream\nI0512 10:33:26.739374 740 log.go:172] (0xc000aa1970) (0xc00015dc20) Stream added, broadcasting: 5\nI0512 10:33:26.740651 740 log.go:172] (0xc000aa1970) Reply frame received for 5\nI0512 10:33:26.813577 740 log.go:172] (0xc000aa1970) Data frame received for 5\nI0512 10:33:26.813619 740 log.go:172] (0xc00015dc20) (5) Data frame handling\nI0512 10:33:26.813634 740 log.go:172] (0xc00015dc20) (5) Data frame sent\n+ nc -zv -t -w 2 10.100.223.232 80\nConnection to 10.100.223.232 80 port [tcp/http] succeeded!\nI0512 10:33:26.813649 740 log.go:172] (0xc000aa1970) Data frame received for 3\nI0512 10:33:26.813657 740 log.go:172] (0xc00043fcc0) (3) Data frame handling\nI0512 10:33:26.813733 740 log.go:172] (0xc000aa1970) Data frame received for 5\nI0512 10:33:26.813760 740 log.go:172] (0xc00015dc20) (5) Data frame handling\nI0512 10:33:26.815132 740 log.go:172] (0xc000aa1970) Data frame received for 1\nI0512 10:33:26.815164 740 log.go:172] (0xc00084c500) (1) Data frame handling\nI0512 10:33:26.815188 740 log.go:172] (0xc00084c500) (1) Data frame sent\nI0512 10:33:26.815468 740 log.go:172] (0xc000aa1970) (0xc00084c500) Stream removed, broadcasting: 1\nI0512 10:33:26.815489 740 log.go:172] (0xc000aa1970) Go away received\nI0512 10:33:26.815879 740 log.go:172] (0xc000aa1970) (0xc00084c500) Stream removed, broadcasting: 1\nI0512 10:33:26.815905 740 log.go:172] (0xc000aa1970) (0xc00043fcc0) Stream removed, broadcasting: 3\nI0512 10:33:26.815918 740 log.go:172] (0xc000aa1970) (0xc00015dc20) Stream removed, broadcasting: 5\n" May 12 10:33:26.820: INFO: stdout: "" May 12 10:33:26.820: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2408 execpod-affinityghptz -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31404' May 12 10:33:27.032: INFO: stderr: "I0512 10:33:26.941657 762 log.go:172] (0xc000c0e9a0) (0xc000ba0460) Create stream\nI0512 10:33:26.941714 762 log.go:172] (0xc000c0e9a0) (0xc000ba0460) Stream added, broadcasting: 1\nI0512 10:33:26.944040 762 log.go:172] (0xc000c0e9a0) Reply frame received for 1\nI0512 10:33:26.944077 762 log.go:172] (0xc000c0e9a0) (0xc000ba0500) Create stream\nI0512 10:33:26.944086 762 log.go:172] (0xc000c0e9a0) (0xc000ba0500) Stream added, broadcasting: 3\nI0512 10:33:26.944820 762 log.go:172] (0xc000c0e9a0) Reply frame received for 3\nI0512 10:33:26.944844 762 log.go:172] (0xc000c0e9a0) (0xc000ba05a0) Create stream\nI0512 10:33:26.944852 762 log.go:172] (0xc000c0e9a0) (0xc000ba05a0) Stream added, broadcasting: 5\nI0512 10:33:26.945798 762 log.go:172] (0xc000c0e9a0) Reply frame received for 5\nI0512 10:33:27.020137 762 log.go:172] (0xc000c0e9a0) Data frame received for 3\nI0512 10:33:27.020174 762 log.go:172] (0xc000ba0500) (3) Data frame handling\nI0512 10:33:27.020198 762 log.go:172] (0xc000c0e9a0) Data frame received for 5\nI0512 10:33:27.020217 762 log.go:172] (0xc000ba05a0) (5) Data frame handling\nI0512 10:33:27.020235 762 log.go:172] (0xc000ba05a0) (5) Data frame sent\nI0512 10:33:27.020243 762 log.go:172] 
(0xc000c0e9a0) Data frame received for 5\nI0512 10:33:27.020249 762 log.go:172] (0xc000ba05a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31404\nConnection to 172.17.0.13 31404 port [tcp/31404] succeeded!\nI0512 10:33:27.022124 762 log.go:172] (0xc000c0e9a0) Data frame received for 1\nI0512 10:33:27.022155 762 log.go:172] (0xc000ba0460) (1) Data frame handling\nI0512 10:33:27.022171 762 log.go:172] (0xc000ba0460) (1) Data frame sent\nI0512 10:33:27.022192 762 log.go:172] (0xc000c0e9a0) (0xc000ba0460) Stream removed, broadcasting: 1\nI0512 10:33:27.022217 762 log.go:172] (0xc000c0e9a0) Go away received\nI0512 10:33:27.022840 762 log.go:172] (0xc000c0e9a0) (0xc000ba0460) Stream removed, broadcasting: 1\nI0512 10:33:27.022860 762 log.go:172] (0xc000c0e9a0) (0xc000ba0500) Stream removed, broadcasting: 3\nI0512 10:33:27.022871 762 log.go:172] (0xc000c0e9a0) (0xc000ba05a0) Stream removed, broadcasting: 5\n" May 12 10:33:27.032: INFO: stdout: "" May 12 10:33:27.032: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2408 execpod-affinityghptz -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31404' May 12 10:33:27.231: INFO: stderr: "I0512 10:33:27.165834 781 log.go:172] (0xc000a3a000) (0xc0004b4e60) Create stream\nI0512 10:33:27.165896 781 log.go:172] (0xc000a3a000) (0xc0004b4e60) Stream added, broadcasting: 1\nI0512 10:33:27.167910 781 log.go:172] (0xc000a3a000) Reply frame received for 1\nI0512 10:33:27.167940 781 log.go:172] (0xc000a3a000) (0xc0004b55e0) Create stream\nI0512 10:33:27.167952 781 log.go:172] (0xc000a3a000) (0xc0004b55e0) Stream added, broadcasting: 3\nI0512 10:33:27.168773 781 log.go:172] (0xc000a3a000) Reply frame received for 3\nI0512 10:33:27.168794 781 log.go:172] (0xc000a3a000) (0xc00067a3c0) Create stream\nI0512 10:33:27.168799 781 log.go:172] (0xc000a3a000) (0xc00067a3c0) Stream added, broadcasting: 5\nI0512 10:33:27.169518 781 log.go:172] (0xc000a3a000) Reply frame received for 5\nI0512 10:33:27.226637 781 log.go:172] (0xc000a3a000) Data frame received for 3\nI0512 10:33:27.226671 781 log.go:172] (0xc0004b55e0) (3) Data frame handling\nI0512 10:33:27.226690 781 log.go:172] (0xc000a3a000) Data frame received for 5\nI0512 10:33:27.226700 781 log.go:172] (0xc00067a3c0) (5) Data frame handling\nI0512 10:33:27.226712 781 log.go:172] (0xc00067a3c0) (5) Data frame sent\nI0512 10:33:27.226725 781 log.go:172] (0xc000a3a000) Data frame received for 5\nI0512 10:33:27.226743 781 log.go:172] (0xc00067a3c0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31404\nConnection to 172.17.0.12 31404 port [tcp/31404] succeeded!\nI0512 10:33:27.227774 781 log.go:172] (0xc000a3a000) Data frame received for 1\nI0512 10:33:27.227792 781 log.go:172] (0xc0004b4e60) (1) Data frame handling\nI0512 10:33:27.227813 781 log.go:172] (0xc0004b4e60) (1) Data frame sent\nI0512 10:33:27.227826 781 log.go:172] (0xc000a3a000) (0xc0004b4e60) Stream removed, broadcasting: 1\nI0512 10:33:27.227922 781 log.go:172] (0xc000a3a000) Go away received\nI0512 10:33:27.228123 781 log.go:172] (0xc000a3a000) (0xc0004b4e60) Stream removed, broadcasting: 1\nI0512 10:33:27.228136 781 log.go:172] (0xc000a3a000) (0xc0004b55e0) Stream removed, broadcasting: 3\nI0512 10:33:27.228142 781 log.go:172] (0xc000a3a000) (0xc00067a3c0) Stream removed, broadcasting: 5\n" May 12 10:33:27.231: INFO: stdout: "" May 12 10:33:27.302: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec 
--namespace=services-2408 execpod-affinityghptz -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31404/ ; done' May 12 10:33:28.348: INFO: stderr: "I0512 10:33:28.173832 801 log.go:172] (0xc000a069a0) (0xc00066c1e0) Create stream\nI0512 10:33:28.173897 801 log.go:172] (0xc000a069a0) (0xc00066c1e0) Stream added, broadcasting: 1\nI0512 10:33:28.175426 801 log.go:172] (0xc000a069a0) Reply frame received for 1\nI0512 10:33:28.175474 801 log.go:172] (0xc000a069a0) (0xc000666b40) Create stream\nI0512 10:33:28.175487 801 log.go:172] (0xc000a069a0) (0xc000666b40) Stream added, broadcasting: 3\nI0512 10:33:28.176389 801 log.go:172] (0xc000a069a0) Reply frame received for 3\nI0512 10:33:28.176419 801 log.go:172] (0xc000a069a0) (0xc00064a820) Create stream\nI0512 10:33:28.176432 801 log.go:172] (0xc000a069a0) (0xc00064a820) Stream added, broadcasting: 5\nI0512 10:33:28.177573 801 log.go:172] (0xc000a069a0) Reply frame received for 5\nI0512 10:33:28.246422 801 log.go:172] (0xc000a069a0) Data frame received for 3\nI0512 10:33:28.246481 801 log.go:172] (0xc000666b40) (3) Data frame handling\nI0512 10:33:28.246500 801 log.go:172] (0xc000666b40) (3) Data frame sent\nI0512 10:33:28.246531 801 log.go:172] (0xc000a069a0) Data frame received for 5\nI0512 10:33:28.246543 801 log.go:172] (0xc00064a820) (5) Data frame handling\nI0512 10:33:28.246560 801 log.go:172] (0xc00064a820) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31404/\nI0512 10:33:28.252562 801 log.go:172] (0xc000a069a0) Data frame received for 3\nI0512 10:33:28.252689 801 log.go:172] (0xc000666b40) (3) Data frame handling\nI0512 10:33:28.252783 801 log.go:172] (0xc000666b40) (3) Data frame sent\nI0512 10:33:28.252931 801 log.go:172] (0xc000a069a0) Data frame received for 3\nI0512 10:33:28.252953 801 log.go:172] (0xc000666b40) (3) Data frame handling\nI0512 10:33:28.252961 801 log.go:172] (0xc000666b40) (3) Data frame sent\nI0512 10:33:28.252981 801 log.go:172] (0xc000a069a0) Data frame received for 5\nI0512 10:33:28.253008 801 log.go:172] (0xc00064a820) (5) Data frame handling\nI0512 10:33:28.253037 801 log.go:172] (0xc00064a820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31404/\nI0512 10:33:28.257901 801 log.go:172] (0xc000a069a0) Data frame received for 3\nI0512 10:33:28.257926 801 log.go:172] (0xc000666b40) (3) Data frame handling\nI0512 10:33:28.257949 801 log.go:172] (0xc000666b40) (3) Data frame sent\nI0512 10:33:28.258596 801 log.go:172] (0xc000a069a0) Data frame received for 3\nI0512 10:33:28.258610 801 log.go:172] (0xc000666b40) (3) Data frame handling\nI0512 10:33:28.258618 801 log.go:172] (0xc000666b40) (3) Data frame sent\nI0512 10:33:28.258637 801 log.go:172] (0xc000a069a0) Data frame received for 5\nI0512 10:33:28.258654 801 log.go:172] (0xc00064a820) (5) Data frame handling\nI0512 10:33:28.258668 801 log.go:172] (0xc00064a820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31404/\nI0512 10:33:28.262342 801 log.go:172] (0xc000a069a0) Data frame received for 3\nI0512 10:33:28.262360 801 log.go:172] (0xc000666b40) (3) Data frame handling\nI0512 10:33:28.262375 801 log.go:172] (0xc000666b40) (3) Data frame sent\nI0512 10:33:28.262892 801 log.go:172] (0xc000a069a0) Data frame received for 5\nI0512 10:33:28.262912 801 log.go:172] (0xc00064a820) (5) Data frame handling\nI0512 10:33:28.262929 801 log.go:172] (0xc00064a820) (5) Data frame sent\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://172.17.0.13:31404/\nI0512 10:33:28.262940 801 log.go:172] (0xc000a069a0) Data frame received for 3\nI0512 10:33:28.262974 801 log.go:172] (0xc000666b40) (3) Data frame handling\nI0512 10:33:28.262992 801 log.go:172] (0xc000666b40) (3) Data frame sent\nI0512 10:33:28.267162 801 log.go:172] (0xc000a069a0) Data frame received for 3\nI0512 10:33:28.267182 801 log.go:172] (0xc000666b40) (3) Data frame handling\nI0512 10:33:28.267210 801 log.go:172] (0xc000666b40) (3) Data frame sent\nI0512 10:33:28.267636 801 log.go:172] (0xc000a069a0) Data frame received for 5\nI0512 10:33:28.267660 801 log.go:172] (0xc00064a820) (5) Data frame handling\nI0512 10:33:28.267670 801 log.go:172] (0xc00064a820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31404/\nI0512 10:33:28.267680 801 log.go:172] (0xc000a069a0) Data frame received for 3\nI0512 10:33:28.267686 801 log.go:172] (0xc000666b40) (3) Data frame handling\nI0512 10:33:28.267691 801 log.go:172] (0xc000666b40) (3) Data frame sent\nI0512 10:33:28.273314 801 log.go:172] (0xc000a069a0) Data frame received for 3\nI0512 10:33:28.273332 801 log.go:172] (0xc000666b40) (3) Data frame handling\nI0512 10:33:28.273343 801 log.go:172] (0xc000666b40) (3) Data frame sent\nI0512 10:33:28.273924 801 log.go:172] (0xc000a069a0) Data frame received for 5\nI0512 10:33:28.273945 801 log.go:172] (0xc00064a820) (5) Data frame handling\nI0512 10:33:28.273967 801 log.go:172] (0xc00064a820) (5) Data frame sent\nI0512 10:33:28.273987 801 log.go:172] (0xc000a069a0) Data frame received for 5\nI0512 10:33:28.274001 801 log.go:172] (0xc00064a820) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31404/\nI0512 10:33:28.274036 801 log.go:172] (0xc00064a820) (5) Data frame sent\nI0512 10:33:28.274161 801 log.go:172] (0xc000a069a0) Data frame received for 3\nI0512 10:33:28.274179 801 log.go:172] (0xc000666b40) (3) Data frame handling\nI0512 10:33:28.274195 801 log.go:172] (0xc000666b40) (3) Data frame sent\nI0512 10:33:28.277824 801 log.go:172] (0xc000a069a0) Data frame received for 3\nI0512 10:33:28.277845 801 log.go:172] (0xc000666b40) (3) Data frame handling\nI0512 10:33:28.277862 801 log.go:172] (0xc000666b40) (3) Data frame sent\nI0512 10:33:28.278295 801 log.go:172] (0xc000a069a0) Data frame received for 3\nI0512 10:33:28.278313 801 log.go:172] (0xc000666b40) (3) Data frame handling\nI0512 10:33:28.278324 801 log.go:172] (0xc000666b40) (3) Data frame sent\nI0512 10:33:28.278341 801 log.go:172] (0xc000a069a0) Data frame received for 5\nI0512 10:33:28.278351 801 log.go:172] (0xc00064a820) (5) Data frame handling\nI0512 10:33:28.278362 801 log.go:172] (0xc00064a820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31404/\nI0512 10:33:28.282752 801 log.go:172] (0xc000a069a0) Data frame received for 3\nI0512 10:33:28.282769 801 log.go:172] (0xc000666b40) (3) Data frame handling\nI0512 10:33:28.282782 801 log.go:172] (0xc000666b40) (3) Data frame sent\nI0512 10:33:28.283184 801 log.go:172] (0xc000a069a0) Data frame received for 5\nI0512 10:33:28.283273 801 log.go:172] (0xc00064a820) (5) Data frame handling\nI0512 10:33:28.283285 801 log.go:172] (0xc00064a820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31404/\nI0512 10:33:28.283316 801 log.go:172] (0xc000a069a0) Data frame received for 3\nI0512 10:33:28.283352 801 log.go:172] (0xc000666b40) (3) Data frame handling\nI0512 10:33:28.283368 801 log.go:172] (0xc000666b40) (3) Data 
frame sent\nI0512 10:33:28.286711 801 log.go:172] (0xc000a069a0) Data frame received for 3\nI0512 10:33:28.286727 801 log.go:172] (0xc000666b40) (3) Data frame handling\nI0512 10:33:28.286743 801 log.go:172] (0xc000666b40) (3) Data frame sent\nI0512 10:33:28.287049 801 log.go:172] (0xc000a069a0) Data frame received for 3\nI0512 10:33:28.287058 801 log.go:172] (0xc000666b40) (3) Data frame handling\nI0512 10:33:28.287066 801 log.go:172] (0xc000666b40) (3) Data frame sent\nI0512 10:33:28.287076 801 log.go:172] (0xc000a069a0) Data frame received for 5\nI0512 10:33:28.287084 801 log.go:172] (0xc00064a820) (5) Data frame handling\nI0512 10:33:28.287092 801 log.go:172] (0xc00064a820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31404/\nI0512 10:33:28.291055 801 log.go:172] (0xc000a069a0) Data frame received for 3\nI0512 10:33:28.291067 801 log.go:172] (0xc000666b40) (3) Data frame handling\nI0512 10:33:28.291078 801 log.go:172] (0xc000666b40) (3) Data frame sent\nI0512 10:33:28.291508 801 log.go:172] (0xc000a069a0) Data frame received for 3\nI0512 10:33:28.291524 801 log.go:172] (0xc000666b40) (3) Data frame handling\nI0512 10:33:28.291530 801 log.go:172] (0xc000666b40) (3) Data frame sent\nI0512 10:33:28.291538 801 log.go:172] (0xc000a069a0) Data frame received for 5\nI0512 10:33:28.291543 801 log.go:172] (0xc00064a820) (5) Data frame handling\nI0512 10:33:28.291547 801 log.go:172] (0xc00064a820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31404/\nI0512 10:33:28.295301 801 log.go:172] (0xc000a069a0) Data frame received for 3\nI0512 10:33:28.295317 801 log.go:172] (0xc000666b40) (3) Data frame handling\nI0512 10:33:28.295333 801 log.go:172] (0xc000666b40) (3) Data frame sent\nI0512 10:33:28.295953 801 log.go:172] (0xc000a069a0) Data frame received for 5\nI0512 10:33:28.295963 801 log.go:172] (0xc00064a820) (5) Data frame handling\nI0512 10:33:28.295977 801 log.go:172] (0xc00064a820) (5) Data frame sent\nI0512 10:33:28.295983 801 log.go:172] (0xc000a069a0) Data frame received for 3\nI0512 10:33:28.295987 801 log.go:172] (0xc000666b40) (3) Data frame handling\nI0512 10:33:28.295991 801 log.go:172] (0xc000666b40) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31404/\nI0512 10:33:28.302824 801 log.go:172] (0xc000a069a0) Data frame received for 3\nI0512 10:33:28.302855 801 log.go:172] (0xc000666b40) (3) Data frame handling\nI0512 10:33:28.302881 801 log.go:172] (0xc000666b40) (3) Data frame sent\nI0512 10:33:28.303801 801 log.go:172] (0xc000a069a0) Data frame received for 5\nI0512 10:33:28.303815 801 log.go:172] (0xc00064a820) (5) Data frame handling\nI0512 10:33:28.303827 801 log.go:172] (0xc00064a820) (5) Data frame sent\n+ I0512 10:33:28.304937 801 log.go:172] (0xc000a069a0) Data frame received for 5\nI0512 10:33:28.304951 801 log.go:172] (0xc00064a820) (5) Data frame handling\nI0512 10:33:28.304958 801 log.go:172] (0xc00064a820) (5) Data frame sent\necho\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31404/\nI0512 10:33:28.304984 801 log.go:172] (0xc000a069a0) Data frame received for 3\nI0512 10:33:28.305019 801 log.go:172] (0xc000666b40) (3) Data frame handling\nI0512 10:33:28.305042 801 log.go:172] (0xc000666b40) (3) Data frame sent\nI0512 10:33:28.309807 801 log.go:172] (0xc000a069a0) Data frame received for 3\nI0512 10:33:28.309827 801 log.go:172] (0xc000666b40) (3) Data frame handling\nI0512 10:33:28.309837 801 log.go:172] (0xc000666b40) (3) Data frame sent\nI0512 10:33:28.310154 
801 log.go:172] (0xc000a069a0) Data frame received for 3\nI0512 10:33:28.310164 801 log.go:172] (0xc000666b40) (3) Data frame handling\nI0512 10:33:28.310170 801 log.go:172] (0xc000666b40) (3) Data frame sent\nI0512 10:33:28.310179 801 log.go:172] (0xc000a069a0) Data frame received for 5\nI0512 10:33:28.310183 801 log.go:172] (0xc00064a820) (5) Data frame handling\nI0512 10:33:28.310188 801 log.go:172] (0xc00064a820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31404/\nI0512 10:33:28.314671 801 log.go:172] (0xc000a069a0) Data frame received for 3\nI0512 10:33:28.314682 801 log.go:172] (0xc000666b40) (3) Data frame handling\nI0512 10:33:28.314688 801 log.go:172] (0xc000666b40) (3) Data frame sent\nI0512 10:33:28.315280 801 log.go:172] (0xc000a069a0) Data frame received for 3\nI0512 10:33:28.315306 801 log.go:172] (0xc000666b40) (3) Data frame handling\nI0512 10:33:28.315320 801 log.go:172] (0xc000666b40) (3) Data frame sent\nI0512 10:33:28.315337 801 log.go:172] (0xc000a069a0) Data frame received for 5\nI0512 10:33:28.315349 801 log.go:172] (0xc00064a820) (5) Data frame handling\nI0512 10:33:28.315360 801 log.go:172] (0xc00064a820) (5) Data frame sent\nI0512 10:33:28.315372 801 log.go:172] (0xc000a069a0) Data frame received for 5\nI0512 10:33:28.315383 801 log.go:172] (0xc00064a820) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31404/\nI0512 10:33:28.315402 801 log.go:172] (0xc00064a820) (5) Data frame sent\nI0512 10:33:28.320745 801 log.go:172] (0xc000a069a0) Data frame received for 3\nI0512 10:33:28.320761 801 log.go:172] (0xc000666b40) (3) Data frame handling\nI0512 10:33:28.320775 801 log.go:172] (0xc000666b40) (3) Data frame sent\nI0512 10:33:28.321088 801 log.go:172] (0xc000a069a0) Data frame received for 3\nI0512 10:33:28.321103 801 log.go:172] (0xc000666b40) (3) Data frame handling\nI0512 10:33:28.321246 801 log.go:172] (0xc000666b40) (3) Data frame sent\nI0512 10:33:28.321260 801 log.go:172] (0xc000a069a0) Data frame received for 5\nI0512 10:33:28.321266 801 log.go:172] (0xc00064a820) (5) Data frame handling\nI0512 10:33:28.321282 801 log.go:172] (0xc00064a820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31404/\nI0512 10:33:28.324665 801 log.go:172] (0xc000a069a0) Data frame received for 3\nI0512 10:33:28.324682 801 log.go:172] (0xc000666b40) (3) Data frame handling\nI0512 10:33:28.324700 801 log.go:172] (0xc000666b40) (3) Data frame sent\nI0512 10:33:28.325033 801 log.go:172] (0xc000a069a0) Data frame received for 5\nI0512 10:33:28.325053 801 log.go:172] (0xc00064a820) (5) Data frame handling\nI0512 10:33:28.325076 801 log.go:172] (0xc00064a820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31404/\nI0512 10:33:28.325301 801 log.go:172] (0xc000a069a0) Data frame received for 3\nI0512 10:33:28.325327 801 log.go:172] (0xc000666b40) (3) Data frame handling\nI0512 10:33:28.325351 801 log.go:172] (0xc000666b40) (3) Data frame sent\nI0512 10:33:28.328834 801 log.go:172] (0xc000a069a0) Data frame received for 3\nI0512 10:33:28.328845 801 log.go:172] (0xc000666b40) (3) Data frame handling\nI0512 10:33:28.328852 801 log.go:172] (0xc000666b40) (3) Data frame sent\nI0512 10:33:28.329560 801 log.go:172] (0xc000a069a0) Data frame received for 5\nI0512 10:33:28.329585 801 log.go:172] (0xc00064a820) (5) Data frame handling\nI0512 10:33:28.329606 801 log.go:172] (0xc000a069a0) Data frame received for 3\nI0512 10:33:28.329619 801 log.go:172] 
(0xc000666b40) (3) Data frame handling\nI0512 10:33:28.344721 801 log.go:172] (0xc000a069a0) Data frame received for 1\nI0512 10:33:28.344760 801 log.go:172] (0xc00066c1e0) (1) Data frame handling\nI0512 10:33:28.344801 801 log.go:172] (0xc00066c1e0) (1) Data frame sent\nI0512 10:33:28.344829 801 log.go:172] (0xc000a069a0) (0xc00066c1e0) Stream removed, broadcasting: 1\nI0512 10:33:28.344854 801 log.go:172] (0xc000a069a0) Go away received\nI0512 10:33:28.345234 801 log.go:172] (0xc000a069a0) (0xc00066c1e0) Stream removed, broadcasting: 1\nI0512 10:33:28.345284 801 log.go:172] (0xc000a069a0) (0xc000666b40) Stream removed, broadcasting: 3\nI0512 10:33:28.345289 801 log.go:172] (0xc000a069a0) (0xc00064a820) Stream removed, broadcasting: 5\n" May 12 10:33:28.349: INFO: stdout: "\naffinity-nodeport-transition-r7pch\naffinity-nodeport-transition-r7pch\naffinity-nodeport-transition-r7pch\naffinity-nodeport-transition-r7pch\naffinity-nodeport-transition-jvf7t\naffinity-nodeport-transition-ccvlh\naffinity-nodeport-transition-ccvlh\naffinity-nodeport-transition-ccvlh\naffinity-nodeport-transition-jvf7t\naffinity-nodeport-transition-r7pch\naffinity-nodeport-transition-ccvlh\naffinity-nodeport-transition-ccvlh\naffinity-nodeport-transition-r7pch\naffinity-nodeport-transition-jvf7t\naffinity-nodeport-transition-ccvlh\naffinity-nodeport-transition-r7pch" May 12 10:33:28.349: INFO: Received response from host: May 12 10:33:28.349: INFO: Received response from host: affinity-nodeport-transition-r7pch May 12 10:33:28.349: INFO: Received response from host: affinity-nodeport-transition-r7pch May 12 10:33:28.349: INFO: Received response from host: affinity-nodeport-transition-r7pch May 12 10:33:28.349: INFO: Received response from host: affinity-nodeport-transition-r7pch May 12 10:33:28.349: INFO: Received response from host: affinity-nodeport-transition-jvf7t May 12 10:33:28.349: INFO: Received response from host: affinity-nodeport-transition-ccvlh May 12 10:33:28.349: INFO: Received response from host: affinity-nodeport-transition-ccvlh May 12 10:33:28.349: INFO: Received response from host: affinity-nodeport-transition-ccvlh May 12 10:33:28.349: INFO: Received response from host: affinity-nodeport-transition-jvf7t May 12 10:33:28.349: INFO: Received response from host: affinity-nodeport-transition-r7pch May 12 10:33:28.349: INFO: Received response from host: affinity-nodeport-transition-ccvlh May 12 10:33:28.349: INFO: Received response from host: affinity-nodeport-transition-ccvlh May 12 10:33:28.349: INFO: Received response from host: affinity-nodeport-transition-r7pch May 12 10:33:28.349: INFO: Received response from host: affinity-nodeport-transition-jvf7t May 12 10:33:28.349: INFO: Received response from host: affinity-nodeport-transition-ccvlh May 12 10:33:28.349: INFO: Received response from host: affinity-nodeport-transition-r7pch May 12 10:33:28.356: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2408 execpod-affinityghptz -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31404/ ; done' May 12 10:33:29.403: INFO: stderr: "I0512 10:33:29.260456 819 log.go:172] (0xc0000eca50) (0xc000590280) Create stream\nI0512 10:33:29.260507 819 log.go:172] (0xc0000eca50) (0xc000590280) Stream added, broadcasting: 1\nI0512 10:33:29.262578 819 log.go:172] (0xc0000eca50) Reply frame received for 1\nI0512 10:33:29.262620 819 log.go:172] (0xc0000eca50) (0xc00025a1e0) Create 
stream\nI0512 10:33:29.262635 819 log.go:172] (0xc0000eca50) (0xc00025a1e0) Stream added, broadcasting: 3\nI0512 10:33:29.263459 819 log.go:172] (0xc0000eca50) Reply frame received for 3\nI0512 10:33:29.263481 819 log.go:172] (0xc0000eca50) (0xc0005ccaa0) Create stream\nI0512 10:33:29.263487 819 log.go:172] (0xc0000eca50) (0xc0005ccaa0) Stream added, broadcasting: 5\nI0512 10:33:29.264239 819 log.go:172] (0xc0000eca50) Reply frame received for 5\nI0512 10:33:29.317057 819 log.go:172] (0xc0000eca50) Data frame received for 5\nI0512 10:33:29.317077 819 log.go:172] (0xc0005ccaa0) (5) Data frame handling\nI0512 10:33:29.317083 819 log.go:172] (0xc0005ccaa0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31404/\nI0512 10:33:29.317093 819 log.go:172] (0xc0000eca50) Data frame received for 3\nI0512 10:33:29.317097 819 log.go:172] (0xc00025a1e0) (3) Data frame handling\nI0512 10:33:29.317226 819 log.go:172] (0xc00025a1e0) (3) Data frame sent\nI0512 10:33:29.320119 819 log.go:172] (0xc0000eca50) Data frame received for 3\nI0512 10:33:29.320134 819 log.go:172] (0xc00025a1e0) (3) Data frame handling\nI0512 10:33:29.320147 819 log.go:172] (0xc00025a1e0) (3) Data frame sent\nI0512 10:33:29.320470 819 log.go:172] (0xc0000eca50) Data frame received for 3\nI0512 10:33:29.320483 819 log.go:172] (0xc00025a1e0) (3) Data frame handling\nI0512 10:33:29.320492 819 log.go:172] (0xc0000eca50) Data frame received for 5\nI0512 10:33:29.320508 819 log.go:172] (0xc0005ccaa0) (5) Data frame handling\nI0512 10:33:29.320514 819 log.go:172] (0xc0005ccaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31404/\nI0512 10:33:29.320524 819 log.go:172] (0xc00025a1e0) (3) Data frame sent\nI0512 10:33:29.324657 819 log.go:172] (0xc0000eca50) Data frame received for 3\nI0512 10:33:29.324682 819 log.go:172] (0xc00025a1e0) (3) Data frame handling\nI0512 10:33:29.324705 819 log.go:172] (0xc00025a1e0) (3) Data frame sent\nI0512 10:33:29.325431 819 log.go:172] (0xc0000eca50) Data frame received for 5\nI0512 10:33:29.325449 819 log.go:172] (0xc0005ccaa0) (5) Data frame handling\nI0512 10:33:29.325460 819 log.go:172] (0xc0005ccaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31404/\nI0512 10:33:29.325482 819 log.go:172] (0xc0000eca50) Data frame received for 3\nI0512 10:33:29.325491 819 log.go:172] (0xc00025a1e0) (3) Data frame handling\nI0512 10:33:29.325500 819 log.go:172] (0xc00025a1e0) (3) Data frame sent\nI0512 10:33:29.329725 819 log.go:172] (0xc0000eca50) Data frame received for 3\nI0512 10:33:29.329738 819 log.go:172] (0xc00025a1e0) (3) Data frame handling\nI0512 10:33:29.329749 819 log.go:172] (0xc00025a1e0) (3) Data frame sent\nI0512 10:33:29.330145 819 log.go:172] (0xc0000eca50) Data frame received for 5\nI0512 10:33:29.330165 819 log.go:172] (0xc0005ccaa0) (5) Data frame handling\nI0512 10:33:29.330176 819 log.go:172] (0xc0005ccaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31404/\nI0512 10:33:29.330212 819 log.go:172] (0xc0000eca50) Data frame received for 3\nI0512 10:33:29.330229 819 log.go:172] (0xc00025a1e0) (3) Data frame handling\nI0512 10:33:29.330235 819 log.go:172] (0xc00025a1e0) (3) Data frame sent\nI0512 10:33:29.334227 819 log.go:172] (0xc0000eca50) Data frame received for 3\nI0512 10:33:29.334243 819 log.go:172] (0xc00025a1e0) (3) Data frame handling\nI0512 10:33:29.334256 819 log.go:172] (0xc00025a1e0) (3) Data frame sent\nI0512 10:33:29.334656 819 
log.go:172] (0xc0000eca50) Data frame received for 3\nI0512 10:33:29.334671 819 log.go:172] (0xc00025a1e0) (3) Data frame handling\nI0512 10:33:29.334678 819 log.go:172] (0xc00025a1e0) (3) Data frame sent\nI0512 10:33:29.334686 819 log.go:172] (0xc0000eca50) Data frame received for 5\nI0512 10:33:29.334692 819 log.go:172] (0xc0005ccaa0) (5) Data frame handling\nI0512 10:33:29.334700 819 log.go:172] (0xc0005ccaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31404/\nI0512 10:33:29.338071 819 log.go:172] (0xc0000eca50) Data frame received for 3\nI0512 10:33:29.338091 819 log.go:172] (0xc00025a1e0) (3) Data frame handling\nI0512 10:33:29.338109 819 log.go:172] (0xc00025a1e0) (3) Data frame sent\nI0512 10:33:29.338367 819 log.go:172] (0xc0000eca50) Data frame received for 3\nI0512 10:33:29.338394 819 log.go:172] (0xc00025a1e0) (3) Data frame handling\nI0512 10:33:29.338403 819 log.go:172] (0xc00025a1e0) (3) Data frame sent\nI0512 10:33:29.338410 819 log.go:172] (0xc0000eca50) Data frame received for 5\nI0512 10:33:29.338419 819 log.go:172] (0xc0005ccaa0) (5) Data frame handling\nI0512 10:33:29.338431 819 log.go:172] (0xc0005ccaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31404/\nI0512 10:33:29.342739 819 log.go:172] (0xc0000eca50) Data frame received for 3\nI0512 10:33:29.342762 819 log.go:172] (0xc00025a1e0) (3) Data frame handling\nI0512 10:33:29.342779 819 log.go:172] (0xc00025a1e0) (3) Data frame sent\nI0512 10:33:29.343167 819 log.go:172] (0xc0000eca50) Data frame received for 3\nI0512 10:33:29.343188 819 log.go:172] (0xc00025a1e0) (3) Data frame handling\nI0512 10:33:29.343196 819 log.go:172] (0xc00025a1e0) (3) Data frame sent\nI0512 10:33:29.343217 819 log.go:172] (0xc0000eca50) Data frame received for 5\nI0512 10:33:29.343235 819 log.go:172] (0xc0005ccaa0) (5) Data frame handling\nI0512 10:33:29.343245 819 log.go:172] (0xc0005ccaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31404/\nI0512 10:33:29.348703 819 log.go:172] (0xc0000eca50) Data frame received for 3\nI0512 10:33:29.348730 819 log.go:172] (0xc00025a1e0) (3) Data frame handling\nI0512 10:33:29.348748 819 log.go:172] (0xc00025a1e0) (3) Data frame sent\nI0512 10:33:29.349596 819 log.go:172] (0xc0000eca50) Data frame received for 3\nI0512 10:33:29.349607 819 log.go:172] (0xc00025a1e0) (3) Data frame handling\nI0512 10:33:29.349615 819 log.go:172] (0xc00025a1e0) (3) Data frame sent\nI0512 10:33:29.349624 819 log.go:172] (0xc0000eca50) Data frame received for 5\nI0512 10:33:29.349636 819 log.go:172] (0xc0005ccaa0) (5) Data frame handling\nI0512 10:33:29.349650 819 log.go:172] (0xc0005ccaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31404/\nI0512 10:33:29.353442 819 log.go:172] (0xc0000eca50) Data frame received for 3\nI0512 10:33:29.353461 819 log.go:172] (0xc00025a1e0) (3) Data frame handling\nI0512 10:33:29.353476 819 log.go:172] (0xc00025a1e0) (3) Data frame sent\nI0512 10:33:29.353884 819 log.go:172] (0xc0000eca50) Data frame received for 3\nI0512 10:33:29.353895 819 log.go:172] (0xc00025a1e0) (3) Data frame handling\nI0512 10:33:29.353901 819 log.go:172] (0xc00025a1e0) (3) Data frame sent\nI0512 10:33:29.353910 819 log.go:172] (0xc0000eca50) Data frame received for 5\nI0512 10:33:29.353914 819 log.go:172] (0xc0005ccaa0) (5) Data frame handling\nI0512 10:33:29.353919 819 log.go:172] (0xc0005ccaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.17.0.13:31404/\nI0512 10:33:29.360527 819 log.go:172] (0xc0000eca50) Data frame received for 3\nI0512 10:33:29.360553 819 log.go:172] (0xc00025a1e0) (3) Data frame handling\nI0512 10:33:29.360576 819 log.go:172] (0xc00025a1e0) (3) Data frame sent\nI0512 10:33:29.361285 819 log.go:172] (0xc0000eca50) Data frame received for 3\nI0512 10:33:29.361315 819 log.go:172] (0xc00025a1e0) (3) Data frame handling\nI0512 10:33:29.361327 819 log.go:172] (0xc00025a1e0) (3) Data frame sent\nI0512 10:33:29.361637 819 log.go:172] (0xc0000eca50) Data frame received for 5\nI0512 10:33:29.361653 819 log.go:172] (0xc0005ccaa0) (5) Data frame handling\nI0512 10:33:29.361670 819 log.go:172] (0xc0005ccaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31404/\nI0512 10:33:29.365978 819 log.go:172] (0xc0000eca50) Data frame received for 3\nI0512 10:33:29.366000 819 log.go:172] (0xc00025a1e0) (3) Data frame handling\nI0512 10:33:29.366023 819 log.go:172] (0xc00025a1e0) (3) Data frame sent\nI0512 10:33:29.366547 819 log.go:172] (0xc0000eca50) Data frame received for 3\nI0512 10:33:29.366574 819 log.go:172] (0xc00025a1e0) (3) Data frame handling\nI0512 10:33:29.366584 819 log.go:172] (0xc00025a1e0) (3) Data frame sent\nI0512 10:33:29.366601 819 log.go:172] (0xc0000eca50) Data frame received for 5\nI0512 10:33:29.366607 819 log.go:172] (0xc0005ccaa0) (5) Data frame handling\nI0512 10:33:29.366616 819 log.go:172] (0xc0005ccaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31404/\nI0512 10:33:29.369902 819 log.go:172] (0xc0000eca50) Data frame received for 3\nI0512 10:33:29.369924 819 log.go:172] (0xc00025a1e0) (3) Data frame handling\nI0512 10:33:29.369938 819 log.go:172] (0xc00025a1e0) (3) Data frame sent\nI0512 10:33:29.370329 819 log.go:172] (0xc0000eca50) Data frame received for 5\nI0512 10:33:29.370368 819 log.go:172] (0xc0005ccaa0) (5) Data frame handling\nI0512 10:33:29.370387 819 log.go:172] (0xc0005ccaa0) (5) Data frame sent\nI0512 10:33:29.370400 819 log.go:172] (0xc0000eca50) Data frame received for 5\nI0512 10:33:29.370413 819 log.go:172] (0xc0005ccaa0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2I0512 10:33:29.370427 819 log.go:172] (0xc0000eca50) Data frame received for 3\n http://172.17.0.13:31404/\nI0512 10:33:29.370445 819 log.go:172] (0xc00025a1e0) (3) Data frame handling\nI0512 10:33:29.370463 819 log.go:172] (0xc00025a1e0) (3) Data frame sent\nI0512 10:33:29.370479 819 log.go:172] (0xc0005ccaa0) (5) Data frame sent\nI0512 10:33:29.377499 819 log.go:172] (0xc0000eca50) Data frame received for 5\nI0512 10:33:29.377537 819 log.go:172] (0xc0005ccaa0) (5) Data frame handling\nI0512 10:33:29.377574 819 log.go:172] (0xc0005ccaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31404/\nI0512 10:33:29.377602 819 log.go:172] (0xc0000eca50) Data frame received for 3\nI0512 10:33:29.377617 819 log.go:172] (0xc00025a1e0) (3) Data frame handling\nI0512 10:33:29.377635 819 log.go:172] (0xc00025a1e0) (3) Data frame sent\nI0512 10:33:29.380875 819 log.go:172] (0xc0000eca50) Data frame received for 3\nI0512 10:33:29.380899 819 log.go:172] (0xc00025a1e0) (3) Data frame handling\nI0512 10:33:29.380911 819 log.go:172] (0xc00025a1e0) (3) Data frame sent\nI0512 10:33:29.381839 819 log.go:172] (0xc0000eca50) Data frame received for 5\nI0512 10:33:29.381861 819 log.go:172] (0xc0005ccaa0) (5) Data frame handling\nI0512 10:33:29.381898 819 log.go:172] (0xc0005ccaa0) (5) Data frame sent\n+ 
echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31404/\nI0512 10:33:29.381930 819 log.go:172] (0xc0000eca50) Data frame received for 3\nI0512 10:33:29.382002 819 log.go:172] (0xc00025a1e0) (3) Data frame handling\nI0512 10:33:29.382065 819 log.go:172] (0xc00025a1e0) (3) Data frame sent\nI0512 10:33:29.388753 819 log.go:172] (0xc0000eca50) Data frame received for 3\nI0512 10:33:29.388775 819 log.go:172] (0xc00025a1e0) (3) Data frame handling\nI0512 10:33:29.388792 819 log.go:172] (0xc00025a1e0) (3) Data frame sent\nI0512 10:33:29.388805 819 log.go:172] (0xc0000eca50) Data frame received for 3\nI0512 10:33:29.388816 819 log.go:172] (0xc00025a1e0) (3) Data frame handling\nI0512 10:33:29.388847 819 log.go:172] (0xc00025a1e0) (3) Data frame sent\nI0512 10:33:29.389122 819 log.go:172] (0xc0000eca50) Data frame received for 5\nI0512 10:33:29.389256 819 log.go:172] (0xc0005ccaa0) (5) Data frame handling\nI0512 10:33:29.389292 819 log.go:172] (0xc0005ccaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31404/\nI0512 10:33:29.392663 819 log.go:172] (0xc0000eca50) Data frame received for 3\nI0512 10:33:29.392677 819 log.go:172] (0xc00025a1e0) (3) Data frame handling\nI0512 10:33:29.392696 819 log.go:172] (0xc00025a1e0) (3) Data frame sent\nI0512 10:33:29.393036 819 log.go:172] (0xc0000eca50) Data frame received for 3\nI0512 10:33:29.393054 819 log.go:172] (0xc00025a1e0) (3) Data frame handling\nI0512 10:33:29.393060 819 log.go:172] (0xc00025a1e0) (3) Data frame sent\nI0512 10:33:29.393068 819 log.go:172] (0xc0000eca50) Data frame received for 5\nI0512 10:33:29.393073 819 log.go:172] (0xc0005ccaa0) (5) Data frame handling\nI0512 10:33:29.393079 819 log.go:172] (0xc0005ccaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31404/\nI0512 10:33:29.397470 819 log.go:172] (0xc0000eca50) Data frame received for 3\nI0512 10:33:29.397483 819 log.go:172] (0xc00025a1e0) (3) Data frame handling\nI0512 10:33:29.397493 819 log.go:172] (0xc00025a1e0) (3) Data frame sent\nI0512 10:33:29.397952 819 log.go:172] (0xc0000eca50) Data frame received for 5\nI0512 10:33:29.397972 819 log.go:172] (0xc0005ccaa0) (5) Data frame handling\nI0512 10:33:29.397992 819 log.go:172] (0xc0000eca50) Data frame received for 3\nI0512 10:33:29.398003 819 log.go:172] (0xc00025a1e0) (3) Data frame handling\nI0512 10:33:29.399227 819 log.go:172] (0xc0000eca50) Data frame received for 1\nI0512 10:33:29.399240 819 log.go:172] (0xc000590280) (1) Data frame handling\nI0512 10:33:29.399249 819 log.go:172] (0xc000590280) (1) Data frame sent\nI0512 10:33:29.399258 819 log.go:172] (0xc0000eca50) (0xc000590280) Stream removed, broadcasting: 1\nI0512 10:33:29.399275 819 log.go:172] (0xc0000eca50) Go away received\nI0512 10:33:29.399667 819 log.go:172] (0xc0000eca50) (0xc000590280) Stream removed, broadcasting: 1\nI0512 10:33:29.399690 819 log.go:172] (0xc0000eca50) (0xc00025a1e0) Stream removed, broadcasting: 3\nI0512 10:33:29.399700 819 log.go:172] (0xc0000eca50) (0xc0005ccaa0) Stream removed, broadcasting: 5\n" May 12 10:33:29.404: INFO: stdout: 
"\naffinity-nodeport-transition-r7pch\naffinity-nodeport-transition-r7pch\naffinity-nodeport-transition-r7pch\naffinity-nodeport-transition-r7pch\naffinity-nodeport-transition-r7pch\naffinity-nodeport-transition-r7pch\naffinity-nodeport-transition-r7pch\naffinity-nodeport-transition-r7pch\naffinity-nodeport-transition-r7pch\naffinity-nodeport-transition-r7pch\naffinity-nodeport-transition-r7pch\naffinity-nodeport-transition-r7pch\naffinity-nodeport-transition-r7pch\naffinity-nodeport-transition-r7pch\naffinity-nodeport-transition-r7pch\naffinity-nodeport-transition-r7pch" May 12 10:33:29.404: INFO: Received response from host: May 12 10:33:29.404: INFO: Received response from host: affinity-nodeport-transition-r7pch May 12 10:33:29.404: INFO: Received response from host: affinity-nodeport-transition-r7pch May 12 10:33:29.404: INFO: Received response from host: affinity-nodeport-transition-r7pch May 12 10:33:29.404: INFO: Received response from host: affinity-nodeport-transition-r7pch May 12 10:33:29.404: INFO: Received response from host: affinity-nodeport-transition-r7pch May 12 10:33:29.404: INFO: Received response from host: affinity-nodeport-transition-r7pch May 12 10:33:29.404: INFO: Received response from host: affinity-nodeport-transition-r7pch May 12 10:33:29.404: INFO: Received response from host: affinity-nodeport-transition-r7pch May 12 10:33:29.404: INFO: Received response from host: affinity-nodeport-transition-r7pch May 12 10:33:29.404: INFO: Received response from host: affinity-nodeport-transition-r7pch May 12 10:33:29.404: INFO: Received response from host: affinity-nodeport-transition-r7pch May 12 10:33:29.404: INFO: Received response from host: affinity-nodeport-transition-r7pch May 12 10:33:29.404: INFO: Received response from host: affinity-nodeport-transition-r7pch May 12 10:33:29.404: INFO: Received response from host: affinity-nodeport-transition-r7pch May 12 10:33:29.404: INFO: Received response from host: affinity-nodeport-transition-r7pch May 12 10:33:29.404: INFO: Received response from host: affinity-nodeport-transition-r7pch May 12 10:33:29.404: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-2408, will wait for the garbage collector to delete the pods May 12 10:33:31.635: INFO: Deleting ReplicationController affinity-nodeport-transition took: 413.302669ms May 12 10:33:32.835: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 1.200233908s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:33:48.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2408" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:35.287 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":60,"skipped":1007,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:33:49.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-b483273a-408b-4a0e-9664-5fb7f8784657 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:34:08.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2336" for this suite. 
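The binary-data test above checks that a ConfigMap can carry both UTF-8 keys (the data field) and arbitrary bytes (the binaryData field), and that both surface as files in a mounted volume. A hedged command-line sketch of the same setup; the names and payload are illustrative, not taken from this run:

    # Write a few non-UTF-8 bytes; kubectl files such content under
    # binaryData rather than data automatically.
    printf '\001\002\377' > payload.bin
    kubectl create configmap demo-binary \
      --from-literal=text-key=hello \
      --from-file=binary-key=payload.bin
    # Confirm where each key landed:
    kubectl get configmap demo-binary -o jsonpath='{.data}{"\n"}{.binaryData}{"\n"}'
    # Mounted as a volume, text-key and binary-key both appear as files
    # containing the decoded values, which is what the pod above asserts.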
• [SLOW TEST:18.646 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":61,"skipped":1042,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:34:08.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller May 12 10:34:08.786: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1096' May 12 10:34:43.318: INFO: stderr: "" May 12 10:34:43.318: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 12 10:34:43.318: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1096' May 12 10:34:44.095: INFO: stderr: "" May 12 10:34:44.095: INFO: stdout: "update-demo-nautilus-nrxjg update-demo-nautilus-zmvl9 " May 12 10:34:44.095: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nrxjg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1096' May 12 10:34:45.044: INFO: stderr: "" May 12 10:34:45.044: INFO: stdout: "" May 12 10:34:45.044: INFO: update-demo-nautilus-nrxjg is created but not running May 12 10:34:50.044: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1096' May 12 10:34:50.155: INFO: stderr: "" May 12 10:34:50.155: INFO: stdout: "update-demo-nautilus-nrxjg update-demo-nautilus-zmvl9 " May 12 10:34:50.155: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nrxjg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1096' May 12 10:34:50.242: INFO: stderr: "" May 12 10:34:50.242: INFO: stdout: "" May 12 10:34:50.243: INFO: update-demo-nautilus-nrxjg is created but not running May 12 10:34:55.243: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1096' May 12 10:34:55.754: INFO: stderr: "" May 12 10:34:55.754: INFO: stdout: "update-demo-nautilus-nrxjg update-demo-nautilus-zmvl9 " May 12 10:34:55.754: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nrxjg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1096' May 12 10:34:56.635: INFO: stderr: "" May 12 10:34:56.635: INFO: stdout: "true" May 12 10:34:56.635: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nrxjg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1096' May 12 10:34:56.732: INFO: stderr: "" May 12 10:34:56.732: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 10:34:56.732: INFO: validating pod update-demo-nautilus-nrxjg May 12 10:34:56.932: INFO: got data: { "image": "nautilus.jpg" } May 12 10:34:56.933: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 10:34:56.933: INFO: update-demo-nautilus-nrxjg is verified up and running May 12 10:34:56.933: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zmvl9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1096' May 12 10:34:57.231: INFO: stderr: "" May 12 10:34:57.231: INFO: stdout: "true" May 12 10:34:57.231: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zmvl9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1096' May 12 10:34:57.435: INFO: stderr: "" May 12 10:34:57.435: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 10:34:57.435: INFO: validating pod update-demo-nautilus-zmvl9 May 12 10:34:57.511: INFO: got data: { "image": "nautilus.jpg" } May 12 10:34:57.511: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 12 10:34:57.511: INFO: update-demo-nautilus-zmvl9 is verified up and running STEP: scaling down the replication controller May 12 10:34:57.514: INFO: scanned /root for discovery docs: May 12 10:34:57.514: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-1096' May 12 10:34:59.529: INFO: stderr: "" May 12 10:34:59.529: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 12 10:34:59.529: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1096' May 12 10:35:00.119: INFO: stderr: "" May 12 10:35:00.119: INFO: stdout: "update-demo-nautilus-nrxjg update-demo-nautilus-zmvl9 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 12 10:35:05.119: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1096' May 12 10:35:05.567: INFO: stderr: "" May 12 10:35:05.567: INFO: stdout: "update-demo-nautilus-zmvl9 " May 12 10:35:05.567: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zmvl9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1096' May 12 10:35:05.688: INFO: stderr: "" May 12 10:35:05.688: INFO: stdout: "true" May 12 10:35:05.688: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zmvl9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1096' May 12 10:35:06.041: INFO: stderr: "" May 12 10:35:06.041: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 10:35:06.041: INFO: validating pod update-demo-nautilus-zmvl9 May 12 10:35:06.044: INFO: got data: { "image": "nautilus.jpg" } May 12 10:35:06.044: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 10:35:06.044: INFO: update-demo-nautilus-zmvl9 is verified up and running STEP: scaling up the replication controller May 12 10:35:06.045: INFO: scanned /root for discovery docs: May 12 10:35:06.045: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-1096' May 12 10:35:07.243: INFO: stderr: "" May 12 10:35:07.243: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 12 10:35:07.243: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1096' May 12 10:35:07.344: INFO: stderr: "" May 12 10:35:07.344: INFO: stdout: "update-demo-nautilus-c2rts update-demo-nautilus-zmvl9 " May 12 10:35:07.344: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c2rts -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1096' May 12 10:35:07.433: INFO: stderr: "" May 12 10:35:07.433: INFO: stdout: "" May 12 10:35:07.433: INFO: update-demo-nautilus-c2rts is created but not running May 12 10:35:12.433: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1096' May 12 10:35:12.597: INFO: stderr: "" May 12 10:35:12.597: INFO: stdout: "update-demo-nautilus-c2rts update-demo-nautilus-zmvl9 " May 12 10:35:12.597: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c2rts -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1096' May 12 10:35:13.434: INFO: stderr: "" May 12 10:35:13.434: INFO: stdout: "" May 12 10:35:13.434: INFO: update-demo-nautilus-c2rts is created but not running May 12 10:35:18.434: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1096' May 12 10:35:18.545: INFO: stderr: "" May 12 10:35:18.545: INFO: stdout: "update-demo-nautilus-c2rts update-demo-nautilus-zmvl9 " May 12 10:35:18.545: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c2rts -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1096' May 12 10:35:18.937: INFO: stderr: "" May 12 10:35:18.938: INFO: stdout: "true" May 12 10:35:18.938: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c2rts -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1096' May 12 10:35:19.335: INFO: stderr: "" May 12 10:35:19.335: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 10:35:19.335: INFO: validating pod update-demo-nautilus-c2rts May 12 10:35:19.356: INFO: got data: { "image": "nautilus.jpg" } May 12 10:35:19.357: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 12 10:35:19.357: INFO: update-demo-nautilus-c2rts is verified up and running May 12 10:35:19.357: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zmvl9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1096' May 12 10:35:19.534: INFO: stderr: "" May 12 10:35:19.534: INFO: stdout: "true" May 12 10:35:19.534: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zmvl9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1096' May 12 10:35:19.648: INFO: stderr: "" May 12 10:35:19.648: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 10:35:19.648: INFO: validating pod update-demo-nautilus-zmvl9 May 12 10:35:19.651: INFO: got data: { "image": "nautilus.jpg" } May 12 10:35:19.652: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 10:35:19.652: INFO: update-demo-nautilus-zmvl9 is verified up and running STEP: using delete to clean up resources May 12 10:35:19.652: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1096' May 12 10:35:19.786: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 10:35:19.786: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 12 10:35:19.786: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1096' May 12 10:35:19.909: INFO: stderr: "No resources found in kubectl-1096 namespace.\n" May 12 10:35:19.909: INFO: stdout: "" May 12 10:35:19.910: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1096 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 12 10:35:20.078: INFO: stderr: "" May 12 10:35:20.078: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:35:20.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1096" for this suite. 
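The scale exercise above is driven entirely by kubectl: create the replication controller, scale it down to 1, scale it back to 2, and after each step poll the pod list with a go-template until the observed pods match. The same loop can be reproduced directly; the namespace, selector, and templates below are the ones from this run:

    NS=kubectl-1096
    POD=update-demo-nautilus-zmvl9    # one pod name from this run
    kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m -n "$NS"
    # List the pods behind the selector, exactly as the test does:
    kubectl get pods -l name=update-demo -n "$NS" \
      -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
    # Per-pod readiness probe used by the test: prints "true" only when the
    # update-demo container reports a running state.
    kubectl get pods "$POD" -n "$NS" -o template \
      --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'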
• [SLOW TEST:71.763 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":288,"completed":62,"skipped":1065,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:35:20.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-6997 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-6997 STEP: Creating statefulset with conflicting port in namespace statefulset-6997 STEP: Waiting until pod test-pod starts running in namespace statefulset-6997 STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-6997 May 12 10:35:30.905: INFO: Observed stateful pod in namespace: statefulset-6997, name: ss-0, uid: e1f20a9c-66af-4ada-a7c3-ab8312597489, status phase: Pending. Waiting for statefulset controller to delete. May 12 10:35:31.074: INFO: Observed stateful pod in namespace: statefulset-6997, name: ss-0, uid: e1f20a9c-66af-4ada-a7c3-ab8312597489, status phase: Failed. Waiting for statefulset controller to delete. May 12 10:35:31.093: INFO: Observed stateful pod in namespace: statefulset-6997, name: ss-0, uid: e1f20a9c-66af-4ada-a7c3-ab8312597489, status phase: Failed. Waiting for statefulset controller to delete.
May 12 10:35:31.304: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6997 STEP: Removing pod with conflicting port in namespace statefulset-6997 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-6997 and is in the Running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 12 10:35:40.918: INFO: Deleting all statefulsets in ns statefulset-6997 May 12 10:35:40.921: INFO: Scaling statefulset ss to 0 May 12 10:36:01.056: INFO: Waiting for statefulset status.replicas to be updated to 0 May 12 10:36:01.059: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:36:01.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6997" for this suite. • [SLOW TEST:41.192 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":288,"completed":63,"skipped":1078,"failed":0} SS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:36:01.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 12 10:38:02.639: INFO: Deleting pod "var-expansion-86e6aad9-71a6-4954-82da-0a6c7c5b3b34" in namespace "var-expansion-639" May 12 10:38:02.947: INFO: Waiting up to 5m0s for pod "var-expansion-86e6aad9-71a6-4954-82da-0a6c7c5b3b34" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:38:08.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-639" for this suite.
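The var-expansion case above is a negative test: a volumeMount whose subPathExpr expands to an absolute path must be refused, so the pod never starts and the test waits out the failure before deleting the pod, which is consistent with the two-minute gap before the Deleting pod line. A sketch of a manifest that trips the same check; every field value here is an assumption for illustration, not the pod from this run:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: var-expansion-demo
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
        env:
        - name: BAD_PATH
          value: /tmp            # absolute on purpose
        volumeMounts:
        - name: work
          mountPath: /vol
          # The literal value is relative, so the API server accepts the
          # pod; $(BAD_PATH) then expands to /tmp, an absolute path, which
          # the kubelet rejects, so the container never starts.
          subPathExpr: $(BAD_PATH)
      volumes:
      - name: work
        emptyDir: {}
    EOF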
• [SLOW TEST:127.026 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":288,"completed":64,"skipped":1080,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:38:08.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-e55bbbdb-8126-4903-9498-de29baba8a7a STEP: Creating a pod to test consume configMaps May 12 10:38:10.121: INFO: Waiting up to 5m0s for pod "pod-configmaps-8b84bab6-97c3-4b7e-b91d-ac1cede30328" in namespace "configmap-6261" to be "Succeeded or Failed" May 12 10:38:10.322: INFO: Pod "pod-configmaps-8b84bab6-97c3-4b7e-b91d-ac1cede30328": Phase="Pending", Reason="", readiness=false. Elapsed: 201.700313ms May 12 10:38:12.325: INFO: Pod "pod-configmaps-8b84bab6-97c3-4b7e-b91d-ac1cede30328": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204821983s May 12 10:38:14.351: INFO: Pod "pod-configmaps-8b84bab6-97c3-4b7e-b91d-ac1cede30328": Phase="Pending", Reason="", readiness=false. Elapsed: 4.230343938s May 12 10:38:16.429: INFO: Pod "pod-configmaps-8b84bab6-97c3-4b7e-b91d-ac1cede30328": Phase="Pending", Reason="", readiness=false. Elapsed: 6.308561763s May 12 10:38:18.432: INFO: Pod "pod-configmaps-8b84bab6-97c3-4b7e-b91d-ac1cede30328": Phase="Pending", Reason="", readiness=false. Elapsed: 8.311784973s May 12 10:38:20.508: INFO: Pod "pod-configmaps-8b84bab6-97c3-4b7e-b91d-ac1cede30328": Phase="Pending", Reason="", readiness=false. Elapsed: 10.387651649s May 12 10:38:22.675: INFO: Pod "pod-configmaps-8b84bab6-97c3-4b7e-b91d-ac1cede30328": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.554742013s STEP: Saw pod success May 12 10:38:22.675: INFO: Pod "pod-configmaps-8b84bab6-97c3-4b7e-b91d-ac1cede30328" satisfied condition "Succeeded or Failed" May 12 10:38:22.678: INFO: Trying to get logs from node latest-worker pod pod-configmaps-8b84bab6-97c3-4b7e-b91d-ac1cede30328 container configmap-volume-test: STEP: delete the pod May 12 10:38:23.552: INFO: Waiting for pod pod-configmaps-8b84bab6-97c3-4b7e-b91d-ac1cede30328 to disappear May 12 10:38:23.566: INFO: Pod pod-configmaps-8b84bab6-97c3-4b7e-b91d-ac1cede30328 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:38:23.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6261" for this suite. • [SLOW TEST:15.270 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":65,"skipped":1097,"failed":0} SSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:38:23.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 12 10:38:24.589: INFO: Waiting up to 5m0s for pod "downward-api-4cb8db40-622d-447a-96c6-81f7ad041d8d" in namespace "downward-api-4852" to be "Succeeded or Failed" May 12 10:38:24.597: INFO: Pod "downward-api-4cb8db40-622d-447a-96c6-81f7ad041d8d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.182718ms May 12 10:38:26.601: INFO: Pod "downward-api-4cb8db40-622d-447a-96c6-81f7ad041d8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011774053s May 12 10:38:28.634: INFO: Pod "downward-api-4cb8db40-622d-447a-96c6-81f7ad041d8d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044446811s May 12 10:38:30.903: INFO: Pod "downward-api-4cb8db40-622d-447a-96c6-81f7ad041d8d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.313552493s May 12 10:38:32.906: INFO: Pod "downward-api-4cb8db40-622d-447a-96c6-81f7ad041d8d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.317060261s STEP: Saw pod success May 12 10:38:32.906: INFO: Pod "downward-api-4cb8db40-622d-447a-96c6-81f7ad041d8d" satisfied condition "Succeeded or Failed" May 12 10:38:32.909: INFO: Trying to get logs from node latest-worker2 pod downward-api-4cb8db40-622d-447a-96c6-81f7ad041d8d container dapi-container: STEP: delete the pod May 12 10:38:33.066: INFO: Waiting for pod downward-api-4cb8db40-622d-447a-96c6-81f7ad041d8d to disappear May 12 10:38:33.089: INFO: Pod downward-api-4cb8db40-622d-447a-96c6-81f7ad041d8d no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:38:33.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4852" for this suite. • [SLOW TEST:9.559 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":288,"completed":66,"skipped":1107,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:38:33.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 12 10:38:33.779: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-615 I0512 10:38:33.800421 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-615, replica count: 1 I0512 10:38:34.850761 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 10:38:35.850941 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 10:38:36.851125 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 10:38:37.851355 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 10:38:38.851560 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 10:38:39.851752 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 12 10:38:40.346: INFO: Created: latency-svc-xgwhz May 12 10:38:40.462: INFO: Got endpoints: latency-svc-xgwhz [510.351667ms] May 12 
10:38:40.616: INFO: Created: latency-svc-ztt55 May 12 10:38:40.619: INFO: Got endpoints: latency-svc-ztt55 [157.287036ms] May 12 10:38:40.777: INFO: Created: latency-svc-ffb54 May 12 10:38:40.791: INFO: Got endpoints: latency-svc-ffb54 [328.931407ms] May 12 10:38:40.826: INFO: Created: latency-svc-n5jqg May 12 10:38:40.844: INFO: Got endpoints: latency-svc-n5jqg [381.904124ms] May 12 10:38:40.939: INFO: Created: latency-svc-f99nf May 12 10:38:41.001: INFO: Got endpoints: latency-svc-f99nf [538.450631ms] May 12 10:38:41.002: INFO: Created: latency-svc-d6b4g May 12 10:38:41.113: INFO: Got endpoints: latency-svc-d6b4g [650.142108ms] May 12 10:38:41.151: INFO: Created: latency-svc-qbk46 May 12 10:38:41.206: INFO: Got endpoints: latency-svc-qbk46 [743.020536ms] May 12 10:38:41.347: INFO: Created: latency-svc-vlmr9 May 12 10:38:41.415: INFO: Got endpoints: latency-svc-vlmr9 [952.602986ms] May 12 10:38:41.417: INFO: Created: latency-svc-6vtht May 12 10:38:41.538: INFO: Got endpoints: latency-svc-6vtht [1.075199792s] May 12 10:38:41.548: INFO: Created: latency-svc-w4gxx May 12 10:38:41.553: INFO: Got endpoints: latency-svc-w4gxx [1.090617881s] May 12 10:38:41.584: INFO: Created: latency-svc-h6pwz May 12 10:38:41.595: INFO: Got endpoints: latency-svc-h6pwz [1.132511404s] May 12 10:38:41.614: INFO: Created: latency-svc-jtqpn May 12 10:38:41.636: INFO: Got endpoints: latency-svc-jtqpn [1.173561019s] May 12 10:38:41.694: INFO: Created: latency-svc-6q9sc May 12 10:38:41.727: INFO: Got endpoints: latency-svc-6q9sc [1.264581996s] May 12 10:38:41.770: INFO: Created: latency-svc-btt88 May 12 10:38:41.843: INFO: Got endpoints: latency-svc-btt88 [1.380569686s] May 12 10:38:41.883: INFO: Created: latency-svc-xsvhq May 12 10:38:41.903: INFO: Got endpoints: latency-svc-xsvhq [1.439935362s] May 12 10:38:41.925: INFO: Created: latency-svc-v4c4f May 12 10:38:41.940: INFO: Got endpoints: latency-svc-v4c4f [1.477409538s] May 12 10:38:41.984: INFO: Created: latency-svc-vnw57 May 12 10:38:41.999: INFO: Got endpoints: latency-svc-vnw57 [1.379895911s] May 12 10:38:42.021: INFO: Created: latency-svc-g49tx May 12 10:38:42.036: INFO: Got endpoints: latency-svc-g49tx [1.245195988s] May 12 10:38:42.057: INFO: Created: latency-svc-n57bf May 12 10:38:42.101: INFO: Got endpoints: latency-svc-n57bf [1.256243193s] May 12 10:38:42.116: INFO: Created: latency-svc-6nxcm May 12 10:38:42.149: INFO: Got endpoints: latency-svc-6nxcm [1.148383737s] May 12 10:38:42.177: INFO: Created: latency-svc-t8vv4 May 12 10:38:42.192: INFO: Got endpoints: latency-svc-t8vv4 [1.079458131s] May 12 10:38:42.244: INFO: Created: latency-svc-xtpqs May 12 10:38:42.248: INFO: Got endpoints: latency-svc-xtpqs [1.042022057s] May 12 10:38:42.302: INFO: Created: latency-svc-7k7wx May 12 10:38:42.318: INFO: Got endpoints: latency-svc-7k7wx [903.138222ms] May 12 10:38:42.339: INFO: Created: latency-svc-llmms May 12 10:38:42.412: INFO: Got endpoints: latency-svc-llmms [874.326249ms] May 12 10:38:42.415: INFO: Created: latency-svc-8vg5w May 12 10:38:42.427: INFO: Got endpoints: latency-svc-8vg5w [873.322024ms] May 12 10:38:42.452: INFO: Created: latency-svc-dqxg4 May 12 10:38:42.457: INFO: Got endpoints: latency-svc-dqxg4 [861.926906ms] May 12 10:38:42.495: INFO: Created: latency-svc-49v7d May 12 10:38:42.586: INFO: Got endpoints: latency-svc-49v7d [949.879428ms] May 12 10:38:42.621: INFO: Created: latency-svc-mwc27 May 12 10:38:42.638: INFO: Got endpoints: latency-svc-mwc27 [910.341566ms] May 12 10:38:42.666: INFO: Created: latency-svc-mpn2v May 12 10:38:42.783: 
INFO: Got endpoints: latency-svc-mpn2v [939.969288ms] May 12 10:38:42.785: INFO: Created: latency-svc-9g2m6 May 12 10:38:42.807: INFO: Got endpoints: latency-svc-9g2m6 [904.764474ms] May 12 10:38:42.836: INFO: Created: latency-svc-pk27l May 12 10:38:42.867: INFO: Got endpoints: latency-svc-pk27l [926.798525ms] May 12 10:38:42.939: INFO: Created: latency-svc-gwt7s May 12 10:38:42.943: INFO: Got endpoints: latency-svc-gwt7s [943.750774ms] May 12 10:38:43.029: INFO: Created: latency-svc-8xnb6 May 12 10:38:43.143: INFO: Got endpoints: latency-svc-8xnb6 [1.106552553s] May 12 10:38:43.144: INFO: Created: latency-svc-d7pj4 May 12 10:38:43.418: INFO: Got endpoints: latency-svc-d7pj4 [1.317074554s] May 12 10:38:43.586: INFO: Created: latency-svc-tsnd6 May 12 10:38:43.642: INFO: Created: latency-svc-shzxp May 12 10:38:43.643: INFO: Got endpoints: latency-svc-tsnd6 [1.493410826s] May 12 10:38:43.867: INFO: Got endpoints: latency-svc-shzxp [1.674569289s] May 12 10:38:43.924: INFO: Created: latency-svc-pwz66 May 12 10:38:44.130: INFO: Got endpoints: latency-svc-pwz66 [1.882491274s] May 12 10:38:44.207: INFO: Created: latency-svc-rff2l May 12 10:38:44.343: INFO: Got endpoints: latency-svc-rff2l [2.024709433s] May 12 10:38:44.387: INFO: Created: latency-svc-4mhgk May 12 10:38:44.527: INFO: Got endpoints: latency-svc-4mhgk [2.114842407s] May 12 10:38:44.610: INFO: Created: latency-svc-6h7d7 May 12 10:38:44.723: INFO: Got endpoints: latency-svc-6h7d7 [2.296687358s] May 12 10:38:44.981: INFO: Created: latency-svc-7jdbq May 12 10:38:44.998: INFO: Got endpoints: latency-svc-7jdbq [2.541274125s] May 12 10:38:45.043: INFO: Created: latency-svc-5l2wg May 12 10:38:45.215: INFO: Got endpoints: latency-svc-5l2wg [2.628398847s] May 12 10:38:45.242: INFO: Created: latency-svc-rj44z May 12 10:38:45.255: INFO: Got endpoints: latency-svc-rj44z [2.61689437s] May 12 10:38:45.283: INFO: Created: latency-svc-k5bgk May 12 10:38:45.309: INFO: Got endpoints: latency-svc-k5bgk [2.525861494s] May 12 10:38:45.418: INFO: Created: latency-svc-pmkkd May 12 10:38:45.446: INFO: Got endpoints: latency-svc-pmkkd [2.638879371s] May 12 10:38:45.504: INFO: Created: latency-svc-n5bkc May 12 10:38:45.652: INFO: Got endpoints: latency-svc-n5bkc [2.785742939s] May 12 10:38:45.655: INFO: Created: latency-svc-246gk May 12 10:38:45.699: INFO: Got endpoints: latency-svc-246gk [2.756026076s] May 12 10:38:46.384: INFO: Created: latency-svc-x8nnt May 12 10:38:46.388: INFO: Got endpoints: latency-svc-x8nnt [3.245360507s] May 12 10:38:46.993: INFO: Created: latency-svc-tllwf May 12 10:38:47.406: INFO: Got endpoints: latency-svc-tllwf [3.988169482s] May 12 10:38:47.413: INFO: Created: latency-svc-n5mjg May 12 10:38:47.701: INFO: Got endpoints: latency-svc-n5mjg [4.057886891s] May 12 10:38:48.139: INFO: Created: latency-svc-22k8j May 12 10:38:48.156: INFO: Got endpoints: latency-svc-22k8j [4.28956044s] May 12 10:38:49.235: INFO: Created: latency-svc-nspjh May 12 10:38:49.481: INFO: Created: latency-svc-bfpbc May 12 10:38:49.481: INFO: Got endpoints: latency-svc-nspjh [5.351226958s] May 12 10:38:49.537: INFO: Got endpoints: latency-svc-bfpbc [5.194254638s] May 12 10:38:49.724: INFO: Created: latency-svc-2gmb5 May 12 10:38:49.772: INFO: Got endpoints: latency-svc-2gmb5 [5.244484071s] May 12 10:38:49.913: INFO: Created: latency-svc-f7b26 May 12 10:38:49.999: INFO: Got endpoints: latency-svc-f7b26 [5.275700272s] May 12 10:38:50.160: INFO: Created: latency-svc-pbmvs May 12 10:38:50.174: INFO: Got endpoints: latency-svc-pbmvs [5.175066439s] May 12 
10:38:50.463: INFO: Created: latency-svc-kfxzj May 12 10:38:50.535: INFO: Got endpoints: latency-svc-kfxzj [5.320145734s] May 12 10:38:50.724: INFO: Created: latency-svc-2r7mq May 12 10:38:50.761: INFO: Got endpoints: latency-svc-2r7mq [5.506216984s] May 12 10:38:50.903: INFO: Created: latency-svc-xzfqk May 12 10:38:51.119: INFO: Got endpoints: latency-svc-xzfqk [5.809571685s] May 12 10:38:51.122: INFO: Created: latency-svc-zpmvg May 12 10:38:51.188: INFO: Got endpoints: latency-svc-zpmvg [5.742048189s] May 12 10:38:51.190: INFO: Created: latency-svc-pq8m4 May 12 10:38:51.292: INFO: Got endpoints: latency-svc-pq8m4 [5.639808075s] May 12 10:38:51.338: INFO: Created: latency-svc-rjtkx May 12 10:38:51.366: INFO: Got endpoints: latency-svc-rjtkx [5.666791924s] May 12 10:38:51.478: INFO: Created: latency-svc-rpfst May 12 10:38:51.525: INFO: Got endpoints: latency-svc-rpfst [5.136664637s] May 12 10:38:51.711: INFO: Created: latency-svc-ls4sr May 12 10:38:51.746: INFO: Got endpoints: latency-svc-ls4sr [4.339843195s] May 12 10:38:51.839: INFO: Created: latency-svc-bc2cb May 12 10:38:51.842: INFO: Got endpoints: latency-svc-bc2cb [4.141427676s] May 12 10:38:52.005: INFO: Created: latency-svc-m52wp May 12 10:38:52.041: INFO: Got endpoints: latency-svc-m52wp [3.884776016s] May 12 10:38:52.431: INFO: Created: latency-svc-cc6qs May 12 10:38:52.675: INFO: Got endpoints: latency-svc-cc6qs [3.193879254s] May 12 10:38:52.957: INFO: Created: latency-svc-ktnqt May 12 10:38:53.187: INFO: Got endpoints: latency-svc-ktnqt [3.649591099s] May 12 10:38:53.652: INFO: Created: latency-svc-mnzz9 May 12 10:38:53.934: INFO: Created: latency-svc-5ptlk May 12 10:38:53.934: INFO: Got endpoints: latency-svc-mnzz9 [4.162334493s] May 12 10:38:53.983: INFO: Got endpoints: latency-svc-5ptlk [3.983608438s] May 12 10:38:54.149: INFO: Created: latency-svc-l9hd2 May 12 10:38:54.175: INFO: Got endpoints: latency-svc-l9hd2 [4.001488628s] May 12 10:38:54.371: INFO: Created: latency-svc-lr24v May 12 10:38:54.374: INFO: Got endpoints: latency-svc-lr24v [3.838879537s] May 12 10:38:54.628: INFO: Created: latency-svc-gr2h9 May 12 10:38:54.633: INFO: Got endpoints: latency-svc-gr2h9 [3.871898285s] May 12 10:38:55.248: INFO: Created: latency-svc-hkgsf May 12 10:38:55.322: INFO: Got endpoints: latency-svc-hkgsf [4.20272877s] May 12 10:38:55.568: INFO: Created: latency-svc-6hf9c May 12 10:38:55.626: INFO: Got endpoints: latency-svc-6hf9c [4.43789701s] May 12 10:38:55.820: INFO: Created: latency-svc-s7d76 May 12 10:38:56.113: INFO: Got endpoints: latency-svc-s7d76 [791.590189ms] May 12 10:38:56.352: INFO: Created: latency-svc-pn2dv May 12 10:38:56.358: INFO: Got endpoints: latency-svc-pn2dv [5.066046005s] May 12 10:38:56.898: INFO: Created: latency-svc-qbbjn May 12 10:38:56.908: INFO: Got endpoints: latency-svc-qbbjn [5.542350379s] May 12 10:38:57.058: INFO: Created: latency-svc-8f8f4 May 12 10:38:57.083: INFO: Got endpoints: latency-svc-8f8f4 [5.557523482s] May 12 10:38:57.307: INFO: Created: latency-svc-7qxn9 May 12 10:38:57.400: INFO: Got endpoints: latency-svc-7qxn9 [5.654496356s] May 12 10:38:57.746: INFO: Created: latency-svc-ftv4d May 12 10:38:57.755: INFO: Got endpoints: latency-svc-ftv4d [5.91296902s] May 12 10:38:57.879: INFO: Created: latency-svc-5g79j May 12 10:38:57.909: INFO: Got endpoints: latency-svc-5g79j [5.868018715s] May 12 10:38:57.951: INFO: Created: latency-svc-wnlgp May 12 10:38:57.965: INFO: Got endpoints: latency-svc-wnlgp [5.289182332s] May 12 10:38:58.365: INFO: Created: latency-svc-7xzbk May 12 10:38:58.432: INFO: 
Got endpoints: latency-svc-7xzbk [5.245112992s] May 12 10:38:58.587: INFO: Created: latency-svc-42v9r May 12 10:38:58.612: INFO: Got endpoints: latency-svc-42v9r [4.678039914s] May 12 10:38:58.718: INFO: Created: latency-svc-vnkdp May 12 10:38:58.727: INFO: Got endpoints: latency-svc-vnkdp [4.743838035s] May 12 10:38:58.779: INFO: Created: latency-svc-v6l49 May 12 10:38:58.870: INFO: Got endpoints: latency-svc-v6l49 [4.694639604s] May 12 10:38:59.025: INFO: Created: latency-svc-8vspk May 12 10:38:59.027: INFO: Got endpoints: latency-svc-8vspk [4.653400171s] May 12 10:38:59.118: INFO: Created: latency-svc-pxhcv May 12 10:38:59.287: INFO: Got endpoints: latency-svc-pxhcv [4.653629572s] May 12 10:38:59.311: INFO: Created: latency-svc-jb6ml May 12 10:38:59.369: INFO: Got endpoints: latency-svc-jb6ml [3.742910086s] May 12 10:38:59.472: INFO: Created: latency-svc-hdb7m May 12 10:38:59.538: INFO: Got endpoints: latency-svc-hdb7m [3.425045431s] May 12 10:38:59.694: INFO: Created: latency-svc-pgp88 May 12 10:39:00.317: INFO: Got endpoints: latency-svc-pgp88 [3.958814193s] May 12 10:39:00.320: INFO: Created: latency-svc-d5dx8 May 12 10:39:00.676: INFO: Got endpoints: latency-svc-d5dx8 [3.768019625s] May 12 10:39:00.720: INFO: Created: latency-svc-z8vmt May 12 10:39:00.921: INFO: Got endpoints: latency-svc-z8vmt [3.838416498s] May 12 10:39:01.236: INFO: Created: latency-svc-xqtn2 May 12 10:39:01.271: INFO: Got endpoints: latency-svc-xqtn2 [3.870239083s] May 12 10:39:01.394: INFO: Created: latency-svc-vvm4w May 12 10:39:01.398: INFO: Got endpoints: latency-svc-vvm4w [3.64279819s] May 12 10:39:02.031: INFO: Created: latency-svc-vvwdt May 12 10:39:02.034: INFO: Got endpoints: latency-svc-vvwdt [4.125037603s] May 12 10:39:02.347: INFO: Created: latency-svc-d6mrw May 12 10:39:02.520: INFO: Got endpoints: latency-svc-d6mrw [4.555273181s] May 12 10:39:02.545: INFO: Created: latency-svc-kzknw May 12 10:39:02.595: INFO: Got endpoints: latency-svc-kzknw [4.162843312s] May 12 10:39:02.732: INFO: Created: latency-svc-brwx6 May 12 10:39:02.746: INFO: Got endpoints: latency-svc-brwx6 [4.133640591s] May 12 10:39:02.811: INFO: Created: latency-svc-lldc7 May 12 10:39:02.879: INFO: Got endpoints: latency-svc-lldc7 [4.152312662s] May 12 10:39:02.904: INFO: Created: latency-svc-s4dkb May 12 10:39:02.923: INFO: Got endpoints: latency-svc-s4dkb [4.052793022s] May 12 10:39:02.947: INFO: Created: latency-svc-clrgd May 12 10:39:02.969: INFO: Got endpoints: latency-svc-clrgd [3.941751162s] May 12 10:39:03.115: INFO: Created: latency-svc-flp5z May 12 10:39:03.173: INFO: Got endpoints: latency-svc-flp5z [3.885988238s] May 12 10:39:03.814: INFO: Created: latency-svc-2czt9 May 12 10:39:04.111: INFO: Got endpoints: latency-svc-2czt9 [4.741807991s] May 12 10:39:04.471: INFO: Created: latency-svc-xr6lv May 12 10:39:04.473: INFO: Got endpoints: latency-svc-xr6lv [4.93482541s] May 12 10:39:04.730: INFO: Created: latency-svc-mgzs2 May 12 10:39:04.797: INFO: Got endpoints: latency-svc-mgzs2 [4.480103272s] May 12 10:39:04.927: INFO: Created: latency-svc-mhplx May 12 10:39:04.959: INFO: Got endpoints: latency-svc-mhplx [4.282567396s] May 12 10:39:05.314: INFO: Created: latency-svc-s244h May 12 10:39:05.683: INFO: Got endpoints: latency-svc-s244h [4.761735908s] May 12 10:39:05.759: INFO: Created: latency-svc-6l9l6 May 12 10:39:05.988: INFO: Got endpoints: latency-svc-6l9l6 [4.716856092s] May 12 10:39:06.566: INFO: Created: latency-svc-cv5h9 May 12 10:39:06.867: INFO: Got endpoints: latency-svc-cv5h9 [5.469355685s] May 12 10:39:06.876: 
INFO: Created: latency-svc-jqs6k May 12 10:39:06.907: INFO: Got endpoints: latency-svc-jqs6k [4.873069391s] May 12 10:39:07.273: INFO: Created: latency-svc-snvhn May 12 10:39:07.676: INFO: Got endpoints: latency-svc-snvhn [5.156201389s] May 12 10:39:08.006: INFO: Created: latency-svc-6pxw4 May 12 10:39:08.012: INFO: Got endpoints: latency-svc-6pxw4 [5.416714585s] May 12 10:39:09.036: INFO: Created: latency-svc-qj9rn May 12 10:39:09.329: INFO: Got endpoints: latency-svc-qj9rn [6.583316049s] May 12 10:39:09.338: INFO: Created: latency-svc-rd2ff May 12 10:39:09.389: INFO: Got endpoints: latency-svc-rd2ff [6.509969652s] May 12 10:39:09.780: INFO: Created: latency-svc-z2dqz May 12 10:39:09.814: INFO: Got endpoints: latency-svc-z2dqz [6.89175705s] May 12 10:39:10.052: INFO: Created: latency-svc-7ncll May 12 10:39:10.091: INFO: Got endpoints: latency-svc-7ncll [7.121925927s] May 12 10:39:10.212: INFO: Created: latency-svc-4s67t May 12 10:39:10.252: INFO: Got endpoints: latency-svc-4s67t [7.078455781s] May 12 10:39:10.364: INFO: Created: latency-svc-d6rbn May 12 10:39:10.367: INFO: Got endpoints: latency-svc-d6rbn [6.256147236s] May 12 10:39:10.775: INFO: Created: latency-svc-6ddqn May 12 10:39:10.927: INFO: Got endpoints: latency-svc-6ddqn [6.453404019s] May 12 10:39:11.008: INFO: Created: latency-svc-zs66n May 12 10:39:11.083: INFO: Got endpoints: latency-svc-zs66n [6.285233756s] May 12 10:39:11.311: INFO: Created: latency-svc-g9m95 May 12 10:39:11.731: INFO: Created: latency-svc-cr558 May 12 10:39:11.731: INFO: Got endpoints: latency-svc-g9m95 [6.771611803s] May 12 10:39:12.173: INFO: Got endpoints: latency-svc-cr558 [6.489722133s] May 12 10:39:12.207: INFO: Created: latency-svc-r7zpw May 12 10:39:12.249: INFO: Got endpoints: latency-svc-r7zpw [6.261539911s] May 12 10:39:12.443: INFO: Created: latency-svc-prh5g May 12 10:39:12.465: INFO: Got endpoints: latency-svc-prh5g [5.597463571s] May 12 10:39:12.593: INFO: Created: latency-svc-r8hfq May 12 10:39:12.596: INFO: Got endpoints: latency-svc-r8hfq [5.688862675s] May 12 10:39:12.654: INFO: Created: latency-svc-v2pp9 May 12 10:39:12.663: INFO: Got endpoints: latency-svc-v2pp9 [4.986824162s] May 12 10:39:12.689: INFO: Created: latency-svc-bng52 May 12 10:39:12.772: INFO: Got endpoints: latency-svc-bng52 [4.759664171s] May 12 10:39:12.778: INFO: Created: latency-svc-hs8m2 May 12 10:39:12.797: INFO: Got endpoints: latency-svc-hs8m2 [3.467807924s] May 12 10:39:12.827: INFO: Created: latency-svc-7qs78 May 12 10:39:12.848: INFO: Got endpoints: latency-svc-7qs78 [3.458104549s] May 12 10:39:12.927: INFO: Created: latency-svc-swfgt May 12 10:39:12.954: INFO: Got endpoints: latency-svc-swfgt [3.139187948s] May 12 10:39:12.995: INFO: Created: latency-svc-j5xb2 May 12 10:39:13.014: INFO: Got endpoints: latency-svc-j5xb2 [2.922580064s] May 12 10:39:13.101: INFO: Created: latency-svc-fqgxr May 12 10:39:13.116: INFO: Got endpoints: latency-svc-fqgxr [2.863956263s] May 12 10:39:13.158: INFO: Created: latency-svc-lwqbs May 12 10:39:13.176: INFO: Got endpoints: latency-svc-lwqbs [2.808746963s] May 12 10:39:13.199: INFO: Created: latency-svc-pkh2v May 12 10:39:13.257: INFO: Got endpoints: latency-svc-pkh2v [2.330483905s] May 12 10:39:13.283: INFO: Created: latency-svc-wjtsl May 12 10:39:13.298: INFO: Got endpoints: latency-svc-wjtsl [2.214921407s] May 12 10:39:13.331: INFO: Created: latency-svc-4tjq8 May 12 10:39:13.460: INFO: Got endpoints: latency-svc-4tjq8 [1.729434255s] May 12 10:39:13.467: INFO: Created: latency-svc-lm7gc May 12 10:39:13.483: INFO: Got 
endpoints: latency-svc-lm7gc [1.310298912s] May 12 10:39:13.517: INFO: Created: latency-svc-ppq9g May 12 10:39:13.547: INFO: Got endpoints: latency-svc-ppq9g [1.297645478s] May 12 10:39:13.597: INFO: Created: latency-svc-zzmj7 May 12 10:39:13.610: INFO: Got endpoints: latency-svc-zzmj7 [1.145178826s] May 12 10:39:13.630: INFO: Created: latency-svc-fdxnz May 12 10:39:13.647: INFO: Got endpoints: latency-svc-fdxnz [1.050557299s] May 12 10:39:13.673: INFO: Created: latency-svc-fddmh May 12 10:39:13.688: INFO: Got endpoints: latency-svc-fddmh [1.025361245s] May 12 10:39:13.735: INFO: Created: latency-svc-r6lp6 May 12 10:39:13.765: INFO: Created: latency-svc-6p957 May 12 10:39:13.765: INFO: Got endpoints: latency-svc-r6lp6 [993.392289ms] May 12 10:39:13.779: INFO: Got endpoints: latency-svc-6p957 [981.911791ms] May 12 10:39:13.799: INFO: Created: latency-svc-l75jp May 12 10:39:13.810: INFO: Got endpoints: latency-svc-l75jp [962.440758ms] May 12 10:39:13.829: INFO: Created: latency-svc-zqwsj May 12 10:39:13.891: INFO: Got endpoints: latency-svc-zqwsj [937.129023ms] May 12 10:39:13.912: INFO: Created: latency-svc-nxtpm May 12 10:39:13.931: INFO: Got endpoints: latency-svc-nxtpm [917.088613ms] May 12 10:39:13.955: INFO: Created: latency-svc-cdg49 May 12 10:39:13.985: INFO: Got endpoints: latency-svc-cdg49 [869.321798ms] May 12 10:39:14.053: INFO: Created: latency-svc-hkclv May 12 10:39:14.055: INFO: Got endpoints: latency-svc-hkclv [879.153778ms] May 12 10:39:14.116: INFO: Created: latency-svc-xxkwq May 12 10:39:14.135: INFO: Got endpoints: latency-svc-xxkwq [878.036054ms] May 12 10:39:14.233: INFO: Created: latency-svc-7jqlm May 12 10:39:14.268: INFO: Got endpoints: latency-svc-7jqlm [970.675469ms] May 12 10:39:14.413: INFO: Created: latency-svc-zvng8 May 12 10:39:14.424: INFO: Got endpoints: latency-svc-zvng8 [963.566438ms] May 12 10:39:14.459: INFO: Created: latency-svc-jnqhr May 12 10:39:14.556: INFO: Got endpoints: latency-svc-jnqhr [1.072336599s] May 12 10:39:14.591: INFO: Created: latency-svc-bdz6t May 12 10:39:14.617: INFO: Got endpoints: latency-svc-bdz6t [1.069657336s] May 12 10:39:14.712: INFO: Created: latency-svc-njsmh May 12 10:39:14.741: INFO: Got endpoints: latency-svc-njsmh [1.130630468s] May 12 10:39:14.742: INFO: Created: latency-svc-vpdxb May 12 10:39:14.778: INFO: Got endpoints: latency-svc-vpdxb [1.131401496s] May 12 10:39:14.904: INFO: Created: latency-svc-4nvkd May 12 10:39:14.911: INFO: Got endpoints: latency-svc-4nvkd [1.222647514s] May 12 10:39:14.964: INFO: Created: latency-svc-6mlw2 May 12 10:39:14.996: INFO: Got endpoints: latency-svc-6mlw2 [1.231154838s] May 12 10:39:15.161: INFO: Created: latency-svc-5glgj May 12 10:39:15.171: INFO: Got endpoints: latency-svc-5glgj [1.392324298s] May 12 10:39:15.196: INFO: Created: latency-svc-ln7b8 May 12 10:39:15.206: INFO: Got endpoints: latency-svc-ln7b8 [1.396440907s] May 12 10:39:15.251: INFO: Created: latency-svc-dbt7x May 12 10:39:15.310: INFO: Got endpoints: latency-svc-dbt7x [1.419546496s] May 12 10:39:15.329: INFO: Created: latency-svc-t4bls May 12 10:39:15.366: INFO: Got endpoints: latency-svc-t4bls [1.434815603s] May 12 10:39:15.562: INFO: Created: latency-svc-vpj85 May 12 10:39:15.563: INFO: Got endpoints: latency-svc-vpj85 [1.577359215s] May 12 10:39:15.599: INFO: Created: latency-svc-478dw May 12 10:39:15.624: INFO: Got endpoints: latency-svc-478dw [1.568106359s] May 12 10:39:15.729: INFO: Created: latency-svc-kp9cr May 12 10:39:15.761: INFO: Got endpoints: latency-svc-kp9cr [1.625805341s] May 12 10:39:15.945: INFO: 
Created: latency-svc-jdzdx May 12 10:39:16.003: INFO: Got endpoints: latency-svc-jdzdx [1.734358924s] May 12 10:39:16.005: INFO: Created: latency-svc-xcpv8 May 12 10:39:16.137: INFO: Got endpoints: latency-svc-xcpv8 [1.713176606s] May 12 10:39:16.194: INFO: Created: latency-svc-bbtll May 12 10:39:16.212: INFO: Got endpoints: latency-svc-bbtll [1.65656679s] May 12 10:39:16.293: INFO: Created: latency-svc-qtf6s May 12 10:39:16.296: INFO: Got endpoints: latency-svc-qtf6s [1.679605348s] May 12 10:39:16.380: INFO: Created: latency-svc-5mv94 May 12 10:39:16.472: INFO: Got endpoints: latency-svc-5mv94 [1.731537553s] May 12 10:39:16.488: INFO: Created: latency-svc-b4phv May 12 10:39:16.560: INFO: Got endpoints: latency-svc-b4phv [1.782111642s] May 12 10:39:16.685: INFO: Created: latency-svc-ghrw5 May 12 10:39:16.687: INFO: Got endpoints: latency-svc-ghrw5 [1.775807162s] May 12 10:39:16.740: INFO: Created: latency-svc-j6zdr May 12 10:39:16.826: INFO: Got endpoints: latency-svc-j6zdr [1.829632477s] May 12 10:39:16.836: INFO: Created: latency-svc-4s6f4 May 12 10:39:16.850: INFO: Got endpoints: latency-svc-4s6f4 [1.678287954s] May 12 10:39:16.910: INFO: Created: latency-svc-b9l5k May 12 10:39:16.922: INFO: Got endpoints: latency-svc-b9l5k [1.715697438s] May 12 10:39:16.970: INFO: Created: latency-svc-9cwb6 May 12 10:39:16.977: INFO: Got endpoints: latency-svc-9cwb6 [1.666747077s] May 12 10:39:17.029: INFO: Created: latency-svc-l7bd7 May 12 10:39:17.044: INFO: Got endpoints: latency-svc-l7bd7 [1.678345106s] May 12 10:39:17.131: INFO: Created: latency-svc-cn24f May 12 10:39:17.161: INFO: Got endpoints: latency-svc-cn24f [1.598352373s] May 12 10:39:17.161: INFO: Created: latency-svc-gdgtd May 12 10:39:17.191: INFO: Got endpoints: latency-svc-gdgtd [1.567585096s] May 12 10:39:17.220: INFO: Created: latency-svc-q8zr6 May 12 10:39:17.280: INFO: Got endpoints: latency-svc-q8zr6 [1.518671376s] May 12 10:39:17.304: INFO: Created: latency-svc-pckh4 May 12 10:39:17.318: INFO: Got endpoints: latency-svc-pckh4 [1.315727196s] May 12 10:39:17.376: INFO: Created: latency-svc-6vmjv May 12 10:39:17.448: INFO: Got endpoints: latency-svc-6vmjv [1.311354898s] May 12 10:39:17.450: INFO: Created: latency-svc-vjfmw May 12 10:39:17.457: INFO: Got endpoints: latency-svc-vjfmw [1.245177699s] May 12 10:39:17.478: INFO: Created: latency-svc-ps85p May 12 10:39:17.493: INFO: Got endpoints: latency-svc-ps85p [1.197048286s] May 12 10:39:17.514: INFO: Created: latency-svc-76hft May 12 10:39:17.658: INFO: Got endpoints: latency-svc-76hft [1.185480165s] May 12 10:39:17.700: INFO: Created: latency-svc-lrvtl May 12 10:39:17.746: INFO: Got endpoints: latency-svc-lrvtl [1.185940141s] May 12 10:39:17.807: INFO: Created: latency-svc-9g9x9 May 12 10:39:17.819: INFO: Got endpoints: latency-svc-9g9x9 [1.131887275s] May 12 10:39:17.875: INFO: Created: latency-svc-j85p6 May 12 10:39:17.897: INFO: Got endpoints: latency-svc-j85p6 [1.071395759s] May 12 10:39:17.951: INFO: Created: latency-svc-xw59l May 12 10:39:17.977: INFO: Got endpoints: latency-svc-xw59l [1.12716735s] May 12 10:39:18.001: INFO: Created: latency-svc-dxp7b May 12 10:39:18.006: INFO: Got endpoints: latency-svc-dxp7b [1.083510494s] May 12 10:39:18.036: INFO: Created: latency-svc-l8486 May 12 10:39:18.125: INFO: Got endpoints: latency-svc-l8486 [1.147898937s] May 12 10:39:18.127: INFO: Created: latency-svc-6bjp5 May 12 10:39:18.150: INFO: Got endpoints: latency-svc-6bjp5 [1.10635863s] May 12 10:39:18.204: INFO: Created: latency-svc-9c9j4 May 12 10:39:18.223: INFO: Got endpoints: 
latency-svc-9c9j4 [1.062004459s] May 12 10:39:18.293: INFO: Created: latency-svc-smbbd May 12 10:39:18.623: INFO: Got endpoints: latency-svc-smbbd [1.431425385s] May 12 10:39:18.631: INFO: Created: latency-svc-8z448 May 12 10:39:18.655: INFO: Got endpoints: latency-svc-8z448 [1.374879997s] May 12 10:39:18.852: INFO: Created: latency-svc-wgl7d May 12 10:39:18.889: INFO: Got endpoints: latency-svc-wgl7d [1.570478301s] May 12 10:39:19.042: INFO: Created: latency-svc-8shfs May 12 10:39:19.045: INFO: Got endpoints: latency-svc-8shfs [1.596567149s] May 12 10:39:19.106: INFO: Created: latency-svc-gz9b5 May 12 10:39:19.124: INFO: Got endpoints: latency-svc-gz9b5 [1.666556795s] May 12 10:39:19.230: INFO: Created: latency-svc-4lgqd May 12 10:39:19.238: INFO: Got endpoints: latency-svc-4lgqd [1.744681373s] May 12 10:39:19.261: INFO: Created: latency-svc-l42th May 12 10:39:19.277: INFO: Got endpoints: latency-svc-l42th [1.61935269s] May 12 10:39:19.277: INFO: Latencies: [157.287036ms 328.931407ms 381.904124ms 538.450631ms 650.142108ms 743.020536ms 791.590189ms 861.926906ms 869.321798ms 873.322024ms 874.326249ms 878.036054ms 879.153778ms 903.138222ms 904.764474ms 910.341566ms 917.088613ms 926.798525ms 937.129023ms 939.969288ms 943.750774ms 949.879428ms 952.602986ms 962.440758ms 963.566438ms 970.675469ms 981.911791ms 993.392289ms 1.025361245s 1.042022057s 1.050557299s 1.062004459s 1.069657336s 1.071395759s 1.072336599s 1.075199792s 1.079458131s 1.083510494s 1.090617881s 1.10635863s 1.106552553s 1.12716735s 1.130630468s 1.131401496s 1.131887275s 1.132511404s 1.145178826s 1.147898937s 1.148383737s 1.173561019s 1.185480165s 1.185940141s 1.197048286s 1.222647514s 1.231154838s 1.245177699s 1.245195988s 1.256243193s 1.264581996s 1.297645478s 1.310298912s 1.311354898s 1.315727196s 1.317074554s 1.374879997s 1.379895911s 1.380569686s 1.392324298s 1.396440907s 1.419546496s 1.431425385s 1.434815603s 1.439935362s 1.477409538s 1.493410826s 1.518671376s 1.567585096s 1.568106359s 1.570478301s 1.577359215s 1.596567149s 1.598352373s 1.61935269s 1.625805341s 1.65656679s 1.666556795s 1.666747077s 1.674569289s 1.678287954s 1.678345106s 1.679605348s 1.713176606s 1.715697438s 1.729434255s 1.731537553s 1.734358924s 1.744681373s 1.775807162s 1.782111642s 1.829632477s 1.882491274s 2.024709433s 2.114842407s 2.214921407s 2.296687358s 2.330483905s 2.525861494s 2.541274125s 2.61689437s 2.628398847s 2.638879371s 2.756026076s 2.785742939s 2.808746963s 2.863956263s 2.922580064s 3.139187948s 3.193879254s 3.245360507s 3.425045431s 3.458104549s 3.467807924s 3.64279819s 3.649591099s 3.742910086s 3.768019625s 3.838416498s 3.838879537s 3.870239083s 3.871898285s 3.884776016s 3.885988238s 3.941751162s 3.958814193s 3.983608438s 3.988169482s 4.001488628s 4.052793022s 4.057886891s 4.125037603s 4.133640591s 4.141427676s 4.152312662s 4.162334493s 4.162843312s 4.20272877s 4.282567396s 4.28956044s 4.339843195s 4.43789701s 4.480103272s 4.555273181s 4.653400171s 4.653629572s 4.678039914s 4.694639604s 4.716856092s 4.741807991s 4.743838035s 4.759664171s 4.761735908s 4.873069391s 4.93482541s 4.986824162s 5.066046005s 5.136664637s 5.156201389s 5.175066439s 5.194254638s 5.244484071s 5.245112992s 5.275700272s 5.289182332s 5.320145734s 5.351226958s 5.416714585s 5.469355685s 5.506216984s 5.542350379s 5.557523482s 5.597463571s 5.639808075s 5.654496356s 5.666791924s 5.688862675s 5.742048189s 5.809571685s 5.868018715s 5.91296902s 6.256147236s 6.261539911s 6.285233756s 6.453404019s 6.489722133s 6.509969652s 6.583316049s 6.771611803s 6.89175705s 7.078455781s 
7.121925927s] May 12 10:39:19.278: INFO: 50 %ile: 1.882491274s May 12 10:39:19.278: INFO: 90 %ile: 5.597463571s May 12 10:39:19.278: INFO: 99 %ile: 7.078455781s May 12 10:39:19.278: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:39:19.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-615" for this suite. • [SLOW TEST:46.177 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":288,"completed":67,"skipped":1137,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:39:19.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 12 10:39:19.395: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3791' May 12 10:39:19.657: INFO: stderr: "" May 12 10:39:19.657: INFO: stdout: "replicationcontroller/agnhost-master created\n" May 12 10:39:19.657: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3791' May 12 10:39:20.020: INFO: stderr: "" May 12 10:39:20.020: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 12 10:39:21.023: INFO: Selector matched 1 pods for map[app:agnhost] May 12 10:39:21.023: INFO: Found 0 / 1 May 12 10:39:22.024: INFO: Selector matched 1 pods for map[app:agnhost] May 12 10:39:22.024: INFO: Found 0 / 1 May 12 10:39:23.026: INFO: Selector matched 1 pods for map[app:agnhost] May 12 10:39:23.026: INFO: Found 0 / 1 May 12 10:39:24.024: INFO: Selector matched 1 pods for map[app:agnhost] May 12 10:39:24.024: INFO: Found 1 / 1 May 12 10:39:24.024: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 12 10:39:24.028: INFO: Selector matched 1 pods for map[app:agnhost] May 12 10:39:24.028: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 12 10:39:24.028: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe pod agnhost-master-2sv8z --namespace=kubectl-3791' May 12 10:39:24.158: INFO: stderr: "" May 12 10:39:24.158: INFO: stdout: "Name: agnhost-master-2sv8z\nNamespace: kubectl-3791\nPriority: 0\nNode: latest-worker2/172.17.0.12\nStart Time: Tue, 12 May 2020 10:39:19 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.44\nIPs:\n IP: 10.244.2.44\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://07cec20a2997d7f3286b10b8acba299e251d641bb699096acab3af5c30b8c56c\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 12 May 2020 10:39:22 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-pg6cj (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-pg6cj:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-pg6cj\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-3791/agnhost-master-2sv8z to latest-worker2\n Normal Pulled 4s kubelet, latest-worker2 Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\" already present on machine\n Normal Created 2s kubelet, latest-worker2 Created container agnhost-master\n Normal Started 2s kubelet, latest-worker2 Started container agnhost-master\n" May 12 10:39:24.158: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-3791' May 12 10:39:24.282: INFO: stderr: "" May 12 10:39:24.282: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-3791\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: agnhost-master-2sv8z\n" May 12 10:39:24.282: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-3791' May 12 10:39:24.416: INFO: stderr: "" May 12 10:39:24.416: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-3791\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.103.87.249\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.44:6379\nSession Affinity: None\nEvents: \n" May 12 10:39:24.420: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe 
node latest-control-plane' May 12 10:39:24.643: INFO: stderr: "" May 12 10:39:24.643: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 29 Apr 2020 09:53:29 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Tue, 12 May 2020 10:39:22 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 12 May 2020 10:38:58 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 12 May 2020 10:38:58 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 12 May 2020 10:38:58 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 12 May 2020 10:38:58 +0000 Wed, 29 Apr 2020 09:54:06 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3939cf129c9d4d6e85e611ab996d9137\n System UUID: 2573ae1d-4849-412e-9a34-432f95556990\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.18.2\n Kube-Proxy Version: v1.18.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-66bff467f8-8n5vh 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 13d\n kube-system coredns-66bff467f8-qr7l5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 13d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 13d\n kube-system kindnet-8x7pf 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 13d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 13d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 13d\n kube-system kube-proxy-h8mhz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 13d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 13d\n local-path-storage local-path-provisioner-bd4bb6b75-bmf2h 0 (0%) 0 (0%) 0 (0%) 0 (0%) 13d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" May 12 10:39:24.643: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config describe namespace kubectl-3791' May 12 10:39:24.782: INFO: stderr: "" May 12 10:39:24.782: INFO: stdout: "Name: kubectl-3791\nLabels: e2e-framework=kubectl\n e2e-run=919723fc-fe63-4f23-8afc-842e4e80785e\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:39:24.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3791" for this suite. • [SLOW TEST:5.497 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1083 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":288,"completed":68,"skipped":1166,"failed":0} SSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:39:24.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 12 10:39:24.996: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-b23910cb-c26e-4a37-9fd8-e35df5dd01c9" in namespace "security-context-test-6426" to be "Succeeded or Failed" May 12 10:39:25.061: INFO: Pod "alpine-nnp-false-b23910cb-c26e-4a37-9fd8-e35df5dd01c9": Phase="Pending", Reason="", readiness=false. Elapsed: 65.741115ms May 12 10:39:27.215: INFO: Pod "alpine-nnp-false-b23910cb-c26e-4a37-9fd8-e35df5dd01c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218923162s May 12 10:39:29.347: INFO: Pod "alpine-nnp-false-b23910cb-c26e-4a37-9fd8-e35df5dd01c9": Phase="Running", Reason="", readiness=true. Elapsed: 4.351080171s May 12 10:39:31.652: INFO: Pod "alpine-nnp-false-b23910cb-c26e-4a37-9fd8-e35df5dd01c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.656698712s May 12 10:39:31.652: INFO: Pod "alpine-nnp-false-b23910cb-c26e-4a37-9fd8-e35df5dd01c9" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:39:31.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6426" for this suite. 
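The alpine-nnp-false-* pod above succeeds only if privilege escalation is actually blocked inside the container. A minimal client-go sketch of the relevant part of such a pod spec (pod name and image are illustrative, not the exact ones the framework uses):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nnp := false // AllowPrivilegeEscalation is a *bool; nil means "unset"
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "alpine-nnp-false"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "alpine-nnp-false",
				Image: "alpine:3.11", // illustrative image
				SecurityContext: &corev1.SecurityContext{
					AllowPrivilegeEscalation: &nnp,
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}
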
• [SLOW TEST:7.117 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when creating containers with AllowPrivilegeEscalation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":69,"skipped":1169,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:39:31.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-0615f868-5e66-44f8-95fb-cd4367e8389e STEP: Creating secret with name s-test-opt-upd-0843658b-5fe2-471d-8c46-4e13c2cb2225 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-0615f868-5e66-44f8-95fb-cd4367e8389e STEP: Updating secret s-test-opt-upd-0843658b-5fe2-471d-8c46-4e13c2cb2225 STEP: Creating secret with name s-test-opt-create-6c35e33c-5847-4dda-be79-210f5a09b5a7 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:41:06.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5496" for this suite. 
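The s-test-opt-* secrets above are mounted as optional volumes, which is what lets the pod keep running while one secret is deleted and another is created mid-test; the kubelet then reconciles the volume contents, and the spec waits to observe the update. A minimal sketch of such a volume definition (the secret name is illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Optional means the pod starts even if the secret does not exist yet;
	// the kubelet projects the data into the volume once it appears.
	optional := true
	vol := corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "s-test-opt-create", // illustrative name
				Optional:   &optional,
			},
		},
	}
	fmt.Println(vol.Name)
}
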
• [SLOW TEST:94.611 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":70,"skipped":1177,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:41:06.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7467.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-7467.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7467.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7467.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-7467.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7467.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 12 10:41:27.315: INFO: DNS probes using dns-7467/dns-test-5baaa10b-6c44-4ba5-9e56-49b956a36200 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:41:29.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7467" for this suite. 
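The getent-based probe scripts above pass because the kubelet writes the pod's own hostname and FQDN into the container's /etc/hosts. A rough Go equivalent of one probe, meant to run inside the probe pod (the name is taken from the spec above; note Go's resolver normally consults /etc/hosts first, though NSS configuration can change that):

package main

import (
	"fmt"
	"net"
)

func main() {
	name := "dns-querier-1.dns-test-service.dns-7467.svc.cluster.local"
	addrs, err := net.LookupHost(name)
	if err != nil {
		fmt.Println("FAIL:", err)
		return
	}
	fmt.Println("OK:", addrs) // the real probe writes OK to /results on success
}
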
• [SLOW TEST:24.875 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":288,"completed":71,"skipped":1233,"failed":0} SS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:41:31.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:41:34.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6972" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":288,"completed":72,"skipped":1235,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:41:35.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 12 10:41:36.515: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:41:46.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1225" for this suite. 
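The spec above retrieves container logs over a websocket connection to the API server rather than a plain HTTP stream. For comparison, a minimal client-go sketch of the ordinary streaming path against the same log subresource (kubeconfig path matches this run; the pod name is illustrative, and this uses HTTP streaming, not the websocket transport the test exercises):

package main

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// GetLogs builds a request against the pod's "log" subresource.
	req := clientset.CoreV1().Pods("pods-1225").GetLogs(
		"pod-logs-websocket", &corev1.PodLogOptions{}) // illustrative pod name
	stream, err := req.Stream(context.Background())
	if err != nil {
		panic(err)
	}
	defer stream.Close()
	io.Copy(os.Stdout, stream)
}
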
• [SLOW TEST:12.196 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":288,"completed":73,"skipped":1252,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:41:47.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 12 10:41:47.910: INFO: >>> kubeConfig: /root/.kube/config May 12 10:41:49.922: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:42:03.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-53" for this suite. 
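What the spec above asserts is that both CRD kinds show up in the aggregated OpenAPI document the API server serves at /openapi/v2. A minimal sketch of fetching that document with client-go (kubeconfig path matches this run):

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// AbsPath bypasses the usual group/version routing and hits the raw path.
	body, err := clientset.CoreV1().RESTClient().
		Get().
		AbsPath("/openapi/v2").
		DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("fetched OpenAPI v2 document: %d bytes\n", len(body))
}
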
• [SLOW TEST:16.936 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":288,"completed":74,"skipped":1278,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:42:04.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 12 10:42:05.294: INFO: Pod name pod-release: Found 0 pods out of 1 May 12 10:42:10.516: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:42:10.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7210" for this suite. 
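"Releasing" a pod here means changing its labels so the ReplicationController's selector no longer matches it; the controller then orphans the pod (removing its ownerReference) and spins up a replacement. A minimal sketch of that label change via a strategic-merge patch (the pod name and new label value are illustrative; the namespace is from this run):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// Overwrite the label the RC selects on; the pod no longer matches.
	patch := []byte(`{"metadata":{"labels":{"name":"released"}}}`)
	_, err = clientset.CoreV1().Pods("replication-controller-7210").Patch(
		context.Background(), "pod-release-xxxxx", // illustrative pod name
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod released from its ReplicationController")
}
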
• [SLOW TEST:6.583 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":288,"completed":75,"skipped":1292,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:42:10.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments May 12 10:42:11.314: INFO: Waiting up to 5m0s for pod "client-containers-f1e59e65-692e-462a-8810-6b5d7e236a20" in namespace "containers-3097" to be "Succeeded or Failed" May 12 10:42:11.374: INFO: Pod "client-containers-f1e59e65-692e-462a-8810-6b5d7e236a20": Phase="Pending", Reason="", readiness=false. Elapsed: 59.509423ms May 12 10:42:13.750: INFO: Pod "client-containers-f1e59e65-692e-462a-8810-6b5d7e236a20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.435808965s May 12 10:42:16.018: INFO: Pod "client-containers-f1e59e65-692e-462a-8810-6b5d7e236a20": Phase="Running", Reason="", readiness=true. Elapsed: 4.704295653s May 12 10:42:18.193: INFO: Pod "client-containers-f1e59e65-692e-462a-8810-6b5d7e236a20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.878983268s STEP: Saw pod success May 12 10:42:18.193: INFO: Pod "client-containers-f1e59e65-692e-462a-8810-6b5d7e236a20" satisfied condition "Succeeded or Failed" May 12 10:42:18.286: INFO: Trying to get logs from node latest-worker pod client-containers-f1e59e65-692e-462a-8810-6b5d7e236a20 container test-container: STEP: delete the pod May 12 10:42:19.549: INFO: Waiting for pod client-containers-f1e59e65-692e-462a-8810-6b5d7e236a20 to disappear May 12 10:42:19.882: INFO: Pod client-containers-f1e59e65-692e-462a-8810-6b5d7e236a20 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:42:19.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3097" for this suite. 
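In pod terms, overriding the image's default arguments (docker CMD) means setting the container's Args while leaving Command unset, so the image's ENTRYPOINT still runs. A minimal sketch (the agnhost image is the one used throughout this run; the specific arguments are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "test-container",
		Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13",
		// Args replaces the image's CMD; Command (unset here) would
		// replace the ENTRYPOINT as well.
		Args: []string{"entrypoint-tester", "override", "arguments"},
	}
	fmt.Println(c.Args)
}
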
• [SLOW TEST:9.448 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":288,"completed":76,"skipped":1303,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:42:20.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 10:42:21.935: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 10:42:23.946: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876941, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876941, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876942, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876941, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:42:25.989: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876941, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876941, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876942, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876941, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 10:42:29.223: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:42:40.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4694" for this suite. STEP: Destroying namespace "webhook-4694-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:21.291 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":288,"completed":77,"skipped":1306,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:42:41.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 12 10:42:42.279: INFO: Waiting up to 5m0s for pod "downwardapi-volume-07b599b9-a54e-4750-b8b7-f90be78f2030" in namespace "downward-api-7760" to be "Succeeded or Failed" May 12 10:42:42.366: INFO: Pod "downwardapi-volume-07b599b9-a54e-4750-b8b7-f90be78f2030": Phase="Pending", Reason="", readiness=false. 
Elapsed: 86.28254ms May 12 10:42:44.368: INFO: Pod "downwardapi-volume-07b599b9-a54e-4750-b8b7-f90be78f2030": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089241003s May 12 10:42:46.382: INFO: Pod "downwardapi-volume-07b599b9-a54e-4750-b8b7-f90be78f2030": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10251805s May 12 10:42:48.595: INFO: Pod "downwardapi-volume-07b599b9-a54e-4750-b8b7-f90be78f2030": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.316126796s STEP: Saw pod success May 12 10:42:48.595: INFO: Pod "downwardapi-volume-07b599b9-a54e-4750-b8b7-f90be78f2030" satisfied condition "Succeeded or Failed" May 12 10:42:48.972: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-07b599b9-a54e-4750-b8b7-f90be78f2030 container client-container: STEP: delete the pod May 12 10:42:50.157: INFO: Waiting for pod downwardapi-volume-07b599b9-a54e-4750-b8b7-f90be78f2030 to disappear May 12 10:42:50.391: INFO: Pod downwardapi-volume-07b599b9-a54e-4750-b8b7-f90be78f2030 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:42:50.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7760" for this suite. • [SLOW TEST:9.212 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":78,"skipped":1318,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:42:50.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 12 10:42:51.029: INFO: Waiting up to 5m0s for pod "downwardapi-volume-071aa860-5272-4e23-95a0-1907bfdb2cae" in namespace "projected-9090" to be "Succeeded or Failed" May 12 10:42:51.096: INFO: Pod "downwardapi-volume-071aa860-5272-4e23-95a0-1907bfdb2cae": Phase="Pending", Reason="", readiness=false. Elapsed: 66.963558ms May 12 10:42:53.223: INFO: Pod "downwardapi-volume-071aa860-5272-4e23-95a0-1907bfdb2cae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194278252s May 12 10:42:55.228: INFO: Pod "downwardapi-volume-071aa860-5272-4e23-95a0-1907bfdb2cae": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.198746768s May 12 10:42:57.231: INFO: Pod "downwardapi-volume-071aa860-5272-4e23-95a0-1907bfdb2cae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.202051655s STEP: Saw pod success May 12 10:42:57.231: INFO: Pod "downwardapi-volume-071aa860-5272-4e23-95a0-1907bfdb2cae" satisfied condition "Succeeded or Failed" May 12 10:42:57.311: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-071aa860-5272-4e23-95a0-1907bfdb2cae container client-container: STEP: delete the pod May 12 10:42:57.407: INFO: Waiting for pod downwardapi-volume-071aa860-5272-4e23-95a0-1907bfdb2cae to disappear May 12 10:42:57.418: INFO: Pod downwardapi-volume-071aa860-5272-4e23-95a0-1907bfdb2cae no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:42:57.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9090" for this suite. • [SLOW TEST:6.744 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":79,"skipped":1323,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:42:57.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 10:42:58.537: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 10:43:00.546: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876978, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876978, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876978, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876978, loc:(*time.Location)(0x7c342a0)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 10:43:03.711: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:43:04.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9413" for this suite. STEP: Destroying namespace "webhook-9413-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.469 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":288,"completed":80,"skipped":1325,"failed":0} SSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:43:05.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 12 10:43:15.075: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:43:16.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-2038" for this suite. 
• [SLOW TEST:10.272 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":288,"completed":81,"skipped":1330,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:43:16.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all May 12 10:43:18.291: INFO: Waiting up to 5m0s for pod "client-containers-e7349bb7-999b-471a-9ce7-f1ef12d24496" in namespace "containers-4805" to be "Succeeded or Failed" May 12 10:43:18.960: INFO: Pod "client-containers-e7349bb7-999b-471a-9ce7-f1ef12d24496": Phase="Pending", Reason="", readiness=false. Elapsed: 668.395147ms May 12 10:43:21.031: INFO: Pod "client-containers-e7349bb7-999b-471a-9ce7-f1ef12d24496": Phase="Pending", Reason="", readiness=false. Elapsed: 2.739827467s May 12 10:43:23.444: INFO: Pod "client-containers-e7349bb7-999b-471a-9ce7-f1ef12d24496": Phase="Pending", Reason="", readiness=false. Elapsed: 5.152570413s May 12 10:43:25.852: INFO: Pod "client-containers-e7349bb7-999b-471a-9ce7-f1ef12d24496": Phase="Pending", Reason="", readiness=false. Elapsed: 7.56066295s May 12 10:43:27.983: INFO: Pod "client-containers-e7349bb7-999b-471a-9ce7-f1ef12d24496": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.691961292s STEP: Saw pod success May 12 10:43:27.984: INFO: Pod "client-containers-e7349bb7-999b-471a-9ce7-f1ef12d24496" satisfied condition "Succeeded or Failed" May 12 10:43:27.986: INFO: Trying to get logs from node latest-worker2 pod client-containers-e7349bb7-999b-471a-9ce7-f1ef12d24496 container test-container: STEP: delete the pod May 12 10:43:28.219: INFO: Waiting for pod client-containers-e7349bb7-999b-471a-9ce7-f1ef12d24496 to disappear May 12 10:43:28.421: INFO: Pod client-containers-e7349bb7-999b-471a-9ce7-f1ef12d24496 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:43:28.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4805" for this suite. 
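Unlike the earlier test that overrode only the arguments, the "override all" pod sets both fields: "command" replaces the image's ENTRYPOINT and "args" replaces its CMD, so nothing baked into the image runs by default. A small illustrative sketch of such a container spec follows; the image is a placeholder, not the suite's test image.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Setting both fields replaces the image's ENTRYPOINT (Command) and
	// its CMD (Args), so the image defaults are ignored entirely.
	c := corev1.Container{
		Name:    "test-container",
		Image:   "busybox:1.29", // placeholder image
		Command: []string{"/bin/sh", "-c"},
		Args:    []string{"echo override all"},
	}
	fmt.Printf("%s runs %v with args %v\n", c.Name, c.Command, c.Args)
}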
• [SLOW TEST:12.919 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":288,"completed":82,"skipped":1342,"failed":0} SS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:43:29.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 12 10:43:29.947: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 12 10:43:29.965: INFO: Pod name sample-pod: Found 0 pods out of 1 May 12 10:43:35.428: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 12 10:43:37.433: INFO: Creating deployment "test-rolling-update-deployment" May 12 10:43:37.437: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 12 10:43:37.448: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 12 10:43:39.455: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 12 10:43:39.457: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877017, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877017, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877017, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877017, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-df7bb669b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:43:41.589: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877017, 
loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877017, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877017, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877017, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-df7bb669b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:43:43.528: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877017, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877017, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877017, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877017, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-df7bb669b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:43:45.620: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 12 10:43:46.019: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-5087 /apis/apps/v1/namespaces/deployment-5087/deployments/test-rolling-update-deployment 203f3193-ac77-4282-b973-f54e8e99e2f8 3782881 1 2020-05-12 10:43:37 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-05-12 10:43:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-12 10:43:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003c6d1f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-12 10:43:37 +0000 UTC,LastTransitionTime:2020-05-12 10:43:37 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-df7bb669b" has successfully progressed.,LastUpdateTime:2020-05-12 10:43:44 +0000 UTC,LastTransitionTime:2020-05-12 10:43:37 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 12 10:43:46.052: INFO: New ReplicaSet "test-rolling-update-deployment-df7bb669b" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-df7bb669b deployment-5087 /apis/apps/v1/namespaces/deployment-5087/replicasets/test-rolling-update-deployment-df7bb669b b0d06e17-58c2-4505-ac4b-29f0d7e8d138 3782869 1 2020-05-12 10:43:37 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 203f3193-ac77-4282-b973-f54e8e99e2f8 0xc003c6d780 0xc003c6d781}] [] [{kube-controller-manager Update apps/v1 2020-05-12 10:43:43 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"203f3193-ac77-4282-b973-f54e8e99e2f8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: df7bb669b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003c6d7f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 12 10:43:46.052: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 12 10:43:46.052: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-5087 /apis/apps/v1/namespaces/deployment-5087/replicasets/test-rolling-update-controller 3c1ead33-f09f-4b8a-ad55-f4c42830e3ac 3782880 2 2020-05-12 10:43:29 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 203f3193-ac77-4282-b973-f54e8e99e2f8 0xc003c6d677 0xc003c6d678}] [] [{e2e.test Update apps/v1 2020-05-12 10:43:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager 
Update apps/v1 2020-05-12 10:43:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"203f3193-ac77-4282-b973-f54e8e99e2f8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003c6d718 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 12 10:43:46.372: INFO: Pod "test-rolling-update-deployment-df7bb669b-2b9wb" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-df7bb669b-2b9wb test-rolling-update-deployment-df7bb669b- deployment-5087 /api/v1/namespaces/deployment-5087/pods/test-rolling-update-deployment-df7bb669b-2b9wb ac3b3afe-d431-4f09-99c6-cf0f0471d81d 3782868 0 2020-05-12 10:43:37 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-df7bb669b b0d06e17-58c2-4505-ac4b-29f0d7e8d138 0xc002a6cb10 0xc002a6cb11}] [] [{kube-controller-manager Update v1 2020-05-12 10:43:37 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b0d06e17-58c2-4505-ac4b-29f0d7e8d138\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-12 10:43:43 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.52\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d6lzw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d6lzw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d6lzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 10:43:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-12 10:43:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 10:43:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 10:43:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.52,StartTime:2020-05-12 10:43:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-12 10:43:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://6e96c2945add239174aab1885958846dc7fd5c3ab843eff08814e7274e71cf6e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.52,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:43:46.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5087" for this suite. • [SLOW TEST:17.289 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":83,"skipped":1344,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:43:46.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 12 10:43:46.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 12 10:43:48.026: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-12T10:43:47Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-12T10:43:47Z]] name:name1 resourceVersion:3782901 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 
uid:bbe6ee5f-58f9-454c-8c0f-d5971335c23a] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 12 10:43:58.032: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-12T10:43:58Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-12T10:43:58Z]] name:name2 resourceVersion:3782953 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:7fbff8e2-6123-44fc-83df-d7eb72651a19] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 12 10:44:08.039: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-12T10:43:47Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-12T10:44:08Z]] name:name1 resourceVersion:3782980 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:bbe6ee5f-58f9-454c-8c0f-d5971335c23a] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 12 10:44:18.044: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-12T10:43:58Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-12T10:44:18Z]] name:name2 resourceVersion:3783008 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:7fbff8e2-6123-44fc-83df-d7eb72651a19] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR May 12 10:44:28.055: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-12T10:43:47Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-12T10:44:08Z]] name:name1 resourceVersion:3783036 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:bbe6ee5f-58f9-454c-8c0f-d5971335c23a] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 12 10:44:38.062: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-12T10:43:58Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-12T10:44:18Z]] name:name2 resourceVersion:3783063 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:7fbff8e2-6123-44fc-83df-d7eb72651a19] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:44:48.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-3701" for this suite. • [SLOW TEST:62.867 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":288,"completed":84,"skipped":1354,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:44:49.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-661e482c-39c6-4f6c-b670-42a9b4a9fb50 STEP: Creating a pod to test consume secrets May 12 10:44:49.574: INFO: Waiting up to 5m0s for pod "pod-secrets-2fb9b906-0997-449f-a037-ab56bd9eac42" in namespace "secrets-49" to be "Succeeded or Failed" May 12 10:44:49.585: INFO: Pod "pod-secrets-2fb9b906-0997-449f-a037-ab56bd9eac42": Phase="Pending", Reason="", readiness=false. Elapsed: 10.961089ms May 12 10:44:51.613: INFO: Pod "pod-secrets-2fb9b906-0997-449f-a037-ab56bd9eac42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039301914s May 12 10:44:53.620: INFO: Pod "pod-secrets-2fb9b906-0997-449f-a037-ab56bd9eac42": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046091593s May 12 10:44:55.715: INFO: Pod "pod-secrets-2fb9b906-0997-449f-a037-ab56bd9eac42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.14129804s STEP: Saw pod success May 12 10:44:55.715: INFO: Pod "pod-secrets-2fb9b906-0997-449f-a037-ab56bd9eac42" satisfied condition "Succeeded or Failed" May 12 10:44:55.718: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-2fb9b906-0997-449f-a037-ab56bd9eac42 container secret-volume-test: STEP: delete the pod May 12 10:44:56.227: INFO: Waiting for pod pod-secrets-2fb9b906-0997-449f-a037-ab56bd9eac42 to disappear May 12 10:44:56.511: INFO: Pod pod-secrets-2fb9b906-0997-449f-a037-ab56bd9eac42 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:44:56.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-49" for this suite. 
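The defaultMode variant above checks permissions rather than content: every file projected from the secret is created with the mode requested in the volume source, and the test container reads those bits back. The sketch below shows the relevant volume declaration; the secret name is a placeholder and 0400 is a representative mode, not necessarily the one this run used.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// DefaultMode sets the permission bits on every file projected from
	// the secret; 0400 makes them owner-read-only inside the container.
	mode := int32(0400)
	vol := corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName:  "secret-test", // placeholder name
				DefaultMode: &mode,
			},
		},
	}
	fmt.Printf("volume %q projects secret %q with mode %o\n",
		vol.Name, vol.VolumeSource.Secret.SecretName, *vol.VolumeSource.Secret.DefaultMode)
}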
• [SLOW TEST:7.271 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":85,"skipped":1358,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:44:56.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name projected-secret-test-7a99a750-cec4-4d74-b5e2-4eb921aa03f1 STEP: Creating a pod to test consume secrets May 12 10:44:56.840: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f177ee30-2f69-4f60-8202-69a907c3fb79" in namespace "projected-8996" to be "Succeeded or Failed" May 12 10:44:56.847: INFO: Pod "pod-projected-secrets-f177ee30-2f69-4f60-8202-69a907c3fb79": Phase="Pending", Reason="", readiness=false. Elapsed: 6.337611ms May 12 10:44:59.039: INFO: Pod "pod-projected-secrets-f177ee30-2f69-4f60-8202-69a907c3fb79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.198087679s May 12 10:45:01.218: INFO: Pod "pod-projected-secrets-f177ee30-2f69-4f60-8202-69a907c3fb79": Phase="Pending", Reason="", readiness=false. Elapsed: 4.377477453s May 12 10:45:03.353: INFO: Pod "pod-projected-secrets-f177ee30-2f69-4f60-8202-69a907c3fb79": Phase="Pending", Reason="", readiness=false. Elapsed: 6.512671913s May 12 10:45:05.356: INFO: Pod "pod-projected-secrets-f177ee30-2f69-4f60-8202-69a907c3fb79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.515843511s STEP: Saw pod success May 12 10:45:05.356: INFO: Pod "pod-projected-secrets-f177ee30-2f69-4f60-8202-69a907c3fb79" satisfied condition "Succeeded or Failed" May 12 10:45:05.359: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-f177ee30-2f69-4f60-8202-69a907c3fb79 container secret-volume-test: STEP: delete the pod May 12 10:45:05.522: INFO: Waiting for pod pod-projected-secrets-f177ee30-2f69-4f60-8202-69a907c3fb79 to disappear May 12 10:45:05.568: INFO: Pod pod-projected-secrets-f177ee30-2f69-4f60-8202-69a907c3fb79 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:45:05.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8996" for this suite. 
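The "multiple volumes" case differs from the plain secret tests only in plumbing: one secret is exposed through two projected volumes, each mounted at a different path in the same pod, and the container verifies both copies. A sketch of how such volumes could be declared, with placeholder volume and secret names:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// projectedSecret builds one projected volume exposing a single secret;
// the test pod declares two of these and mounts them at different paths.
func projectedSecret(volName, secretName string) corev1.Volume {
	return corev1.Volume{
		Name: volName,
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
					},
				}},
			},
		},
	}
}

func main() {
	vols := []corev1.Volume{
		projectedSecret("projected-secret-volume-1", "projected-secret-test"),
		projectedSecret("projected-secret-volume-2", "projected-secret-test"),
	}
	for _, v := range vols {
		fmt.Println("declared volume:", v.Name)
	}
}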
• [SLOW TEST:9.179 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":86,"skipped":1368,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:45:05.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:45:10.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9978" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":87,"skipped":1370,"failed":0} ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:45:10.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 12 10:45:10.318: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-925 /api/v1/namespaces/watch-925/configmaps/e2e-watch-test-label-changed 29e5f8a7-fbb4-476b-9c97-5fe48c9ac037 3783220 0 2020-05-12 10:45:10 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-12 10:45:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 12 10:45:10.319: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-925 /api/v1/namespaces/watch-925/configmaps/e2e-watch-test-label-changed 29e5f8a7-fbb4-476b-9c97-5fe48c9ac037 3783221 0 2020-05-12 10:45:10 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-12 10:45:10 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 12 10:45:10.319: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-925 /api/v1/namespaces/watch-925/configmaps/e2e-watch-test-label-changed 29e5f8a7-fbb4-476b-9c97-5fe48c9ac037 3783222 0 2020-05-12 10:45:10 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-12 10:45:10 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 12 10:45:20.585: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-925 /api/v1/namespaces/watch-925/configmaps/e2e-watch-test-label-changed 29e5f8a7-fbb4-476b-9c97-5fe48c9ac037 3783270 0 2020-05-12 10:45:10 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-12 10:45:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 12 10:45:20.585: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-925 /api/v1/namespaces/watch-925/configmaps/e2e-watch-test-label-changed 29e5f8a7-fbb4-476b-9c97-5fe48c9ac037 3783271 0 2020-05-12 10:45:10 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-12 10:45:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} May 12 10:45:20.585: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-925 /api/v1/namespaces/watch-925/configmaps/e2e-watch-test-label-changed 29e5f8a7-fbb4-476b-9c97-5fe48c9ac037 3783272 0 2020-05-12 10:45:10 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-12 10:45:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:45:20.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
STEP: Destroying namespace "watch-925" for this suite.
• [SLOW TEST:10.555 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":288,"completed":88,"skipped":1370,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 12 10:45:20.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 12 10:45:20.865: INFO: Pod name rollover-pod: Found 0 pods out of 1
May 12 10:45:25.881: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May 12 10:45:25.881: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
May 12 10:45:28.018: INFO: Creating deployment "test-rollover-deployment"
May 12 10:45:28.084: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
May 12 10:45:30.129: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
May 12 10:45:30.308: INFO: Ensure that both replica sets have 1 created replica
May 12 10:45:30.317: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
May 12 10:45:30.323: INFO: Updating deployment test-rollover-deployment
May 12 10:45:30.323: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
May 12 10:45:32.372: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
May 12 10:45:32.506: INFO: Make sure deployment "test-rollover-deployment" is complete
May 12 10:45:32.511: INFO: all replica sets need to contain the pod-template-hash label
May 12 10:45:32.512: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877128, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877128, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877130, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877128, loc:(*time.Location)(0x7c342a0)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:45:34.518: INFO: all replica sets need to contain the pod-template-hash label May 12 10:45:34.518: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877128, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877128, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877130, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877128, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:45:36.519: INFO: all replica sets need to contain the pod-template-hash label May 12 10:45:36.519: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877128, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877128, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877135, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877128, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:45:38.517: INFO: all replica sets need to contain the pod-template-hash label May 12 10:45:38.517: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877128, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877128, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877135, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877128, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:45:40.586: INFO: all replica sets need to contain the pod-template-hash label May 12 10:45:40.586: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877128, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877128, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877135, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877128, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:45:42.538: INFO: all replica sets need to contain the pod-template-hash label May 12 10:45:42.538: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877128, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877128, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877135, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877128, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:45:44.520: INFO: all replica sets need to contain the pod-template-hash label May 12 10:45:44.520: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877128, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877128, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877135, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877128, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:45:46.773: INFO: May 12 10:45:46.773: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877128, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877128, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877145, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877128, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:45:48.517: INFO: May 12 10:45:48.517: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 12 10:45:48.523: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-6934 /apis/apps/v1/namespaces/deployment-6934/deployments/test-rollover-deployment d6202376-8ae4-4058-be32-4506caaac45c 3783429 2 2020-05-12 10:45:28 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-12 10:45:30 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-12 10:45:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001252228 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-12 10:45:28 +0000 UTC,LastTransitionTime:2020-05-12 10:45:28 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-7c4fd9c879" has successfully progressed.,LastUpdateTime:2020-05-12 10:45:47 +0000 UTC,LastTransitionTime:2020-05-12 10:45:28 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 12 10:45:48.526: INFO: New ReplicaSet "test-rollover-deployment-7c4fd9c879" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-7c4fd9c879 deployment-6934 /apis/apps/v1/namespaces/deployment-6934/replicasets/test-rollover-deployment-7c4fd9c879 9da8350e-a3fa-4ea6-963c-d3d7cec7e33b 3783412 2 2020-05-12 10:45:30 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment d6202376-8ae4-4058-be32-4506caaac45c 0xc001252bc7 0xc001252bc8}] [] [{kube-controller-manager Update apps/v1 2020-05-12 10:45:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d6202376-8ae4-4058-be32-4506caaac45c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 7c4fd9c879,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001252c98 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 12 10:45:48.526: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 12 10:45:48.526: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-6934 /apis/apps/v1/namespaces/deployment-6934/replicasets/test-rollover-controller fa93748c-4b17-4791-abc3-06cf4a3f4b27 3783427 2 2020-05-12 10:45:20 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment d6202376-8ae4-4058-be32-4506caaac45c 0xc00125295f 0xc001252970}] [] [{e2e.test Update apps/v1 2020-05-12 10:45:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-12 10:45:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d6202376-8ae4-4058-be32-4506caaac45c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001252a18 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 12 10:45:48.526: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5 deployment-6934 /apis/apps/v1/namespaces/deployment-6934/replicasets/test-rollover-deployment-5686c4cfd5 73285848-5a8f-4dc4-ab38-bc9cb83a6265 3783358 2 2020-05-12 10:45:28 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment d6202376-8ae4-4058-be32-4506caaac45c 0xc001252ab7 0xc001252ab8}] [] [{kube-controller-manager Update apps/v1 
2020-05-12 10:45:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d6202376-8ae4-4058-be32-4506caaac45c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001252b48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 12 10:45:48.529: INFO: Pod "test-rollover-deployment-7c4fd9c879-bdnxc" is available: &Pod{ObjectMeta:{test-rollover-deployment-7c4fd9c879-bdnxc test-rollover-deployment-7c4fd9c879- deployment-6934 /api/v1/namespaces/deployment-6934/pods/test-rollover-deployment-7c4fd9c879-bdnxc 140c3937-dfed-4510-a351-979578ba3e5c 3783380 0 2020-05-12 10:45:30 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [{apps/v1 ReplicaSet test-rollover-deployment-7c4fd9c879 9da8350e-a3fa-4ea6-963c-d3d7cec7e33b 0xc005440f67 0xc005440f68}] [] [{kube-controller-manager Update v1 2020-05-12 10:45:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9da8350e-a3fa-4ea6-963c-d3d7cec7e33b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-12 10:45:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.57\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8thkf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8thkf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8thkf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 10:45:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-12 10:45:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 10:45:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 10:45:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.57,StartTime:2020-05-12 10:45:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-12 10:45:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://cefe8109816c5a66616d33e2874ebc6e89e014ee260e9e4da374992f89a07d86,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.57,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 12 10:45:48.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6934" for this suite.
• [SLOW TEST:27.869 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":288,"completed":89,"skipped":1381,"failed":0}
SSSSSSSSSSSS
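[Note] The repeated status dumps above make more sense against the deployment's spec, visible later in the dump: a RollingUpdate strategy with MaxUnavailable:0 / MaxSurge:1 and MinReadySeconds:10. The new ReplicaSet's pod becomes Ready at 10:45:35 but only counts as available after staying Ready for the full 10 seconds, which is why UnavailableReplicas sits at 1 until the 10:45:46 poll, at which point the old ReplicaSets are scaled to zero. A sketch of an equivalent setup; the deployment and container names are hypothetical, while the images are the ones this test actually uses:

  # Sketch only: a rollover gated by minReadySeconds and a surge-only strategy.
  cat <<'EOF' | kubectl apply -f -
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: rollover-demo              # hypothetical name
  spec:
    replicas: 1
    minReadySeconds: 10              # a pod must stay Ready 10s to count as available
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 0            # never dip below the desired replica count
        maxSurge: 1                  # allow one extra pod during the rollover
    selector:
      matchLabels:
        name: rollover-pod
    template:
      metadata:
        labels:
          name: rollover-pod
      spec:
        containers:
        - name: web                  # hypothetical container name
          image: docker.io/library/httpd:2.4.38-alpine
  EOF
  # Roll over to a new image, as the test does, then watch the old ReplicaSets drain:
  kubectl set image deployment/rollover-demo web=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
  kubectl rollout status deployment/rollover-demo
  kubectl get rs -l name=rollover-pod   # old ReplicaSets should end at 0 replicas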
------------------------------
[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 12 10:45:48.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-535
STEP: creating service affinity-clusterip in namespace services-535
STEP: creating replication controller affinity-clusterip in namespace services-535
I0512 10:45:49.008175 7 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-535, replica count: 3
I0512 10:45:52.058499 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0512 10:45:55.058735 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0512 10:45:58.058955 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 12 10:45:58.065: INFO: Creating new exec pod
May 12 10:46:03.186: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-535 execpod-affinityskl98 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
May 12 10:46:12.737: INFO: stderr: "I0512 10:46:12.646712 1546 log.go:172] (0xc00003a420) (0xc00069f680) Create stream\nI0512 10:46:12.646776 1546 log.go:172] (0xc00003a420) (0xc00069f680) Stream added, broadcasting: 1\nI0512 10:46:12.651981 1546 log.go:172] (0xc00003a420) Reply frame received for 1\nI0512 10:46:12.652030 1546 log.go:172] (0xc00003a420) (0xc00066cf00) Create stream\nI0512 10:46:12.652047 1546 log.go:172] (0xc00003a420) (0xc00066cf00) Stream added, broadcasting: 3\nI0512 10:46:12.652904 1546 log.go:172] (0xc00003a420) Reply frame received for 3\nI0512 10:46:12.652946 1546 log.go:172] (0xc00003a420) (0xc00065c640) Create stream\nI0512 10:46:12.652960 1546 log.go:172] (0xc00003a420) (0xc00065c640) Stream added, broadcasting: 5\nI0512 10:46:12.654105 1546 log.go:172] (0xc00003a420) Reply frame received for 5\nI0512 10:46:12.728924 1546 log.go:172] (0xc00003a420) Data frame received for 5\nI0512 10:46:12.728962 1546 log.go:172] (0xc00065c640) (5) Data frame handling\nI0512 10:46:12.728989 1546 log.go:172] (0xc00065c640) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip 80\nI0512 10:46:12.730039 1546 log.go:172] (0xc00003a420) Data frame received for 3\nI0512 10:46:12.730073 1546 log.go:172] (0xc00066cf00) (3) Data frame handling\nI0512 10:46:12.730111 1546 log.go:172] (0xc00003a420) Data frame received for 5\nI0512 10:46:12.730135 1546 log.go:172] (0xc00065c640) (5) Data frame handling\nI0512 10:46:12.730155 1546 log.go:172] (0xc00065c640) (5) Data frame sent\nI0512 10:46:12.730179 1546 log.go:172] (0xc00003a420) Data frame received for 5\nI0512 10:46:12.730196 1546 log.go:172] (0xc00065c640) (5) Data frame handling\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0512 10:46:12.731628 1546 log.go:172] (0xc00003a420) Data frame received for 1\nI0512 10:46:12.731650 1546 log.go:172] (0xc00069f680) (1) Data frame handling\nI0512 10:46:12.731697 1546 log.go:172] (0xc00069f680) (1) Data frame sent\nI0512 10:46:12.731760 1546 log.go:172] (0xc00003a420) (0xc00069f680) Stream removed, broadcasting: 1\nI0512 10:46:12.731785 1546 log.go:172] (0xc00003a420) Go away received\nI0512 10:46:12.732168 1546 log.go:172] (0xc00003a420) (0xc00069f680) Stream removed, broadcasting: 1\nI0512 10:46:12.732185 1546 log.go:172] (0xc00003a420) (0xc00066cf00) Stream removed, broadcasting: 3\nI0512 10:46:12.732193 1546 log.go:172] (0xc00003a420) (0xc00065c640) Stream removed, broadcasting: 5\n"
May 12 10:46:12.737: INFO: stdout: ""
May 12 10:46:12.738: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-535 execpod-affinityskl98 -- /bin/sh -x -c nc -zv -t -w 2 10.111.181.73 80'
May 12 10:46:12.944: INFO: stderr: "I0512 10:46:12.873664 1574 log.go:172] (0xc0009c3970) (0xc000387f40) Create stream\nI0512 10:46:12.873758 1574 log.go:172] (0xc0009c3970) (0xc000387f40) Stream added, broadcasting: 1\nI0512 10:46:12.876455 1574 log.go:172] (0xc0009c3970) Reply frame received for 
1\nI0512 10:46:12.876494 1574 log.go:172] (0xc0009c3970) (0xc00030a6e0) Create stream\nI0512 10:46:12.876504 1574 log.go:172] (0xc0009c3970) (0xc00030a6e0) Stream added, broadcasting: 3\nI0512 10:46:12.877711 1574 log.go:172] (0xc0009c3970) Reply frame received for 3\nI0512 10:46:12.877752 1574 log.go:172] (0xc0009c3970) (0xc0000c4000) Create stream\nI0512 10:46:12.877770 1574 log.go:172] (0xc0009c3970) (0xc0000c4000) Stream added, broadcasting: 5\nI0512 10:46:12.878579 1574 log.go:172] (0xc0009c3970) Reply frame received for 5\nI0512 10:46:12.936924 1574 log.go:172] (0xc0009c3970) Data frame received for 5\nI0512 10:46:12.936945 1574 log.go:172] (0xc0000c4000) (5) Data frame handling\n+ nc -zv -t -w 2 10.111.181.73 80\nConnection to 10.111.181.73 80 port [tcp/http] succeeded!\nI0512 10:46:12.936978 1574 log.go:172] (0xc0009c3970) Data frame received for 3\nI0512 10:46:12.937019 1574 log.go:172] (0xc00030a6e0) (3) Data frame handling\nI0512 10:46:12.937044 1574 log.go:172] (0xc0000c4000) (5) Data frame sent\nI0512 10:46:12.937053 1574 log.go:172] (0xc0009c3970) Data frame received for 5\nI0512 10:46:12.937058 1574 log.go:172] (0xc0000c4000) (5) Data frame handling\nI0512 10:46:12.938963 1574 log.go:172] (0xc0009c3970) Data frame received for 1\nI0512 10:46:12.938984 1574 log.go:172] (0xc000387f40) (1) Data frame handling\nI0512 10:46:12.939003 1574 log.go:172] (0xc000387f40) (1) Data frame sent\nI0512 10:46:12.939018 1574 log.go:172] (0xc0009c3970) (0xc000387f40) Stream removed, broadcasting: 1\nI0512 10:46:12.939040 1574 log.go:172] (0xc0009c3970) Go away received\nI0512 10:46:12.939411 1574 log.go:172] (0xc0009c3970) (0xc000387f40) Stream removed, broadcasting: 1\nI0512 10:46:12.939434 1574 log.go:172] (0xc0009c3970) (0xc00030a6e0) Stream removed, broadcasting: 3\nI0512 10:46:12.939443 1574 log.go:172] (0xc0009c3970) (0xc0000c4000) Stream removed, broadcasting: 5\n" May 12 10:46:12.944: INFO: stdout: "" May 12 10:46:12.944: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-535 execpod-affinityskl98 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.111.181.73:80/ ; done' May 12 10:46:13.241: INFO: stderr: "I0512 10:46:13.073764 1591 log.go:172] (0xc000cbb290) (0xc00072c640) Create stream\nI0512 10:46:13.073816 1591 log.go:172] (0xc000cbb290) (0xc00072c640) Stream added, broadcasting: 1\nI0512 10:46:13.078735 1591 log.go:172] (0xc000cbb290) Reply frame received for 1\nI0512 10:46:13.078782 1591 log.go:172] (0xc000cbb290) (0xc0006f1540) Create stream\nI0512 10:46:13.078795 1591 log.go:172] (0xc000cbb290) (0xc0006f1540) Stream added, broadcasting: 3\nI0512 10:46:13.080031 1591 log.go:172] (0xc000cbb290) Reply frame received for 3\nI0512 10:46:13.080082 1591 log.go:172] (0xc000cbb290) (0xc0006945a0) Create stream\nI0512 10:46:13.080099 1591 log.go:172] (0xc000cbb290) (0xc0006945a0) Stream added, broadcasting: 5\nI0512 10:46:13.081644 1591 log.go:172] (0xc000cbb290) Reply frame received for 5\nI0512 10:46:13.158004 1591 log.go:172] (0xc000cbb290) Data frame received for 5\nI0512 10:46:13.158049 1591 log.go:172] (0xc0006945a0) (5) Data frame handling\nI0512 10:46:13.158069 1591 log.go:172] (0xc0006945a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.181.73:80/\nI0512 10:46:13.158093 1591 log.go:172] (0xc000cbb290) Data frame received for 3\nI0512 10:46:13.158104 1591 log.go:172] (0xc0006f1540) (3) Data frame handling\nI0512 
10:46:13.158122 1591 log.go:172] (0xc0006f1540) (3) Data frame sent\nI0512 10:46:13.161251 1591 log.go:172] (0xc000cbb290) Data frame received for 3\nI0512 10:46:13.161275 1591 log.go:172] (0xc0006f1540) (3) Data frame handling\nI0512 10:46:13.161282 1591 log.go:172] (0xc0006f1540) (3) Data frame sent\nI0512 10:46:13.161774 1591 log.go:172] (0xc000cbb290) Data frame received for 3\nI0512 10:46:13.161807 1591 log.go:172] (0xc0006f1540) (3) Data frame handling\nI0512 10:46:13.161824 1591 log.go:172] (0xc0006f1540) (3) Data frame sent\nI0512 10:46:13.161852 1591 log.go:172] (0xc000cbb290) Data frame received for 5\nI0512 10:46:13.161867 1591 log.go:172] (0xc0006945a0) (5) Data frame handling\nI0512 10:46:13.161878 1591 log.go:172] (0xc0006945a0) (5) Data frame sent\nI0512 10:46:13.161891 1591 log.go:172] (0xc000cbb290) Data frame received for 5\nI0512 10:46:13.161900 1591 log.go:172] (0xc0006945a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.181.73:80/\nI0512 10:46:13.161927 1591 log.go:172] (0xc0006945a0) (5) Data frame sent\nI0512 10:46:13.166062 1591 log.go:172] (0xc000cbb290) Data frame received for 3\nI0512 10:46:13.166081 1591 log.go:172] (0xc0006f1540) (3) Data frame handling\nI0512 10:46:13.166092 1591 log.go:172] (0xc0006f1540) (3) Data frame sent\nI0512 10:46:13.167232 1591 log.go:172] (0xc000cbb290) Data frame received for 3\nI0512 10:46:13.167251 1591 log.go:172] (0xc0006f1540) (3) Data frame handling\nI0512 10:46:13.167264 1591 log.go:172] (0xc000cbb290) Data frame received for 5\nI0512 10:46:13.167284 1591 log.go:172] (0xc0006945a0) (5) Data frame handling\nI0512 10:46:13.167295 1591 log.go:172] (0xc0006945a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.181.73:80/\nI0512 10:46:13.167353 1591 log.go:172] (0xc0006f1540) (3) Data frame sent\nI0512 10:46:13.171599 1591 log.go:172] (0xc000cbb290) Data frame received for 3\nI0512 10:46:13.171617 1591 log.go:172] (0xc0006f1540) (3) Data frame handling\nI0512 10:46:13.171625 1591 log.go:172] (0xc0006f1540) (3) Data frame sent\nI0512 10:46:13.172005 1591 log.go:172] (0xc000cbb290) Data frame received for 5\nI0512 10:46:13.172021 1591 log.go:172] (0xc0006945a0) (5) Data frame handling\nI0512 10:46:13.172029 1591 log.go:172] (0xc0006945a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.181.73:80/\nI0512 10:46:13.172039 1591 log.go:172] (0xc000cbb290) Data frame received for 3\nI0512 10:46:13.172045 1591 log.go:172] (0xc0006f1540) (3) Data frame handling\nI0512 10:46:13.172051 1591 log.go:172] (0xc0006f1540) (3) Data frame sent\nI0512 10:46:13.176052 1591 log.go:172] (0xc000cbb290) Data frame received for 3\nI0512 10:46:13.176073 1591 log.go:172] (0xc0006f1540) (3) Data frame handling\nI0512 10:46:13.176091 1591 log.go:172] (0xc0006f1540) (3) Data frame sent\nI0512 10:46:13.176374 1591 log.go:172] (0xc000cbb290) Data frame received for 3\nI0512 10:46:13.176402 1591 log.go:172] (0xc0006f1540) (3) Data frame handling\nI0512 10:46:13.176416 1591 log.go:172] (0xc0006f1540) (3) Data frame sent\nI0512 10:46:13.176436 1591 log.go:172] (0xc000cbb290) Data frame received for 5\nI0512 10:46:13.176453 1591 log.go:172] (0xc0006945a0) (5) Data frame handling\nI0512 10:46:13.176476 1591 log.go:172] (0xc0006945a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.181.73:80/\nI0512 10:46:13.180276 1591 log.go:172] (0xc000cbb290) Data frame received for 3\nI0512 10:46:13.180293 1591 log.go:172] (0xc0006f1540) (3) Data frame 
handling\nI0512 10:46:13.180302 1591 log.go:172] (0xc0006f1540) (3) Data frame sent\nI0512 10:46:13.180838 1591 log.go:172] (0xc000cbb290) Data frame received for 3\nI0512 10:46:13.180866 1591 log.go:172] (0xc0006f1540) (3) Data frame handling\nI0512 10:46:13.180887 1591 log.go:172] (0xc0006f1540) (3) Data frame sent\nI0512 10:46:13.180917 1591 log.go:172] (0xc000cbb290) Data frame received for 5\nI0512 10:46:13.180931 1591 log.go:172] (0xc0006945a0) (5) Data frame handling\nI0512 10:46:13.180942 1591 log.go:172] (0xc0006945a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.181.73:80/\nI0512 10:46:13.184771 1591 log.go:172] (0xc000cbb290) Data frame received for 3\nI0512 10:46:13.184800 1591 log.go:172] (0xc0006f1540) (3) Data frame handling\nI0512 10:46:13.184815 1591 log.go:172] (0xc0006f1540) (3) Data frame sent\nI0512 10:46:13.185016 1591 log.go:172] (0xc000cbb290) Data frame received for 5\nI0512 10:46:13.185038 1591 log.go:172] (0xc0006945a0) (5) Data frame handling\nI0512 10:46:13.185053 1591 log.go:172] (0xc0006945a0) (5) Data frame sent\nI0512 10:46:13.185062 1591 log.go:172] (0xc000cbb290) Data frame received for 5\nI0512 10:46:13.185069 1591 log.go:172] (0xc0006945a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.181.73:80/\nI0512 10:46:13.185084 1591 log.go:172] (0xc0006945a0) (5) Data frame sent\nI0512 10:46:13.185091 1591 log.go:172] (0xc000cbb290) Data frame received for 3\nI0512 10:46:13.185098 1591 log.go:172] (0xc0006f1540) (3) Data frame handling\nI0512 10:46:13.185106 1591 log.go:172] (0xc0006f1540) (3) Data frame sent\nI0512 10:46:13.188605 1591 log.go:172] (0xc000cbb290) Data frame received for 3\nI0512 10:46:13.188634 1591 log.go:172] (0xc0006f1540) (3) Data frame handling\nI0512 10:46:13.188663 1591 log.go:172] (0xc0006f1540) (3) Data frame sent\nI0512 10:46:13.189631 1591 log.go:172] (0xc000cbb290) Data frame received for 3\nI0512 10:46:13.189664 1591 log.go:172] (0xc0006f1540) (3) Data frame handling\nI0512 10:46:13.189678 1591 log.go:172] (0xc0006f1540) (3) Data frame sent\nI0512 10:46:13.189700 1591 log.go:172] (0xc000cbb290) Data frame received for 5\nI0512 10:46:13.189711 1591 log.go:172] (0xc0006945a0) (5) Data frame handling\nI0512 10:46:13.189725 1591 log.go:172] (0xc0006945a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.181.73:80/\nI0512 10:46:13.193459 1591 log.go:172] (0xc000cbb290) Data frame received for 3\nI0512 10:46:13.193490 1591 log.go:172] (0xc0006f1540) (3) Data frame handling\nI0512 10:46:13.193513 1591 log.go:172] (0xc0006f1540) (3) Data frame sent\nI0512 10:46:13.193641 1591 log.go:172] (0xc000cbb290) Data frame received for 5\nI0512 10:46:13.193655 1591 log.go:172] (0xc0006945a0) (5) Data frame handling\nI0512 10:46:13.193662 1591 log.go:172] (0xc0006945a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.181.73:80/\nI0512 10:46:13.193756 1591 log.go:172] (0xc000cbb290) Data frame received for 3\nI0512 10:46:13.193772 1591 log.go:172] (0xc0006f1540) (3) Data frame handling\nI0512 10:46:13.193783 1591 log.go:172] (0xc0006f1540) (3) Data frame sent\nI0512 10:46:13.197660 1591 log.go:172] (0xc000cbb290) Data frame received for 3\nI0512 10:46:13.197678 1591 log.go:172] (0xc0006f1540) (3) Data frame handling\nI0512 10:46:13.197695 1591 log.go:172] (0xc0006f1540) (3) Data frame sent\nI0512 10:46:13.198245 1591 log.go:172] (0xc000cbb290) Data frame received for 3\nI0512 10:46:13.198286 1591 log.go:172] (0xc0006f1540) (3) Data 
frame handling\nI0512 10:46:13.198316 1591 log.go:172] (0xc0006f1540) (3) Data frame sent\nI0512 10:46:13.198342 1591 log.go:172] (0xc000cbb290) Data frame received for 5\nI0512 10:46:13.198359 1591 log.go:172] (0xc0006945a0) (5) Data frame handling\nI0512 10:46:13.198383 1591 log.go:172] (0xc0006945a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.181.73:80/\nI0512 10:46:13.202219 1591 log.go:172] (0xc000cbb290) Data frame received for 3\nI0512 10:46:13.202250 1591 log.go:172] (0xc0006f1540) (3) Data frame handling\nI0512 10:46:13.202277 1591 log.go:172] (0xc0006f1540) (3) Data frame sent\nI0512 10:46:13.202628 1591 log.go:172] (0xc000cbb290) Data frame received for 3\nI0512 10:46:13.202649 1591 log.go:172] (0xc0006f1540) (3) Data frame handling\nI0512 10:46:13.202673 1591 log.go:172] (0xc0006f1540) (3) Data frame sent\nI0512 10:46:13.202689 1591 log.go:172] (0xc000cbb290) Data frame received for 5\nI0512 10:46:13.202699 1591 log.go:172] (0xc0006945a0) (5) Data frame handling\nI0512 10:46:13.202708 1591 log.go:172] (0xc0006945a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.181.73:80/\nI0512 10:46:13.211295 1591 log.go:172] (0xc000cbb290) Data frame received for 3\nI0512 10:46:13.211317 1591 log.go:172] (0xc0006f1540) (3) Data frame handling\nI0512 10:46:13.211332 1591 log.go:172] (0xc0006f1540) (3) Data frame sent\nI0512 10:46:13.212019 1591 log.go:172] (0xc000cbb290) Data frame received for 3\nI0512 10:46:13.212047 1591 log.go:172] (0xc0006f1540) (3) Data frame handling\nI0512 10:46:13.212058 1591 log.go:172] (0xc0006f1540) (3) Data frame sent\nI0512 10:46:13.212070 1591 log.go:172] (0xc000cbb290) Data frame received for 5\nI0512 10:46:13.212078 1591 log.go:172] (0xc0006945a0) (5) Data frame handling\nI0512 10:46:13.212086 1591 log.go:172] (0xc0006945a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.181.73:80/\nI0512 10:46:13.215641 1591 log.go:172] (0xc000cbb290) Data frame received for 3\nI0512 10:46:13.215658 1591 log.go:172] (0xc0006f1540) (3) Data frame handling\nI0512 10:46:13.215679 1591 log.go:172] (0xc0006f1540) (3) Data frame sent\nI0512 10:46:13.216194 1591 log.go:172] (0xc000cbb290) Data frame received for 3\nI0512 10:46:13.216210 1591 log.go:172] (0xc0006f1540) (3) Data frame handling\nI0512 10:46:13.216220 1591 log.go:172] (0xc0006f1540) (3) Data frame sent\nI0512 10:46:13.216232 1591 log.go:172] (0xc000cbb290) Data frame received for 5\nI0512 10:46:13.216242 1591 log.go:172] (0xc0006945a0) (5) Data frame handling\nI0512 10:46:13.216251 1591 log.go:172] (0xc0006945a0) (5) Data frame sent\nI0512 10:46:13.216259 1591 log.go:172] (0xc000cbb290) Data frame received for 5\nI0512 10:46:13.216281 1591 log.go:172] (0xc0006945a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.181.73:80/\nI0512 10:46:13.216300 1591 log.go:172] (0xc0006945a0) (5) Data frame sent\nI0512 10:46:13.219523 1591 log.go:172] (0xc000cbb290) Data frame received for 3\nI0512 10:46:13.219540 1591 log.go:172] (0xc0006f1540) (3) Data frame handling\nI0512 10:46:13.219557 1591 log.go:172] (0xc0006f1540) (3) Data frame sent\nI0512 10:46:13.219895 1591 log.go:172] (0xc000cbb290) Data frame received for 5\nI0512 10:46:13.219905 1591 log.go:172] (0xc0006945a0) (5) Data frame handling\nI0512 10:46:13.219911 1591 log.go:172] (0xc0006945a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.181.73:80/\nI0512 10:46:13.219937 1591 log.go:172] (0xc000cbb290) Data frame received 
for 3\nI0512 10:46:13.219947 1591 log.go:172] (0xc0006f1540) (3) Data frame handling\nI0512 10:46:13.219956 1591 log.go:172] (0xc0006f1540) (3) Data frame sent\nI0512 10:46:13.225658 1591 log.go:172] (0xc000cbb290) Data frame received for 3\nI0512 10:46:13.225679 1591 log.go:172] (0xc0006f1540) (3) Data frame handling\nI0512 10:46:13.225691 1591 log.go:172] (0xc0006f1540) (3) Data frame sent\nI0512 10:46:13.226076 1591 log.go:172] (0xc000cbb290) Data frame received for 3\nI0512 10:46:13.226097 1591 log.go:172] (0xc0006f1540) (3) Data frame handling\nI0512 10:46:13.226106 1591 log.go:172] (0xc0006f1540) (3) Data frame sent\nI0512 10:46:13.226119 1591 log.go:172] (0xc000cbb290) Data frame received for 5\nI0512 10:46:13.226127 1591 log.go:172] (0xc0006945a0) (5) Data frame handling\nI0512 10:46:13.226135 1591 log.go:172] (0xc0006945a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.181.73:80/\nI0512 10:46:13.230764 1591 log.go:172] (0xc000cbb290) Data frame received for 3\nI0512 10:46:13.230777 1591 log.go:172] (0xc0006f1540) (3) Data frame handling\nI0512 10:46:13.230789 1591 log.go:172] (0xc0006f1540) (3) Data frame sent\nI0512 10:46:13.231265 1591 log.go:172] (0xc000cbb290) Data frame received for 3\nI0512 10:46:13.231283 1591 log.go:172] (0xc0006f1540) (3) Data frame handling\nI0512 10:46:13.231300 1591 log.go:172] (0xc0006f1540) (3) Data frame sent\nI0512 10:46:13.231314 1591 log.go:172] (0xc000cbb290) Data frame received for 5\nI0512 10:46:13.231326 1591 log.go:172] (0xc0006945a0) (5) Data frame handling\nI0512 10:46:13.231349 1591 log.go:172] (0xc0006945a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.181.73:80/\nI0512 10:46:13.234745 1591 log.go:172] (0xc000cbb290) Data frame received for 3\nI0512 10:46:13.234770 1591 log.go:172] (0xc0006f1540) (3) Data frame handling\nI0512 10:46:13.234794 1591 log.go:172] (0xc0006f1540) (3) Data frame sent\nI0512 10:46:13.235244 1591 log.go:172] (0xc000cbb290) Data frame received for 5\nI0512 10:46:13.235270 1591 log.go:172] (0xc000cbb290) Data frame received for 3\nI0512 10:46:13.235287 1591 log.go:172] (0xc0006f1540) (3) Data frame handling\nI0512 10:46:13.235317 1591 log.go:172] (0xc0006945a0) (5) Data frame handling\nI0512 10:46:13.236659 1591 log.go:172] (0xc000cbb290) Data frame received for 1\nI0512 10:46:13.236674 1591 log.go:172] (0xc00072c640) (1) Data frame handling\nI0512 10:46:13.236686 1591 log.go:172] (0xc00072c640) (1) Data frame sent\nI0512 10:46:13.236702 1591 log.go:172] (0xc000cbb290) (0xc00072c640) Stream removed, broadcasting: 1\nI0512 10:46:13.236723 1591 log.go:172] (0xc000cbb290) Go away received\nI0512 10:46:13.237049 1591 log.go:172] (0xc000cbb290) (0xc00072c640) Stream removed, broadcasting: 1\nI0512 10:46:13.237067 1591 log.go:172] (0xc000cbb290) (0xc0006f1540) Stream removed, broadcasting: 3\nI0512 10:46:13.237074 1591 log.go:172] (0xc000cbb290) (0xc0006945a0) Stream removed, broadcasting: 5\n" May 12 10:46:13.242: INFO: stdout: "\naffinity-clusterip-fck2k\naffinity-clusterip-fck2k\naffinity-clusterip-fck2k\naffinity-clusterip-fck2k\naffinity-clusterip-fck2k\naffinity-clusterip-fck2k\naffinity-clusterip-fck2k\naffinity-clusterip-fck2k\naffinity-clusterip-fck2k\naffinity-clusterip-fck2k\naffinity-clusterip-fck2k\naffinity-clusterip-fck2k\naffinity-clusterip-fck2k\naffinity-clusterip-fck2k\naffinity-clusterip-fck2k\naffinity-clusterip-fck2k" May 12 10:46:13.242: INFO: Received response from host: May 12 10:46:13.242: INFO: Received response from host: 
affinity-clusterip-fck2k
May 12 10:46:13.242: INFO: Received response from host: affinity-clusterip-fck2k
May 12 10:46:13.242: INFO: Received response from host: affinity-clusterip-fck2k
May 12 10:46:13.242: INFO: Received response from host: affinity-clusterip-fck2k
May 12 10:46:13.242: INFO: Received response from host: affinity-clusterip-fck2k
May 12 10:46:13.242: INFO: Received response from host: affinity-clusterip-fck2k
May 12 10:46:13.242: INFO: Received response from host: affinity-clusterip-fck2k
May 12 10:46:13.242: INFO: Received response from host: affinity-clusterip-fck2k
May 12 10:46:13.242: INFO: Received response from host: affinity-clusterip-fck2k
May 12 10:46:13.242: INFO: Received response from host: affinity-clusterip-fck2k
May 12 10:46:13.242: INFO: Received response from host: affinity-clusterip-fck2k
May 12 10:46:13.242: INFO: Received response from host: affinity-clusterip-fck2k
May 12 10:46:13.242: INFO: Received response from host: affinity-clusterip-fck2k
May 12 10:46:13.242: INFO: Received response from host: affinity-clusterip-fck2k
May 12 10:46:13.242: INFO: Received response from host: affinity-clusterip-fck2k
May 12 10:46:13.242: INFO: Received response from host: affinity-clusterip-fck2k
May 12 10:46:13.242: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip in namespace services-535, will wait for the garbage collector to delete the pods
May 12 10:46:13.831: INFO: Deleting ReplicationController affinity-clusterip took: 207.04043ms
May 12 10:46:14.631: INFO: Terminating ReplicationController affinity-clusterip pods took: 800.28238ms
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 12 10:46:28.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-535" for this suite.
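[Note] The sixteen identical "affinity-clusterip-fck2k" responses above are the assertion of this test: with sessionAffinity: ClientIP, kube-proxy pins each client IP to a single backend pod instead of spreading requests across the three replicas. A sketch of such a service follows; the names and ports are illustrative, and timeoutSeconds: 10800 is the default stickiness window:

  # Sketch only: ClientIP affinity keeps a given client on one backend.
  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Service
  metadata:
    name: affinity-clusterip
  spec:
    sessionAffinity: ClientIP        # default is None, which spreads requests
    sessionAffinityConfig:
      clientIP:
        timeoutSeconds: 10800        # affinity expires after 3h of inactivity
    selector:
      name: affinity-clusterip       # hypothetical backend pod label
    ports:
    - port: 80
      targetPort: 8080               # assumed backend port
  EOF
  # From a single pod, repeated requests should keep naming the same backend:
  for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://affinity-clusterip:80/; done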
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:41.295 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":90,"skipped":1393,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:46:29.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-1678 STEP: creating a selector STEP: Creating the service pods in kubernetes May 12 10:46:30.684: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 12 10:46:31.408: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 12 10:46:33.597: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 12 10:46:35.842: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 12 10:46:37.459: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 12 10:46:39.483: INFO: The status of Pod netserver-0 is Running (Ready = false) May 12 10:46:41.412: INFO: The status of Pod netserver-0 is Running (Ready = false) May 12 10:46:43.561: INFO: The status of Pod netserver-0 is Running (Ready = false) May 12 10:46:45.519: INFO: The status of Pod netserver-0 is Running (Ready = false) May 12 10:46:47.543: INFO: The status of Pod netserver-0 is Running (Ready = false) May 12 10:46:49.414: INFO: The status of Pod netserver-0 is Running (Ready = false) May 12 10:46:51.563: INFO: The status of Pod netserver-0 is Running (Ready = false) May 12 10:46:53.412: INFO: The status of Pod netserver-0 is Running (Ready = false) May 12 10:46:55.567: INFO: The status of Pod netserver-0 is Running (Ready = false) May 12 10:46:57.477: INFO: The status of Pod netserver-0 is Running (Ready = true) May 12 10:46:57.481: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 12 10:47:01.583: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.223:8080/dial?request=hostname&protocol=udp&host=10.244.1.222&port=8081&tries=1'] Namespace:pod-network-test-1678 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:47:01.583: INFO: >>> 
kubeConfig: /root/.kube/config I0512 10:47:01.614626 7 log.go:172] (0xc002a7f3f0) (0xc001388f00) Create stream I0512 10:47:01.614659 7 log.go:172] (0xc002a7f3f0) (0xc001388f00) Stream added, broadcasting: 1 I0512 10:47:01.619344 7 log.go:172] (0xc002a7f3f0) Reply frame received for 1 I0512 10:47:01.619401 7 log.go:172] (0xc002a7f3f0) (0xc0018ba280) Create stream I0512 10:47:01.619428 7 log.go:172] (0xc002a7f3f0) (0xc0018ba280) Stream added, broadcasting: 3 I0512 10:47:01.622546 7 log.go:172] (0xc002a7f3f0) Reply frame received for 3 I0512 10:47:01.622596 7 log.go:172] (0xc002a7f3f0) (0xc0018ba320) Create stream I0512 10:47:01.622610 7 log.go:172] (0xc002a7f3f0) (0xc0018ba320) Stream added, broadcasting: 5 I0512 10:47:01.623734 7 log.go:172] (0xc002a7f3f0) Reply frame received for 5 I0512 10:47:01.727165 7 log.go:172] (0xc002a7f3f0) Data frame received for 3 I0512 10:47:01.727193 7 log.go:172] (0xc0018ba280) (3) Data frame handling I0512 10:47:01.727207 7 log.go:172] (0xc0018ba280) (3) Data frame sent I0512 10:47:01.727710 7 log.go:172] (0xc002a7f3f0) Data frame received for 5 I0512 10:47:01.727734 7 log.go:172] (0xc0018ba320) (5) Data frame handling I0512 10:47:01.727893 7 log.go:172] (0xc002a7f3f0) Data frame received for 3 I0512 10:47:01.727930 7 log.go:172] (0xc0018ba280) (3) Data frame handling I0512 10:47:01.729453 7 log.go:172] (0xc002a7f3f0) Data frame received for 1 I0512 10:47:01.729472 7 log.go:172] (0xc001388f00) (1) Data frame handling I0512 10:47:01.729501 7 log.go:172] (0xc001388f00) (1) Data frame sent I0512 10:47:01.729518 7 log.go:172] (0xc002a7f3f0) (0xc001388f00) Stream removed, broadcasting: 1 I0512 10:47:01.729534 7 log.go:172] (0xc002a7f3f0) Go away received I0512 10:47:01.729638 7 log.go:172] (0xc002a7f3f0) (0xc001388f00) Stream removed, broadcasting: 1 I0512 10:47:01.729655 7 log.go:172] (0xc002a7f3f0) (0xc0018ba280) Stream removed, broadcasting: 3 I0512 10:47:01.729663 7 log.go:172] (0xc002a7f3f0) (0xc0018ba320) Stream removed, broadcasting: 5 May 12 10:47:01.729: INFO: Waiting for responses: map[] May 12 10:47:01.732: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.223:8080/dial?request=hostname&protocol=udp&host=10.244.2.60&port=8081&tries=1'] Namespace:pod-network-test-1678 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:47:01.732: INFO: >>> kubeConfig: /root/.kube/config I0512 10:47:01.757789 7 log.go:172] (0xc002a7fef0) (0xc001389a40) Create stream I0512 10:47:01.757824 7 log.go:172] (0xc002a7fef0) (0xc001389a40) Stream added, broadcasting: 1 I0512 10:47:01.759551 7 log.go:172] (0xc002a7fef0) Reply frame received for 1 I0512 10:47:01.759584 7 log.go:172] (0xc002a7fef0) (0xc0014ca0a0) Create stream I0512 10:47:01.759594 7 log.go:172] (0xc002a7fef0) (0xc0014ca0a0) Stream added, broadcasting: 3 I0512 10:47:01.760354 7 log.go:172] (0xc002a7fef0) Reply frame received for 3 I0512 10:47:01.760387 7 log.go:172] (0xc002a7fef0) (0xc0018ba460) Create stream I0512 10:47:01.760399 7 log.go:172] (0xc002a7fef0) (0xc0018ba460) Stream added, broadcasting: 5 I0512 10:47:01.761437 7 log.go:172] (0xc002a7fef0) Reply frame received for 5 I0512 10:47:01.818697 7 log.go:172] (0xc002a7fef0) Data frame received for 3 I0512 10:47:01.818727 7 log.go:172] (0xc0014ca0a0) (3) Data frame handling I0512 10:47:01.818747 7 log.go:172] (0xc0014ca0a0) (3) Data frame sent I0512 10:47:01.819209 7 log.go:172] (0xc002a7fef0) Data frame received for 5 I0512 10:47:01.819225 7 log.go:172] 
(0xc0018ba460) (5) Data frame handling I0512 10:47:01.819270 7 log.go:172] (0xc002a7fef0) Data frame received for 3 I0512 10:47:01.819294 7 log.go:172] (0xc0014ca0a0) (3) Data frame handling I0512 10:47:01.820669 7 log.go:172] (0xc002a7fef0) Data frame received for 1 I0512 10:47:01.820685 7 log.go:172] (0xc001389a40) (1) Data frame handling I0512 10:47:01.820698 7 log.go:172] (0xc001389a40) (1) Data frame sent I0512 10:47:01.820720 7 log.go:172] (0xc002a7fef0) (0xc001389a40) Stream removed, broadcasting: 1 I0512 10:47:01.820746 7 log.go:172] (0xc002a7fef0) Go away received I0512 10:47:01.820823 7 log.go:172] (0xc002a7fef0) (0xc001389a40) Stream removed, broadcasting: 1 I0512 10:47:01.820842 7 log.go:172] (0xc002a7fef0) (0xc0014ca0a0) Stream removed, broadcasting: 3 I0512 10:47:01.820851 7 log.go:172] (0xc002a7fef0) (0xc0018ba460) Stream removed, broadcasting: 5 May 12 10:47:01.820: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:47:01.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1678" for this suite. • [SLOW TEST:31.996 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":288,"completed":91,"skipped":1424,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:47:01.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating all guestbook components May 12 10:47:01.971: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend May 12 10:47:01.972: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1351' May 12 10:47:02.379: INFO: stderr: "" May 12 10:47:02.379: INFO: stdout: "service/agnhost-slave created\n" May 12 10:47:02.380: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend May 12 
10:47:02.380: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1351' May 12 10:47:02.707: INFO: stderr: "" May 12 10:47:02.707: INFO: stdout: "service/agnhost-master created\n" May 12 10:47:02.707: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 12 10:47:02.708: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1351' May 12 10:47:03.087: INFO: stderr: "" May 12 10:47:03.087: INFO: stdout: "service/frontend created\n" May 12 10:47:03.088: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 May 12 10:47:03.088: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1351' May 12 10:47:03.456: INFO: stderr: "" May 12 10:47:03.456: INFO: stdout: "deployment.apps/frontend created\n" May 12 10:47:03.456: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 12 10:47:03.456: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1351' May 12 10:47:03.987: INFO: stderr: "" May 12 10:47:03.987: INFO: stdout: "deployment.apps/agnhost-master created\n" May 12 10:47:03.987: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 12 10:47:03.987: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1351' May 12 10:47:05.374: INFO: stderr: "" May 12 10:47:05.374: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app May 12 10:47:05.374: INFO: Waiting for all frontend pods to be Running. May 12 10:47:20.424: INFO: Waiting for frontend to serve content. May 12 10:47:20.596: INFO: Trying to add a new entry to the guestbook. May 12 10:47:20.649: INFO: Verifying that added entry can be retrieved. 
STEP: using delete to clean up resources May 12 10:47:21.218: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1351' May 12 10:47:23.536: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 10:47:23.536: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources May 12 10:47:23.536: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1351' May 12 10:47:24.728: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 10:47:24.728: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 12 10:47:24.729: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1351' May 12 10:47:25.559: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 10:47:25.559: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 12 10:47:25.559: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1351' May 12 10:47:25.885: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 10:47:25.885: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 12 10:47:25.885: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1351' May 12 10:47:26.040: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 10:47:26.040: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 12 10:47:26.040: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1351' May 12 10:47:26.825: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 10:47:26.826: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:47:26.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1351" for this suite. 
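Each guestbook object above was created by piping its manifest to kubectl create -f - and torn down the same way with delete --grace-period=0 --force, which explains the repeated warnings about immediate deletion. Outside a test harness the same resources would typically live in one multi-document file; a sketch condensed to two of the six objects printed above:

# guestbook.yaml (abbreviated): create or delete all objects in one pass
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: ["guestbook", "--backend-port", "6379"]
        ports:
        - containerPort: 80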
• [SLOW TEST:25.895 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":288,"completed":92,"skipped":1441,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:47:27.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 12 10:47:28.765: INFO: Waiting up to 5m0s for pod "downward-api-78402bd6-82a9-4ac1-b2bf-1bb0a713ffe0" in namespace "downward-api-1462" to be "Succeeded or Failed" May 12 10:47:28.989: INFO: Pod "downward-api-78402bd6-82a9-4ac1-b2bf-1bb0a713ffe0": Phase="Pending", Reason="", readiness=false. Elapsed: 223.693832ms May 12 10:47:30.992: INFO: Pod "downward-api-78402bd6-82a9-4ac1-b2bf-1bb0a713ffe0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.226692508s May 12 10:47:33.089: INFO: Pod "downward-api-78402bd6-82a9-4ac1-b2bf-1bb0a713ffe0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323086328s May 12 10:47:35.240: INFO: Pod "downward-api-78402bd6-82a9-4ac1-b2bf-1bb0a713ffe0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.474158825s May 12 10:47:37.277: INFO: Pod "downward-api-78402bd6-82a9-4ac1-b2bf-1bb0a713ffe0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.511848568s May 12 10:47:39.652: INFO: Pod "downward-api-78402bd6-82a9-4ac1-b2bf-1bb0a713ffe0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.886602978s STEP: Saw pod success May 12 10:47:39.652: INFO: Pod "downward-api-78402bd6-82a9-4ac1-b2bf-1bb0a713ffe0" satisfied condition "Succeeded or Failed" May 12 10:47:39.676: INFO: Trying to get logs from node latest-worker pod downward-api-78402bd6-82a9-4ac1-b2bf-1bb0a713ffe0 container dapi-container: STEP: delete the pod May 12 10:47:39.803: INFO: Waiting for pod downward-api-78402bd6-82a9-4ac1-b2bf-1bb0a713ffe0 to disappear May 12 10:47:39.807: INFO: Pod downward-api-78402bd6-82a9-4ac1-b2bf-1bb0a713ffe0 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:47:39.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1462" for this suite. 
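The dapi-container above receives the pod's own UID through the downward API, which maps object metadata onto environment variables. A minimal sketch of the pattern (pod name, image, and variable name are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid  # the field this test verifies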
• [SLOW TEST:12.110 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":288,"completed":93,"skipped":1466,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:47:39.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 10:47:40.845: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 10:47:43.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877261, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877261, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877261, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877260, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:47:45.352: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877261, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877261, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877261, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877260, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:47:47.273: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877261, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877261, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877261, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877260, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 10:47:50.346: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:47:53.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7535" for this suite. STEP: Destroying namespace "webhook-7535-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:14.895 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":288,"completed":94,"skipped":1473,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:47:54.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 12 10:47:56.115: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 12 10:47:56.423: INFO: Waiting for terminating namespaces to be deleted... 
May 12 10:47:56.425: INFO: Logging pods the apiserver thinks are on node latest-worker before test May 12 10:47:56.431: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container status recorded) May 12 10:47:56.431: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 12 10:47:56.431: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container status recorded) May 12 10:47:56.431: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 12 10:47:56.431: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 12 10:47:56.431: INFO: Container kindnet-cni ready: true, restart count 0 May 12 10:47:56.431: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 12 10:47:56.431: INFO: Container kube-proxy ready: true, restart count 0 May 12 10:47:56.431: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test May 12 10:47:56.435: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container status recorded) May 12 10:47:56.436: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 12 10:47:56.436: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container status recorded) May 12 10:47:56.436: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 12 10:47:56.436: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 12 10:47:56.436: INFO: Container kindnet-cni ready: true, restart count 0 May 12 10:47:56.436: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 12 10:47:56.436: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 May 12 10:47:57.896: INFO: Pod rally-c184502e-30nwopzm requesting resource cpu=0m on Node latest-worker May 12 10:47:57.896: INFO: Pod terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 requesting resource cpu=0m on Node latest-worker2 May 12 10:47:57.896: INFO: Pod kindnet-hg2tf requesting resource cpu=100m on Node latest-worker May 12 10:47:57.896: INFO: Pod kindnet-jl4dn requesting resource cpu=100m on Node latest-worker2 May 12 10:47:57.896: INFO: Pod kube-proxy-c8n27 requesting resource cpu=0m on Node latest-worker May 12 10:47:57.896: INFO: Pod kube-proxy-pcmmp requesting resource cpu=0m on Node latest-worker2 STEP: Starting Pods to consume most of the cluster CPU. May 12 10:47:57.896: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 May 12 10:47:58.145: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker STEP: Creating another pod that requires an unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-38eabcfe-d732-42a5-a008-301247f65e13.160e422ccb289a45], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3851/filler-pod-38eabcfe-d732-42a5-a008-301247f65e13 to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-38eabcfe-d732-42a5-a008-301247f65e13.160e422da35e2fbc], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-38eabcfe-d732-42a5-a008-301247f65e13.160e422e7be1a88b], Reason = [Created], Message = [Created container filler-pod-38eabcfe-d732-42a5-a008-301247f65e13] STEP: Considering event: Type = [Normal], Name = [filler-pod-38eabcfe-d732-42a5-a008-301247f65e13.160e422ea58e09d9], Reason = [Started], Message = [Started container filler-pod-38eabcfe-d732-42a5-a008-301247f65e13] STEP: Considering event: Type = [Normal], Name = [filler-pod-dbbd0b61-305d-45c6-9002-e3ad453749e7.160e422cc61881ce], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3851/filler-pod-dbbd0b61-305d-45c6-9002-e3ad453749e7 to latest-worker2] STEP: Considering event: Type = [Warning], Name = [filler-pod-dbbd0b61-305d-45c6-9002-e3ad453749e7.160e422d0d3a265d], Reason = [FailedMount], Message = [MountVolume.SetUp failed for volume "default-token-qckj9" : failed to sync secret cache: timed out waiting for the condition] STEP: Considering event: Type = [Normal], Name = [filler-pod-dbbd0b61-305d-45c6-9002-e3ad453749e7.160e422dd4068951], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-dbbd0b61-305d-45c6-9002-e3ad453749e7.160e422e8ac3cfe1], Reason = [Created], Message = [Created container filler-pod-dbbd0b61-305d-45c6-9002-e3ad453749e7] STEP: Considering event: Type = [Normal], Name = [filler-pod-dbbd0b61-305d-45c6-9002-e3ad453749e7.160e422eafcaddc6], Reason = [Started], Message = [Started container filler-pod-dbbd0b61-305d-45c6-9002-e3ad453749e7] STEP: Considering event: Type = [Warning], Name = [additional-pod.160e422f381717d6], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.160e422f47dc39ca], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:48:10.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3851" for this suite. 
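The FailedScheduling events above are produced by ordinary resource accounting: the filler pods request nearly all allocatable CPU on each worker, so no node can satisfy the final pod's request. The mechanism is nothing more than a CPU request in the pod spec; a minimal sketch (the request size is an illustrative assumption):

apiVersion: v1
kind: Pod
metadata:
  name: additional-pod   # mirrors the pod name in the events above
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
    resources:
      requests:
        cpu: 600m        # any request above the remaining node capacity stays Pending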
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.204 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":288,"completed":95,"skipped":1493,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:48:10.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args May 12 10:48:11.154: INFO: Waiting up to 5m0s for pod "var-expansion-27118d29-e86a-4def-9fca-5830933d6161" in namespace "var-expansion-5657" to be "Succeeded or Failed" May 12 10:48:11.172: INFO: Pod "var-expansion-27118d29-e86a-4def-9fca-5830933d6161": Phase="Pending", Reason="", readiness=false. Elapsed: 18.028168ms May 12 10:48:13.413: INFO: Pod "var-expansion-27118d29-e86a-4def-9fca-5830933d6161": Phase="Pending", Reason="", readiness=false. Elapsed: 2.259241521s May 12 10:48:15.416: INFO: Pod "var-expansion-27118d29-e86a-4def-9fca-5830933d6161": Phase="Pending", Reason="", readiness=false. Elapsed: 4.262431596s May 12 10:48:17.499: INFO: Pod "var-expansion-27118d29-e86a-4def-9fca-5830933d6161": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.345384897s STEP: Saw pod success May 12 10:48:17.499: INFO: Pod "var-expansion-27118d29-e86a-4def-9fca-5830933d6161" satisfied condition "Succeeded or Failed" May 12 10:48:17.528: INFO: Trying to get logs from node latest-worker pod var-expansion-27118d29-e86a-4def-9fca-5830933d6161 container dapi-container: STEP: delete the pod May 12 10:48:17.803: INFO: Waiting for pod var-expansion-27118d29-e86a-4def-9fca-5830933d6161 to disappear May 12 10:48:18.028: INFO: Pod var-expansion-27118d29-e86a-4def-9fca-5830933d6161 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:48:18.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5657" for this suite. 
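Substituting values in a container's args relies on the $(VAR) syntax, which the kubelet expands from variables declared in the same container's env before the process starts. A minimal sketch (names and values are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c"]
    args: ["echo $(GREETING)"]   # expanded to the env value before the shell runs
    env:
    - name: GREETING
      value: "hello from var expansion"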
• [SLOW TEST:7.151 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":288,"completed":96,"skipped":1552,"failed":0} SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:48:18.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-5469 STEP: creating a selector STEP: Creating the service pods in kubernetes May 12 10:48:18.842: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 12 10:48:19.385: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 12 10:48:21.468: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 12 10:48:23.441: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 12 10:48:25.425: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 12 10:48:27.387: INFO: The status of Pod netserver-0 is Running (Ready = false) May 12 10:48:29.388: INFO: The status of Pod netserver-0 is Running (Ready = false) May 12 10:48:31.388: INFO: The status of Pod netserver-0 is Running (Ready = false) May 12 10:48:33.388: INFO: The status of Pod netserver-0 is Running (Ready = false) May 12 10:48:35.406: INFO: The status of Pod netserver-0 is Running (Ready = false) May 12 10:48:37.580: INFO: The status of Pod netserver-0 is Running (Ready = true) May 12 10:48:37.630: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 12 10:48:43.716: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.67:8080/dial?request=hostname&protocol=http&host=10.244.1.230&port=8080&tries=1'] Namespace:pod-network-test-5469 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:48:43.716: INFO: >>> kubeConfig: /root/.kube/config I0512 10:48:43.747492 7 log.go:172] (0xc0034ee0b0) (0xc002d0e3c0) Create stream I0512 10:48:43.747520 7 log.go:172] (0xc0034ee0b0) (0xc002d0e3c0) Stream added, broadcasting: 1 I0512 10:48:43.749552 7 log.go:172] (0xc0034ee0b0) Reply frame received for 1 I0512 10:48:43.749581 7 log.go:172] (0xc0034ee0b0) (0xc0011ad720) Create stream I0512 10:48:43.749595 7 log.go:172] (0xc0034ee0b0) (0xc0011ad720) Stream added, broadcasting: 3 I0512 10:48:43.750631 7 log.go:172] 
(0xc0034ee0b0) Reply frame received for 3 I0512 10:48:43.750679 7 log.go:172] (0xc0034ee0b0) (0xc0010374a0) Create stream I0512 10:48:43.750691 7 log.go:172] (0xc0034ee0b0) (0xc0010374a0) Stream added, broadcasting: 5 I0512 10:48:43.751512 7 log.go:172] (0xc0034ee0b0) Reply frame received for 5 I0512 10:48:43.829796 7 log.go:172] (0xc0034ee0b0) Data frame received for 3 I0512 10:48:43.829835 7 log.go:172] (0xc0011ad720) (3) Data frame handling I0512 10:48:43.829860 7 log.go:172] (0xc0011ad720) (3) Data frame sent I0512 10:48:43.829980 7 log.go:172] (0xc0034ee0b0) Data frame received for 3 I0512 10:48:43.830025 7 log.go:172] (0xc0011ad720) (3) Data frame handling I0512 10:48:43.830941 7 log.go:172] (0xc0034ee0b0) Data frame received for 5 I0512 10:48:43.830976 7 log.go:172] (0xc0010374a0) (5) Data frame handling I0512 10:48:43.832572 7 log.go:172] (0xc0034ee0b0) Data frame received for 1 I0512 10:48:43.832732 7 log.go:172] (0xc002d0e3c0) (1) Data frame handling I0512 10:48:43.832756 7 log.go:172] (0xc002d0e3c0) (1) Data frame sent I0512 10:48:43.832771 7 log.go:172] (0xc0034ee0b0) (0xc002d0e3c0) Stream removed, broadcasting: 1 I0512 10:48:43.832793 7 log.go:172] (0xc0034ee0b0) Go away received I0512 10:48:43.833031 7 log.go:172] (0xc0034ee0b0) (0xc002d0e3c0) Stream removed, broadcasting: 1 I0512 10:48:43.833052 7 log.go:172] (0xc0034ee0b0) (0xc0011ad720) Stream removed, broadcasting: 3 I0512 10:48:43.833062 7 log.go:172] (0xc0034ee0b0) (0xc0010374a0) Stream removed, broadcasting: 5 May 12 10:48:43.833: INFO: Waiting for responses: map[] May 12 10:48:43.873: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.67:8080/dial?request=hostname&protocol=http&host=10.244.2.66&port=8080&tries=1'] Namespace:pod-network-test-5469 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:48:43.873: INFO: >>> kubeConfig: /root/.kube/config I0512 10:48:43.906761 7 log.go:172] (0xc00092cbb0) (0xc00111e000) Create stream I0512 10:48:43.906787 7 log.go:172] (0xc00092cbb0) (0xc00111e000) Stream added, broadcasting: 1 I0512 10:48:43.908781 7 log.go:172] (0xc00092cbb0) Reply frame received for 1 I0512 10:48:43.908810 7 log.go:172] (0xc00092cbb0) (0xc00111e3c0) Create stream I0512 10:48:43.908820 7 log.go:172] (0xc00092cbb0) (0xc00111e3c0) Stream added, broadcasting: 3 I0512 10:48:43.910059 7 log.go:172] (0xc00092cbb0) Reply frame received for 3 I0512 10:48:43.910099 7 log.go:172] (0xc00092cbb0) (0xc00111e460) Create stream I0512 10:48:43.910123 7 log.go:172] (0xc00092cbb0) (0xc00111e460) Stream added, broadcasting: 5 I0512 10:48:43.911186 7 log.go:172] (0xc00092cbb0) Reply frame received for 5 I0512 10:48:43.970986 7 log.go:172] (0xc00092cbb0) Data frame received for 3 I0512 10:48:43.971009 7 log.go:172] (0xc00111e3c0) (3) Data frame handling I0512 10:48:43.971030 7 log.go:172] (0xc00111e3c0) (3) Data frame sent I0512 10:48:43.971452 7 log.go:172] (0xc00092cbb0) Data frame received for 3 I0512 10:48:43.971464 7 log.go:172] (0xc00111e3c0) (3) Data frame handling I0512 10:48:43.971481 7 log.go:172] (0xc00092cbb0) Data frame received for 5 I0512 10:48:43.971488 7 log.go:172] (0xc00111e460) (5) Data frame handling I0512 10:48:43.973072 7 log.go:172] (0xc00092cbb0) Data frame received for 1 I0512 10:48:43.973100 7 log.go:172] (0xc00111e000) (1) Data frame handling I0512 10:48:43.973259 7 log.go:172] (0xc00111e000) (1) Data frame sent I0512 10:48:43.973326 7 log.go:172] (0xc00092cbb0) (0xc00111e000) Stream removed, 
broadcasting: 1 I0512 10:48:43.973354 7 log.go:172] (0xc00092cbb0) Go away received I0512 10:48:43.973453 7 log.go:172] (0xc00092cbb0) (0xc00111e000) Stream removed, broadcasting: 1 I0512 10:48:43.973479 7 log.go:172] (0xc00092cbb0) (0xc00111e3c0) Stream removed, broadcasting: 3 I0512 10:48:43.973495 7 log.go:172] (0xc00092cbb0) (0xc00111e460) Stream removed, broadcasting: 5 May 12 10:48:43.973: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:48:43.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5469" for this suite. • [SLOW TEST:25.895 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":288,"completed":97,"skipped":1554,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:48:43.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:48:51.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9729" for this suite. • [SLOW TEST:8.038 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":288,"completed":98,"skipped":1562,"failed":0} [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:48:52.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-59d9515a-c031-424a-8f4e-0b4d184f4554 in namespace container-probe-2036 May 12 10:48:59.173: INFO: Started pod test-webserver-59d9515a-c031-424a-8f4e-0b4d184f4554 in namespace container-probe-2036 STEP: checking the pod's current state and verifying that restartCount is present May 12 10:48:59.175: INFO: Initial restart count of pod test-webserver-59d9515a-c031-424a-8f4e-0b4d184f4554 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:53:01.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2036" for this suite. • [SLOW TEST:250.246 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":99,"skipped":1562,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:53:02.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 12 10:53:02.766: INFO: Waiting up to 5m0s for pod "downwardapi-volume-045d9638-3547-41a5-ba77-02fd8709d15e" in namespace "downward-api-2764" to be "Succeeded or Failed" May 12 10:53:02.950: INFO: Pod 
"downwardapi-volume-045d9638-3547-41a5-ba77-02fd8709d15e": Phase="Pending", Reason="", readiness=false. Elapsed: 183.648402ms May 12 10:53:04.953: INFO: Pod "downwardapi-volume-045d9638-3547-41a5-ba77-02fd8709d15e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.187337442s May 12 10:53:06.958: INFO: Pod "downwardapi-volume-045d9638-3547-41a5-ba77-02fd8709d15e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.192155553s May 12 10:53:08.963: INFO: Pod "downwardapi-volume-045d9638-3547-41a5-ba77-02fd8709d15e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.196944182s STEP: Saw pod success May 12 10:53:08.963: INFO: Pod "downwardapi-volume-045d9638-3547-41a5-ba77-02fd8709d15e" satisfied condition "Succeeded or Failed" May 12 10:53:08.967: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-045d9638-3547-41a5-ba77-02fd8709d15e container client-container: STEP: delete the pod May 12 10:53:09.034: INFO: Waiting for pod downwardapi-volume-045d9638-3547-41a5-ba77-02fd8709d15e to disappear May 12 10:53:09.042: INFO: Pod downwardapi-volume-045d9638-3547-41a5-ba77-02fd8709d15e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:53:09.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2764" for this suite. • [SLOW TEST:6.783 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":100,"skipped":1571,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:53:09.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 12 10:53:16.938: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:53:17.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"container-runtime-2430" for this suite. • [SLOW TEST:7.958 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":101,"skipped":1591,"failed":0} SSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:53:17.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-114c0800-4338-4d05-8989-0c8a63b52516 STEP: Creating secret with name s-test-opt-upd-c98efa8f-6d2f-43c8-84c9-743375b1f984 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-114c0800-4338-4d05-8989-0c8a63b52516 STEP: Updating secret s-test-opt-upd-c98efa8f-6d2f-43c8-84c9-743375b1f984 STEP: Creating secret with name s-test-opt-create-d473b02c-24ba-4469-808a-41404d7957ad STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:53:27.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2847" for this suite. 
• [SLOW TEST:10.241 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":102,"skipped":1594,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:53:27.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs May 12 10:53:27.387: INFO: Waiting up to 5m0s for pod "pod-9d3b5f85-da6f-480a-8a4c-050402d7f671" in namespace "emptydir-3676" to be "Succeeded or Failed" May 12 10:53:27.422: INFO: Pod "pod-9d3b5f85-da6f-480a-8a4c-050402d7f671": Phase="Pending", Reason="", readiness=false. Elapsed: 35.065754ms May 12 10:53:29.577: INFO: Pod "pod-9d3b5f85-da6f-480a-8a4c-050402d7f671": Phase="Pending", Reason="", readiness=false. Elapsed: 2.190464916s May 12 10:53:31.582: INFO: Pod "pod-9d3b5f85-da6f-480a-8a4c-050402d7f671": Phase="Pending", Reason="", readiness=false. Elapsed: 4.195183892s May 12 10:53:33.816: INFO: Pod "pod-9d3b5f85-da6f-480a-8a4c-050402d7f671": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.429776804s STEP: Saw pod success May 12 10:53:33.817: INFO: Pod "pod-9d3b5f85-da6f-480a-8a4c-050402d7f671" satisfied condition "Succeeded or Failed" May 12 10:53:33.990: INFO: Trying to get logs from node latest-worker2 pod pod-9d3b5f85-da6f-480a-8a4c-050402d7f671 container test-container: STEP: delete the pod May 12 10:53:34.646: INFO: Waiting for pod pod-9d3b5f85-da6f-480a-8a4c-050402d7f671 to disappear May 12 10:53:34.702: INFO: Pod pod-9d3b5f85-da6f-480a-8a4c-050402d7f671 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:53:34.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3676" for this suite. 
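The (root,0666,tmpfs) variant writes a file as root with mode 0666 into a memory-backed emptyDir and checks its contents and permissions. The tmpfs part is simply the volume medium; a minimal sketch (pod name, image, and paths are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs-backed, so contents live in RAM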
• [SLOW TEST:7.543 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":103,"skipped":1599,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:53:34.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command May 12 10:53:36.141: INFO: Waiting up to 5m0s for pod "client-containers-f7611036-a2b0-440a-bde3-910058776868" in namespace "containers-9352" to be "Succeeded or Failed" May 12 10:53:36.271: INFO: Pod "client-containers-f7611036-a2b0-440a-bde3-910058776868": Phase="Pending", Reason="", readiness=false. Elapsed: 129.471594ms May 12 10:53:38.301: INFO: Pod "client-containers-f7611036-a2b0-440a-bde3-910058776868": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15993709s May 12 10:53:40.326: INFO: Pod "client-containers-f7611036-a2b0-440a-bde3-910058776868": Phase="Running", Reason="", readiness=true. Elapsed: 4.184496459s May 12 10:53:42.330: INFO: Pod "client-containers-f7611036-a2b0-440a-bde3-910058776868": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.189207119s STEP: Saw pod success May 12 10:53:42.331: INFO: Pod "client-containers-f7611036-a2b0-440a-bde3-910058776868" satisfied condition "Succeeded or Failed" May 12 10:53:42.334: INFO: Trying to get logs from node latest-worker2 pod client-containers-f7611036-a2b0-440a-bde3-910058776868 container test-container: STEP: delete the pod May 12 10:53:42.376: INFO: Waiting for pod client-containers-f7611036-a2b0-440a-bde3-910058776868 to disappear May 12 10:53:42.391: INFO: Pod client-containers-f7611036-a2b0-440a-bde3-910058776868 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:53:42.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9352" for this suite. 
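[annotation] What "override the image's default command (docker entrypoint)" means in API terms: the container's Command field replaces the image ENTRYPOINT (Args would replace CMD). A minimal sketch with an illustrative image and command:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "override-command-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "busybox",
                // Command replaces the image's ENTRYPOINT; the test then fetches the
                // container log ("Trying to get logs..." above) and asserts it sees
                // this output instead of the image default.
                Command: []string{"/bin/echo", "overridden entrypoint"},
            }},
        },
    }
    fmt.Println(pod.Name)
}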
• [SLOW TEST:7.605 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":288,"completed":104,"skipped":1646,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:53:42.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-8841 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 12 10:53:43.104: INFO: Found 0 stateful pods, waiting for 3 May 12 10:53:53.110: INFO: Found 2 stateful pods, waiting for 3 May 12 10:54:03.108: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 12 10:54:03.108: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 12 10:54:03.108: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 12 10:54:03.118: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8841 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 12 10:54:03.379: INFO: stderr: "I0512 10:54:03.249411 1848 log.go:172] (0xc00003af20) (0xc00042ba40) Create stream\nI0512 10:54:03.249479 1848 log.go:172] (0xc00003af20) (0xc00042ba40) Stream added, broadcasting: 1\nI0512 10:54:03.252096 1848 log.go:172] (0xc00003af20) Reply frame received for 1\nI0512 10:54:03.252151 1848 log.go:172] (0xc00003af20) (0xc00030be00) Create stream\nI0512 10:54:03.252176 1848 log.go:172] (0xc00003af20) (0xc00030be00) Stream added, broadcasting: 3\nI0512 10:54:03.253069 1848 log.go:172] (0xc00003af20) Reply frame received for 3\nI0512 10:54:03.253106 1848 log.go:172] (0xc00003af20) (0xc00023c460) Create stream\nI0512 10:54:03.253290 1848 log.go:172] (0xc00003af20) (0xc00023c460) Stream added, broadcasting: 5\nI0512 10:54:03.254091 1848 log.go:172] (0xc00003af20) Reply frame received for 5\nI0512 10:54:03.338722 1848 log.go:172] (0xc00003af20) Data frame received for 5\nI0512 10:54:03.338749 1848 log.go:172] 
(0xc00023c460) (5) Data frame handling\nI0512 10:54:03.338767 1848 log.go:172] (0xc00023c460) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0512 10:54:03.371876 1848 log.go:172] (0xc00003af20) Data frame received for 3\nI0512 10:54:03.371912 1848 log.go:172] (0xc00030be00) (3) Data frame handling\nI0512 10:54:03.371935 1848 log.go:172] (0xc00030be00) (3) Data frame sent\nI0512 10:54:03.372127 1848 log.go:172] (0xc00003af20) Data frame received for 3\nI0512 10:54:03.372159 1848 log.go:172] (0xc00030be00) (3) Data frame handling\nI0512 10:54:03.372266 1848 log.go:172] (0xc00003af20) Data frame received for 5\nI0512 10:54:03.372308 1848 log.go:172] (0xc00023c460) (5) Data frame handling\nI0512 10:54:03.374084 1848 log.go:172] (0xc00003af20) Data frame received for 1\nI0512 10:54:03.374127 1848 log.go:172] (0xc00042ba40) (1) Data frame handling\nI0512 10:54:03.374162 1848 log.go:172] (0xc00042ba40) (1) Data frame sent\nI0512 10:54:03.374187 1848 log.go:172] (0xc00003af20) (0xc00042ba40) Stream removed, broadcasting: 1\nI0512 10:54:03.374226 1848 log.go:172] (0xc00003af20) Go away received\nI0512 10:54:03.374803 1848 log.go:172] (0xc00003af20) (0xc00042ba40) Stream removed, broadcasting: 1\nI0512 10:54:03.374823 1848 log.go:172] (0xc00003af20) (0xc00030be00) Stream removed, broadcasting: 3\nI0512 10:54:03.374844 1848 log.go:172] (0xc00003af20) (0xc00023c460) Stream removed, broadcasting: 5\n" May 12 10:54:03.379: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 12 10:54:03.379: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 12 10:54:13.407: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 12 10:54:23.455: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8841 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 10:54:23.684: INFO: stderr: "I0512 10:54:23.584346 1866 log.go:172] (0xc0000e8370) (0xc0005872c0) Create stream\nI0512 10:54:23.584412 1866 log.go:172] (0xc0000e8370) (0xc0005872c0) Stream added, broadcasting: 1\nI0512 10:54:23.587145 1866 log.go:172] (0xc0000e8370) Reply frame received for 1\nI0512 10:54:23.587190 1866 log.go:172] (0xc0000e8370) (0xc000538e60) Create stream\nI0512 10:54:23.587205 1866 log.go:172] (0xc0000e8370) (0xc000538e60) Stream added, broadcasting: 3\nI0512 10:54:23.587846 1866 log.go:172] (0xc0000e8370) Reply frame received for 3\nI0512 10:54:23.587872 1866 log.go:172] (0xc0000e8370) (0xc00035b360) Create stream\nI0512 10:54:23.587881 1866 log.go:172] (0xc0000e8370) (0xc00035b360) Stream added, broadcasting: 5\nI0512 10:54:23.588555 1866 log.go:172] (0xc0000e8370) Reply frame received for 5\nI0512 10:54:23.679504 1866 log.go:172] (0xc0000e8370) Data frame received for 3\nI0512 10:54:23.679523 1866 log.go:172] (0xc000538e60) (3) Data frame handling\nI0512 10:54:23.679530 1866 log.go:172] (0xc000538e60) (3) Data frame sent\nI0512 10:54:23.679553 1866 log.go:172] (0xc0000e8370) Data frame received for 5\nI0512 10:54:23.679580 1866 log.go:172] (0xc00035b360) (5) Data frame handling\nI0512 10:54:23.679589 1866 log.go:172] (0xc00035b360) (5) Data frame sent\nI0512 10:54:23.679598 1866 log.go:172] 
(0xc0000e8370) Data frame received for 5\nI0512 10:54:23.679604 1866 log.go:172] (0xc00035b360) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0512 10:54:23.679621 1866 log.go:172] (0xc0000e8370) Data frame received for 3\nI0512 10:54:23.679647 1866 log.go:172] (0xc000538e60) (3) Data frame handling\nI0512 10:54:23.680813 1866 log.go:172] (0xc0000e8370) Data frame received for 1\nI0512 10:54:23.680828 1866 log.go:172] (0xc0005872c0) (1) Data frame handling\nI0512 10:54:23.680847 1866 log.go:172] (0xc0005872c0) (1) Data frame sent\nI0512 10:54:23.680862 1866 log.go:172] (0xc0000e8370) (0xc0005872c0) Stream removed, broadcasting: 1\nI0512 10:54:23.680926 1866 log.go:172] (0xc0000e8370) Go away received\nI0512 10:54:23.681241 1866 log.go:172] (0xc0000e8370) (0xc0005872c0) Stream removed, broadcasting: 1\nI0512 10:54:23.681263 1866 log.go:172] (0xc0000e8370) (0xc000538e60) Stream removed, broadcasting: 3\nI0512 10:54:23.681273 1866 log.go:172] (0xc0000e8370) (0xc00035b360) Stream removed, broadcasting: 5\n" May 12 10:54:23.684: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 12 10:54:23.684: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 12 10:54:33.810: INFO: Waiting for StatefulSet statefulset-8841/ss2 to complete update May 12 10:54:33.810: INFO: Waiting for Pod statefulset-8841/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 12 10:54:33.810: INFO: Waiting for Pod statefulset-8841/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 12 10:54:33.810: INFO: Waiting for Pod statefulset-8841/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 12 10:54:43.817: INFO: Waiting for StatefulSet statefulset-8841/ss2 to complete update May 12 10:54:43.817: INFO: Waiting for Pod statefulset-8841/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 12 10:54:43.817: INFO: Waiting for Pod statefulset-8841/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 12 10:54:54.326: INFO: Waiting for StatefulSet statefulset-8841/ss2 to complete update May 12 10:54:54.326: INFO: Waiting for Pod statefulset-8841/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 12 10:55:04.998: INFO: Waiting for StatefulSet statefulset-8841/ss2 to complete update STEP: Rolling back to a previous revision May 12 10:55:13.818: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8841 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 12 10:55:14.139: INFO: stderr: "I0512 10:55:13.965742 1885 log.go:172] (0xc0006208f0) (0xc0004fb2c0) Create stream\nI0512 10:55:13.965848 1885 log.go:172] (0xc0006208f0) (0xc0004fb2c0) Stream added, broadcasting: 1\nI0512 10:55:13.968867 1885 log.go:172] (0xc0006208f0) Reply frame received for 1\nI0512 10:55:13.968912 1885 log.go:172] (0xc0006208f0) (0xc000426e60) Create stream\nI0512 10:55:13.968926 1885 log.go:172] (0xc0006208f0) (0xc000426e60) Stream added, broadcasting: 3\nI0512 10:55:13.970198 1885 log.go:172] (0xc0006208f0) Reply frame received for 3\nI0512 10:55:13.970241 1885 log.go:172] (0xc0006208f0) (0xc000560640) Create stream\nI0512 10:55:13.970272 1885 log.go:172] (0xc0006208f0) (0xc000560640) Stream added, broadcasting: 5\nI0512 10:55:13.971301 1885 log.go:172] (0xc0006208f0) Reply frame 
received for 5\nI0512 10:55:14.053083 1885 log.go:172] (0xc0006208f0) Data frame received for 5\nI0512 10:55:14.053300 1885 log.go:172] (0xc000560640) (5) Data frame handling\nI0512 10:55:14.053347 1885 log.go:172] (0xc000560640) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0512 10:55:14.130002 1885 log.go:172] (0xc0006208f0) Data frame received for 3\nI0512 10:55:14.130168 1885 log.go:172] (0xc000426e60) (3) Data frame handling\nI0512 10:55:14.130310 1885 log.go:172] (0xc000426e60) (3) Data frame sent\nI0512 10:55:14.130453 1885 log.go:172] (0xc0006208f0) Data frame received for 3\nI0512 10:55:14.130478 1885 log.go:172] (0xc000426e60) (3) Data frame handling\nI0512 10:55:14.130507 1885 log.go:172] (0xc0006208f0) Data frame received for 5\nI0512 10:55:14.130558 1885 log.go:172] (0xc000560640) (5) Data frame handling\nI0512 10:55:14.133497 1885 log.go:172] (0xc0006208f0) Data frame received for 1\nI0512 10:55:14.133621 1885 log.go:172] (0xc0004fb2c0) (1) Data frame handling\nI0512 10:55:14.133661 1885 log.go:172] (0xc0004fb2c0) (1) Data frame sent\nI0512 10:55:14.133753 1885 log.go:172] (0xc0006208f0) (0xc0004fb2c0) Stream removed, broadcasting: 1\nI0512 10:55:14.133962 1885 log.go:172] (0xc0006208f0) Go away received\nI0512 10:55:14.134331 1885 log.go:172] (0xc0006208f0) (0xc0004fb2c0) Stream removed, broadcasting: 1\nI0512 10:55:14.134374 1885 log.go:172] (0xc0006208f0) (0xc000426e60) Stream removed, broadcasting: 3\nI0512 10:55:14.134393 1885 log.go:172] (0xc0006208f0) (0xc000560640) Stream removed, broadcasting: 5\n" May 12 10:55:14.139: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 12 10:55:14.139: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 12 10:55:24.198: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 12 10:55:34.230: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8841 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 10:55:34.938: INFO: stderr: "I0512 10:55:34.817091 1905 log.go:172] (0xc000ba8fd0) (0xc0004f8320) Create stream\nI0512 10:55:34.817337 1905 log.go:172] (0xc000ba8fd0) (0xc0004f8320) Stream added, broadcasting: 1\nI0512 10:55:34.826589 1905 log.go:172] (0xc000ba8fd0) Reply frame received for 1\nI0512 10:55:34.826659 1905 log.go:172] (0xc000ba8fd0) (0xc000560320) Create stream\nI0512 10:55:34.826685 1905 log.go:172] (0xc000ba8fd0) (0xc000560320) Stream added, broadcasting: 3\nI0512 10:55:34.828040 1905 log.go:172] (0xc000ba8fd0) Reply frame received for 3\nI0512 10:55:34.828092 1905 log.go:172] (0xc000ba8fd0) (0xc000292460) Create stream\nI0512 10:55:34.828132 1905 log.go:172] (0xc000ba8fd0) (0xc000292460) Stream added, broadcasting: 5\nI0512 10:55:34.829966 1905 log.go:172] (0xc000ba8fd0) Reply frame received for 5\nI0512 10:55:34.932748 1905 log.go:172] (0xc000ba8fd0) Data frame received for 3\nI0512 10:55:34.932789 1905 log.go:172] (0xc000560320) (3) Data frame handling\nI0512 10:55:34.932807 1905 log.go:172] (0xc000560320) (3) Data frame sent\nI0512 10:55:34.932822 1905 log.go:172] (0xc000ba8fd0) Data frame received for 3\nI0512 10:55:34.932836 1905 log.go:172] (0xc000560320) (3) Data frame handling\nI0512 10:55:34.932860 1905 log.go:172] (0xc000ba8fd0) Data frame received for 5\nI0512 10:55:34.932875 1905 log.go:172] (0xc000292460) (5) 
Data frame handling\nI0512 10:55:34.932891 1905 log.go:172] (0xc000292460) (5) Data frame sent\nI0512 10:55:34.932905 1905 log.go:172] (0xc000ba8fd0) Data frame received for 5\nI0512 10:55:34.932917 1905 log.go:172] (0xc000292460) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0512 10:55:34.934007 1905 log.go:172] (0xc000ba8fd0) Data frame received for 1\nI0512 10:55:34.934027 1905 log.go:172] (0xc0004f8320) (1) Data frame handling\nI0512 10:55:34.934041 1905 log.go:172] (0xc0004f8320) (1) Data frame sent\nI0512 10:55:34.934051 1905 log.go:172] (0xc000ba8fd0) (0xc0004f8320) Stream removed, broadcasting: 1\nI0512 10:55:34.934085 1905 log.go:172] (0xc000ba8fd0) Go away received\nI0512 10:55:34.934303 1905 log.go:172] (0xc000ba8fd0) (0xc0004f8320) Stream removed, broadcasting: 1\nI0512 10:55:34.934318 1905 log.go:172] (0xc000ba8fd0) (0xc000560320) Stream removed, broadcasting: 3\nI0512 10:55:34.934326 1905 log.go:172] (0xc000ba8fd0) (0xc000292460) Stream removed, broadcasting: 5\n" May 12 10:55:34.938: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 12 10:55:34.938: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 12 10:55:45.905: INFO: Waiting for StatefulSet statefulset-8841/ss2 to complete update May 12 10:55:45.905: INFO: Waiting for Pod statefulset-8841/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 12 10:55:45.905: INFO: Waiting for Pod statefulset-8841/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 12 10:55:55.982: INFO: Waiting for StatefulSet statefulset-8841/ss2 to complete update May 12 10:55:55.982: INFO: Waiting for Pod statefulset-8841/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 12 10:55:55.982: INFO: Waiting for Pod statefulset-8841/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 12 10:56:05.912: INFO: Waiting for StatefulSet statefulset-8841/ss2 to complete update May 12 10:56:05.912: INFO: Waiting for Pod statefulset-8841/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 12 10:56:15.913: INFO: Waiting for StatefulSet statefulset-8841/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 12 10:56:25.911: INFO: Deleting all statefulset in ns statefulset-8841 May 12 10:56:25.913: INFO: Scaling statefulset ss2 to 0 May 12 10:56:55.995: INFO: Waiting for statefulset status.replicas updated to 0 May 12 10:56:55.998: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:56:56.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8841" for this suite. 
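[annotation] A sketch of the update half of this flow with client-go, assuming a kubeconfig at the default path; the rollback seen in the log is just the same edit applied back to the previous image, after which the controller replaces pods from the highest ordinal down (the "reverse ordinal order" above). A production version would wrap the Get/Update in retry.RetryOnConflict.

package main

import (
    "context"
    "log"
    "os"
    "path/filepath"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
    cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        log.Fatal(err)
    }
    clientset, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        log.Fatal(err)
    }

    ctx := context.Background()
    ss, err := clientset.AppsV1().StatefulSets("statefulset-8841").Get(ctx, "ss2", metav1.GetOptions{})
    if err != nil {
        log.Fatal(err)
    }
    // Mutating the pod template creates a new controller revision (the
    // ss2-65c7964b94 / ss2-84f9d6bf57 hashes in the log); with the default
    // RollingUpdate strategy the pods are replaced in reverse ordinal order.
    ss.Spec.Template.Spec.Containers[0].Image = "docker.io/library/httpd:2.4.39-alpine"
    if _, err := clientset.AppsV1().StatefulSets("statefulset-8841").Update(ctx, ss, metav1.UpdateOptions{}); err != nil {
        log.Fatal(err)
    }
}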
• [SLOW TEST:194.055 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":288,"completed":105,"skipped":1695,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:56:56.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-kvhjs in namespace proxy-4187 I0512 10:56:57.595252 7 runners.go:190] Created replication controller with name: proxy-service-kvhjs, namespace: proxy-4187, replica count: 1 I0512 10:56:58.645680 7 runners.go:190] proxy-service-kvhjs Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 10:56:59.645930 7 runners.go:190] proxy-service-kvhjs Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 10:57:00.646159 7 runners.go:190] proxy-service-kvhjs Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 10:57:01.646408 7 runners.go:190] proxy-service-kvhjs Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 10:57:02.646639 7 runners.go:190] proxy-service-kvhjs Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 10:57:03.646835 7 runners.go:190] proxy-service-kvhjs Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 10:57:04.647055 7 runners.go:190] proxy-service-kvhjs Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 10:57:05.647256 7 runners.go:190] proxy-service-kvhjs Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 10:57:06.647454 7 runners.go:190] proxy-service-kvhjs Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 10:57:07.647673 7 runners.go:190] proxy-service-kvhjs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 
runningButNotReady I0512 10:57:08.647923 7 runners.go:190] proxy-service-kvhjs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0512 10:57:09.648097 7 runners.go:190] proxy-service-kvhjs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0512 10:57:10.648332 7 runners.go:190] proxy-service-kvhjs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0512 10:57:11.648574 7 runners.go:190] proxy-service-kvhjs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0512 10:57:12.648762 7 runners.go:190] proxy-service-kvhjs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0512 10:57:13.648929 7 runners.go:190] proxy-service-kvhjs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0512 10:57:14.649307 7 runners.go:190] proxy-service-kvhjs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0512 10:57:15.649537 7 runners.go:190] proxy-service-kvhjs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0512 10:57:16.649793 7 runners.go:190] proxy-service-kvhjs Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 12 10:57:16.653: INFO: setup took 19.251989323s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 12 10:57:16.661: INFO: (0) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq/proxy/: test (200; 7.651611ms) May 12 10:57:16.661: INFO: (0) /api/v1/namespaces/proxy-4187/services/proxy-service-kvhjs:portname2/proxy/: bar (200; 7.775532ms) May 12 10:57:16.662: INFO: (0) /api/v1/namespaces/proxy-4187/services/http:proxy-service-kvhjs:portname2/proxy/: bar (200; 8.196794ms) May 12 10:57:16.662: INFO: (0) /api/v1/namespaces/proxy-4187/services/http:proxy-service-kvhjs:portname1/proxy/: foo (200; 8.270692ms) May 12 10:57:16.662: INFO: (0) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:160/proxy/: foo (200; 8.391216ms) May 12 10:57:16.664: INFO: (0) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:1080/proxy/: ... (200; 10.681017ms) May 12 10:57:16.664: INFO: (0) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:1080/proxy/: test<... 
(200; 10.713342ms) May 12 10:57:16.664: INFO: (0) /api/v1/namespaces/proxy-4187/services/proxy-service-kvhjs:portname1/proxy/: foo (200; 10.739641ms) May 12 10:57:16.664: INFO: (0) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:162/proxy/: bar (200; 10.963381ms) May 12 10:57:16.664: INFO: (0) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:160/proxy/: foo (200; 10.914629ms) May 12 10:57:16.664: INFO: (0) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:162/proxy/: bar (200; 10.821095ms) May 12 10:57:16.669: INFO: (0) /api/v1/namespaces/proxy-4187/services/https:proxy-service-kvhjs:tlsportname1/proxy/: tls baz (200; 15.484724ms) May 12 10:57:16.669: INFO: (0) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:460/proxy/: tls baz (200; 15.477108ms) May 12 10:57:16.670: INFO: (0) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:462/proxy/: tls qux (200; 15.856069ms) May 12 10:57:16.670: INFO: (0) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:443/proxy/: ... (200; 5.34412ms) May 12 10:57:16.675: INFO: (1) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:160/proxy/: foo (200; 5.41422ms) May 12 10:57:16.675: INFO: (1) /api/v1/namespaces/proxy-4187/services/http:proxy-service-kvhjs:portname2/proxy/: bar (200; 5.418069ms) May 12 10:57:16.675: INFO: (1) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:1080/proxy/: test<... (200; 5.356691ms) May 12 10:57:16.675: INFO: (1) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq/proxy/: test (200; 5.492682ms) May 12 10:57:16.675: INFO: (1) /api/v1/namespaces/proxy-4187/services/proxy-service-kvhjs:portname1/proxy/: foo (200; 5.546198ms) May 12 10:57:16.675: INFO: (1) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:160/proxy/: foo (200; 5.498023ms) May 12 10:57:16.675: INFO: (1) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:162/proxy/: bar (200; 5.562624ms) May 12 10:57:16.676: INFO: (1) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:460/proxy/: tls baz (200; 5.692207ms) May 12 10:57:16.676: INFO: (1) /api/v1/namespaces/proxy-4187/services/https:proxy-service-kvhjs:tlsportname1/proxy/: tls baz (200; 5.857897ms) May 12 10:57:16.676: INFO: (1) /api/v1/namespaces/proxy-4187/services/proxy-service-kvhjs:portname2/proxy/: bar (200; 5.999537ms) May 12 10:57:16.680: INFO: (2) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:443/proxy/: test<... (200; 3.933899ms) May 12 10:57:16.680: INFO: (2) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:160/proxy/: foo (200; 4.197577ms) May 12 10:57:16.681: INFO: (2) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:462/proxy/: tls qux (200; 4.596989ms) May 12 10:57:16.681: INFO: (2) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq/proxy/: test (200; 4.703035ms) May 12 10:57:16.681: INFO: (2) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:1080/proxy/: ... 
(200; 4.6687ms) May 12 10:57:16.681: INFO: (2) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:460/proxy/: tls baz (200; 4.957386ms) May 12 10:57:16.682: INFO: (2) /api/v1/namespaces/proxy-4187/services/http:proxy-service-kvhjs:portname2/proxy/: bar (200; 5.765323ms) May 12 10:57:16.682: INFO: (2) /api/v1/namespaces/proxy-4187/services/proxy-service-kvhjs:portname1/proxy/: foo (200; 6.19976ms) May 12 10:57:16.682: INFO: (2) /api/v1/namespaces/proxy-4187/services/https:proxy-service-kvhjs:tlsportname1/proxy/: tls baz (200; 6.498842ms) May 12 10:57:16.683: INFO: (2) /api/v1/namespaces/proxy-4187/services/http:proxy-service-kvhjs:portname1/proxy/: foo (200; 6.669073ms) May 12 10:57:16.683: INFO: (2) /api/v1/namespaces/proxy-4187/services/proxy-service-kvhjs:portname2/proxy/: bar (200; 6.608895ms) May 12 10:57:16.683: INFO: (2) /api/v1/namespaces/proxy-4187/services/https:proxy-service-kvhjs:tlsportname2/proxy/: tls qux (200; 6.652215ms) May 12 10:57:16.685: INFO: (3) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:162/proxy/: bar (200; 2.192792ms) May 12 10:57:16.686: INFO: (3) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:162/proxy/: bar (200; 3.695583ms) May 12 10:57:16.686: INFO: (3) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:1080/proxy/: ... (200; 3.730167ms) May 12 10:57:16.686: INFO: (3) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq/proxy/: test (200; 3.715129ms) May 12 10:57:16.687: INFO: (3) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:160/proxy/: foo (200; 3.78501ms) May 12 10:57:16.687: INFO: (3) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:462/proxy/: tls qux (200; 3.836394ms) May 12 10:57:16.687: INFO: (3) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:160/proxy/: foo (200; 4.049185ms) May 12 10:57:16.687: INFO: (3) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:460/proxy/: tls baz (200; 4.198832ms) May 12 10:57:16.687: INFO: (3) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:443/proxy/: test<... (200; 4.704287ms) May 12 10:57:16.688: INFO: (3) /api/v1/namespaces/proxy-4187/services/http:proxy-service-kvhjs:portname1/proxy/: foo (200; 5.519736ms) May 12 10:57:16.688: INFO: (3) /api/v1/namespaces/proxy-4187/services/proxy-service-kvhjs:portname2/proxy/: bar (200; 5.614463ms) May 12 10:57:16.688: INFO: (3) /api/v1/namespaces/proxy-4187/services/proxy-service-kvhjs:portname1/proxy/: foo (200; 5.606168ms) May 12 10:57:16.688: INFO: (3) /api/v1/namespaces/proxy-4187/services/https:proxy-service-kvhjs:tlsportname1/proxy/: tls baz (200; 5.691356ms) May 12 10:57:16.688: INFO: (3) /api/v1/namespaces/proxy-4187/services/http:proxy-service-kvhjs:portname2/proxy/: bar (200; 5.643076ms) May 12 10:57:16.689: INFO: (3) /api/v1/namespaces/proxy-4187/services/https:proxy-service-kvhjs:tlsportname2/proxy/: tls qux (200; 5.816597ms) May 12 10:57:16.691: INFO: (4) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:1080/proxy/: ... (200; 2.558971ms) May 12 10:57:16.691: INFO: (4) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:162/proxy/: bar (200; 2.625602ms) May 12 10:57:16.694: INFO: (4) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:1080/proxy/: test<... 
(200; 4.683882ms) May 12 10:57:16.694: INFO: (4) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:462/proxy/: tls qux (200; 4.939019ms) May 12 10:57:16.694: INFO: (4) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:162/proxy/: bar (200; 4.935031ms) May 12 10:57:16.694: INFO: (4) /api/v1/namespaces/proxy-4187/services/http:proxy-service-kvhjs:portname1/proxy/: foo (200; 5.254254ms) May 12 10:57:16.694: INFO: (4) /api/v1/namespaces/proxy-4187/services/https:proxy-service-kvhjs:tlsportname2/proxy/: tls qux (200; 5.496734ms) May 12 10:57:16.694: INFO: (4) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq/proxy/: test (200; 5.542243ms) May 12 10:57:16.694: INFO: (4) /api/v1/namespaces/proxy-4187/services/proxy-service-kvhjs:portname2/proxy/: bar (200; 5.637421ms) May 12 10:57:16.694: INFO: (4) /api/v1/namespaces/proxy-4187/services/proxy-service-kvhjs:portname1/proxy/: foo (200; 5.786242ms) May 12 10:57:16.695: INFO: (4) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:160/proxy/: foo (200; 5.816234ms) May 12 10:57:16.695: INFO: (4) /api/v1/namespaces/proxy-4187/services/http:proxy-service-kvhjs:portname2/proxy/: bar (200; 5.827187ms) May 12 10:57:16.695: INFO: (4) /api/v1/namespaces/proxy-4187/services/https:proxy-service-kvhjs:tlsportname1/proxy/: tls baz (200; 5.910386ms) May 12 10:57:16.695: INFO: (4) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:160/proxy/: foo (200; 5.842463ms) May 12 10:57:16.695: INFO: (4) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:460/proxy/: tls baz (200; 5.885769ms) May 12 10:57:16.695: INFO: (4) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:443/proxy/: test<... (200; 4.822837ms) May 12 10:57:16.700: INFO: (5) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq/proxy/: test (200; 5.078704ms) May 12 10:57:16.701: INFO: (5) /api/v1/namespaces/proxy-4187/services/proxy-service-kvhjs:portname2/proxy/: bar (200; 5.80102ms) May 12 10:57:16.701: INFO: (5) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:160/proxy/: foo (200; 5.809961ms) May 12 10:57:16.701: INFO: (5) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:162/proxy/: bar (200; 5.799537ms) May 12 10:57:16.701: INFO: (5) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:160/proxy/: foo (200; 5.825344ms) May 12 10:57:16.701: INFO: (5) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:162/proxy/: bar (200; 5.903035ms) May 12 10:57:16.701: INFO: (5) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:1080/proxy/: ... 
(200; 6.015553ms) May 12 10:57:16.701: INFO: (5) /api/v1/namespaces/proxy-4187/services/http:proxy-service-kvhjs:portname1/proxy/: foo (200; 6.172221ms) May 12 10:57:16.701: INFO: (5) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:460/proxy/: tls baz (200; 6.296854ms) May 12 10:57:16.702: INFO: (5) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:462/proxy/: tls qux (200; 6.411771ms) May 12 10:57:16.703: INFO: (5) /api/v1/namespaces/proxy-4187/services/proxy-service-kvhjs:portname1/proxy/: foo (200; 7.54358ms) May 12 10:57:16.703: INFO: (5) /api/v1/namespaces/proxy-4187/services/http:proxy-service-kvhjs:portname2/proxy/: bar (200; 7.607854ms) May 12 10:57:16.703: INFO: (5) /api/v1/namespaces/proxy-4187/services/https:proxy-service-kvhjs:tlsportname2/proxy/: tls qux (200; 7.810618ms) May 12 10:57:16.703: INFO: (5) /api/v1/namespaces/proxy-4187/services/https:proxy-service-kvhjs:tlsportname1/proxy/: tls baz (200; 7.847954ms) May 12 10:57:16.705: INFO: (6) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:160/proxy/: foo (200; 2.326111ms) May 12 10:57:16.707: INFO: (6) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:460/proxy/: tls baz (200; 4.296039ms) May 12 10:57:16.707: INFO: (6) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:1080/proxy/: test<... (200; 4.31477ms) May 12 10:57:16.708: INFO: (6) /api/v1/namespaces/proxy-4187/services/https:proxy-service-kvhjs:tlsportname1/proxy/: tls baz (200; 4.934073ms) May 12 10:57:16.708: INFO: (6) /api/v1/namespaces/proxy-4187/services/proxy-service-kvhjs:portname1/proxy/: foo (200; 5.057779ms) May 12 10:57:16.708: INFO: (6) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:462/proxy/: tls qux (200; 5.196131ms) May 12 10:57:16.708: INFO: (6) /api/v1/namespaces/proxy-4187/services/https:proxy-service-kvhjs:tlsportname2/proxy/: tls qux (200; 5.217417ms) May 12 10:57:16.708: INFO: (6) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:1080/proxy/: ... (200; 5.238172ms) May 12 10:57:16.708: INFO: (6) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq/proxy/: test (200; 5.28544ms) May 12 10:57:16.708: INFO: (6) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:162/proxy/: bar (200; 5.374131ms) May 12 10:57:16.709: INFO: (6) /api/v1/namespaces/proxy-4187/services/http:proxy-service-kvhjs:portname2/proxy/: bar (200; 5.919771ms) May 12 10:57:16.709: INFO: (6) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:160/proxy/: foo (200; 5.891418ms) May 12 10:57:16.709: INFO: (6) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:162/proxy/: bar (200; 5.969228ms) May 12 10:57:16.709: INFO: (6) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:443/proxy/: test<... (200; 5.862411ms) May 12 10:57:16.715: INFO: (7) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:162/proxy/: bar (200; 5.857759ms) May 12 10:57:16.715: INFO: (7) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:160/proxy/: foo (200; 5.913146ms) May 12 10:57:16.715: INFO: (7) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq/proxy/: test (200; 5.985481ms) May 12 10:57:16.715: INFO: (7) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:460/proxy/: tls baz (200; 6.26581ms) May 12 10:57:16.716: INFO: (7) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:1080/proxy/: ... 
(200; 6.264531ms) May 12 10:57:16.716: INFO: (7) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:462/proxy/: tls qux (200; 6.32066ms) May 12 10:57:16.716: INFO: (7) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:443/proxy/: ... (200; 3.277478ms) May 12 10:57:16.719: INFO: (8) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:162/proxy/: bar (200; 3.159971ms) May 12 10:57:16.722: INFO: (8) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:160/proxy/: foo (200; 6.081699ms) May 12 10:57:16.722: INFO: (8) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:1080/proxy/: test<... (200; 6.258548ms) May 12 10:57:16.722: INFO: (8) /api/v1/namespaces/proxy-4187/services/http:proxy-service-kvhjs:portname2/proxy/: bar (200; 6.399206ms) May 12 10:57:16.722: INFO: (8) /api/v1/namespaces/proxy-4187/services/https:proxy-service-kvhjs:tlsportname2/proxy/: tls qux (200; 6.310276ms) May 12 10:57:16.722: INFO: (8) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:162/proxy/: bar (200; 6.389485ms) May 12 10:57:16.722: INFO: (8) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:462/proxy/: tls qux (200; 6.723136ms) May 12 10:57:16.722: INFO: (8) /api/v1/namespaces/proxy-4187/services/http:proxy-service-kvhjs:portname1/proxy/: foo (200; 6.806682ms) May 12 10:57:16.722: INFO: (8) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:443/proxy/: test (200; 6.74382ms) May 12 10:57:16.723: INFO: (8) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:160/proxy/: foo (200; 6.792559ms) May 12 10:57:16.723: INFO: (8) /api/v1/namespaces/proxy-4187/services/proxy-service-kvhjs:portname1/proxy/: foo (200; 6.651413ms) May 12 10:57:16.727: INFO: (9) /api/v1/namespaces/proxy-4187/services/proxy-service-kvhjs:portname2/proxy/: bar (200; 3.847436ms) May 12 10:57:16.727: INFO: (9) /api/v1/namespaces/proxy-4187/services/http:proxy-service-kvhjs:portname1/proxy/: foo (200; 3.921383ms) May 12 10:57:16.727: INFO: (9) /api/v1/namespaces/proxy-4187/services/http:proxy-service-kvhjs:portname2/proxy/: bar (200; 3.88596ms) May 12 10:57:16.728: INFO: (9) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:162/proxy/: bar (200; 5.317281ms) May 12 10:57:16.728: INFO: (9) /api/v1/namespaces/proxy-4187/services/https:proxy-service-kvhjs:tlsportname1/proxy/: tls baz (200; 5.505789ms) May 12 10:57:16.728: INFO: (9) /api/v1/namespaces/proxy-4187/services/proxy-service-kvhjs:portname1/proxy/: foo (200; 5.573576ms) May 12 10:57:16.728: INFO: (9) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:1080/proxy/: ... (200; 5.861478ms) May 12 10:57:16.729: INFO: (9) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:1080/proxy/: test<... 
(200; 5.622982ms) May 12 10:57:16.729: INFO: (9) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq/proxy/: test (200; 6.247375ms) May 12 10:57:16.729: INFO: (9) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:160/proxy/: foo (200; 5.716508ms) May 12 10:57:16.729: INFO: (9) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:462/proxy/: tls qux (200; 6.2816ms) May 12 10:57:16.729: INFO: (9) /api/v1/namespaces/proxy-4187/services/https:proxy-service-kvhjs:tlsportname2/proxy/: tls qux (200; 5.812801ms) May 12 10:57:16.729: INFO: (9) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:160/proxy/: foo (200; 6.0555ms) May 12 10:57:16.729: INFO: (9) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:162/proxy/: bar (200; 6.186662ms) May 12 10:57:16.729: INFO: (9) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:443/proxy/: test<... (200; 3.137618ms) May 12 10:57:16.733: INFO: (10) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:162/proxy/: bar (200; 3.223604ms) May 12 10:57:16.733: INFO: (10) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:1080/proxy/: ... (200; 3.620969ms) May 12 10:57:16.733: INFO: (10) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:462/proxy/: tls qux (200; 3.778266ms) May 12 10:57:16.733: INFO: (10) /api/v1/namespaces/proxy-4187/services/proxy-service-kvhjs:portname2/proxy/: bar (200; 3.799345ms) May 12 10:57:16.733: INFO: (10) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:160/proxy/: foo (200; 3.670684ms) May 12 10:57:16.733: INFO: (10) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq/proxy/: test (200; 3.65731ms) May 12 10:57:16.733: INFO: (10) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:460/proxy/: tls baz (200; 3.806552ms) May 12 10:57:16.733: INFO: (10) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:162/proxy/: bar (200; 3.805859ms) May 12 10:57:16.734: INFO: (10) /api/v1/namespaces/proxy-4187/services/http:proxy-service-kvhjs:portname1/proxy/: foo (200; 4.614347ms) May 12 10:57:16.734: INFO: (10) /api/v1/namespaces/proxy-4187/services/https:proxy-service-kvhjs:tlsportname2/proxy/: tls qux (200; 4.741197ms) May 12 10:57:16.734: INFO: (10) /api/v1/namespaces/proxy-4187/services/proxy-service-kvhjs:portname1/proxy/: foo (200; 4.835672ms) May 12 10:57:16.734: INFO: (10) /api/v1/namespaces/proxy-4187/services/http:proxy-service-kvhjs:portname2/proxy/: bar (200; 4.951078ms) May 12 10:57:16.734: INFO: (10) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:443/proxy/: test (200; 5.2875ms) May 12 10:57:16.740: INFO: (11) /api/v1/namespaces/proxy-4187/services/https:proxy-service-kvhjs:tlsportname2/proxy/: tls qux (200; 5.385089ms) May 12 10:57:16.740: INFO: (11) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:1080/proxy/: ... (200; 5.34821ms) May 12 10:57:16.740: INFO: (11) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:1080/proxy/: test<... 
(200; 5.365509ms) May 12 10:57:16.740: INFO: (11) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:160/proxy/: foo (200; 5.407165ms) May 12 10:57:16.740: INFO: (11) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:162/proxy/: bar (200; 5.757046ms) May 12 10:57:16.740: INFO: (11) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:443/proxy/: test (200; 8.304442ms) May 12 10:57:16.749: INFO: (12) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:160/proxy/: foo (200; 8.388899ms) May 12 10:57:16.749: INFO: (12) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:1080/proxy/: test<... (200; 8.388704ms) May 12 10:57:16.749: INFO: (12) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:1080/proxy/: ... (200; 8.378348ms) May 12 10:57:16.749: INFO: (12) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:460/proxy/: tls baz (200; 8.478657ms) May 12 10:57:16.749: INFO: (12) /api/v1/namespaces/proxy-4187/services/http:proxy-service-kvhjs:portname2/proxy/: bar (200; 8.511301ms) May 12 10:57:16.749: INFO: (12) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:162/proxy/: bar (200; 8.453032ms) May 12 10:57:16.749: INFO: (12) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:162/proxy/: bar (200; 8.45276ms) May 12 10:57:16.749: INFO: (12) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:462/proxy/: tls qux (200; 8.547011ms) May 12 10:57:16.874: INFO: (13) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:462/proxy/: tls qux (200; 123.55374ms) May 12 10:57:16.874: INFO: (13) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq/proxy/: test (200; 123.766697ms) May 12 10:57:16.874: INFO: (13) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:1080/proxy/: test<... (200; 124.027963ms) May 12 10:57:16.874: INFO: (13) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:460/proxy/: tls baz (200; 124.218928ms) May 12 10:57:16.875: INFO: (13) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:160/proxy/: foo (200; 124.852487ms) May 12 10:57:16.875: INFO: (13) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:443/proxy/: ... (200; 126.133518ms) May 12 10:57:16.878: INFO: (13) /api/v1/namespaces/proxy-4187/services/http:proxy-service-kvhjs:portname1/proxy/: foo (200; 127.499085ms) May 12 10:57:16.878: INFO: (13) /api/v1/namespaces/proxy-4187/services/http:proxy-service-kvhjs:portname2/proxy/: bar (200; 127.832307ms) May 12 10:57:16.879: INFO: (13) /api/v1/namespaces/proxy-4187/services/https:proxy-service-kvhjs:tlsportname1/proxy/: tls baz (200; 127.950883ms) May 12 10:57:16.885: INFO: (14) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:162/proxy/: bar (200; 6.592461ms) May 12 10:57:16.886: INFO: (14) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:1080/proxy/: test<... 
(200; 6.828262ms) May 12 10:57:16.886: INFO: (14) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:462/proxy/: tls qux (200; 6.869543ms) May 12 10:57:16.886: INFO: (14) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq/proxy/: test (200; 7.430465ms) May 12 10:57:16.886: INFO: (14) /api/v1/namespaces/proxy-4187/services/http:proxy-service-kvhjs:portname1/proxy/: foo (200; 7.587172ms) May 12 10:57:16.886: INFO: (14) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:162/proxy/: bar (200; 7.589087ms) May 12 10:57:16.886: INFO: (14) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:1080/proxy/: ... (200; 7.695081ms) May 12 10:57:16.886: INFO: (14) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:460/proxy/: tls baz (200; 7.564746ms) May 12 10:57:16.886: INFO: (14) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:160/proxy/: foo (200; 7.525522ms) May 12 10:57:16.886: INFO: (14) /api/v1/namespaces/proxy-4187/services/https:proxy-service-kvhjs:tlsportname2/proxy/: tls qux (200; 7.812038ms) May 12 10:57:16.886: INFO: (14) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:443/proxy/: test (200; 4.728515ms) May 12 10:57:16.894: INFO: (15) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:162/proxy/: bar (200; 4.794193ms) May 12 10:57:16.894: INFO: (15) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:162/proxy/: bar (200; 4.751382ms) May 12 10:57:16.894: INFO: (15) /api/v1/namespaces/proxy-4187/services/http:proxy-service-kvhjs:portname2/proxy/: bar (200; 5.092303ms) May 12 10:57:16.895: INFO: (15) /api/v1/namespaces/proxy-4187/services/https:proxy-service-kvhjs:tlsportname2/proxy/: tls qux (200; 5.317776ms) May 12 10:57:16.895: INFO: (15) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:1080/proxy/: test<... (200; 5.607039ms) May 12 10:57:16.895: INFO: (15) /api/v1/namespaces/proxy-4187/services/proxy-service-kvhjs:portname1/proxy/: foo (200; 5.906433ms) May 12 10:57:16.896: INFO: (15) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:1080/proxy/: ... (200; 6.671326ms) May 12 10:57:16.897: INFO: (15) /api/v1/namespaces/proxy-4187/services/https:proxy-service-kvhjs:tlsportname1/proxy/: tls baz (200; 7.720682ms) May 12 10:57:16.897: INFO: (15) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:462/proxy/: tls qux (200; 7.923652ms) May 12 10:57:16.897: INFO: (15) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:160/proxy/: foo (200; 7.923086ms) May 12 10:57:16.897: INFO: (15) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:460/proxy/: tls baz (200; 7.965997ms) May 12 10:57:16.897: INFO: (15) /api/v1/namespaces/proxy-4187/services/http:proxy-service-kvhjs:portname1/proxy/: foo (200; 7.98558ms) May 12 10:57:16.897: INFO: (15) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:160/proxy/: foo (200; 7.936683ms) May 12 10:57:16.897: INFO: (15) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:443/proxy/: test (200; 6.076264ms) May 12 10:57:16.904: INFO: (16) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:443/proxy/: ... 
(200; 6.663794ms) May 12 10:57:16.904: INFO: (16) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:162/proxy/: bar (200; 6.597582ms) May 12 10:57:16.905: INFO: (16) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:460/proxy/: tls baz (200; 7.321901ms) May 12 10:57:16.905: INFO: (16) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:162/proxy/: bar (200; 7.299439ms) May 12 10:57:16.905: INFO: (16) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:160/proxy/: foo (200; 7.26166ms) May 12 10:57:16.906: INFO: (16) /api/v1/namespaces/proxy-4187/services/proxy-service-kvhjs:portname2/proxy/: bar (200; 7.876685ms) May 12 10:57:16.906: INFO: (16) /api/v1/namespaces/proxy-4187/services/http:proxy-service-kvhjs:portname2/proxy/: bar (200; 7.96028ms) May 12 10:57:16.906: INFO: (16) /api/v1/namespaces/proxy-4187/services/http:proxy-service-kvhjs:portname1/proxy/: foo (200; 8.01809ms) May 12 10:57:16.906: INFO: (16) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:160/proxy/: foo (200; 8.226737ms) May 12 10:57:16.906: INFO: (16) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:1080/proxy/: test<... (200; 8.091053ms) May 12 10:57:16.908: INFO: (17) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:460/proxy/: tls baz (200; 2.362751ms) May 12 10:57:16.909: INFO: (17) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:160/proxy/: foo (200; 3.252869ms) May 12 10:57:16.909: INFO: (17) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:1080/proxy/: test<... (200; 3.456663ms) May 12 10:57:16.909: INFO: (17) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:462/proxy/: tls qux (200; 3.463468ms) May 12 10:57:16.911: INFO: (17) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:162/proxy/: bar (200; 4.622372ms) May 12 10:57:16.911: INFO: (17) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:1080/proxy/: ... 
(200; 4.702751ms) May 12 10:57:16.911: INFO: (17) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:443/proxy/: test (200; 5.154014ms) May 12 10:57:16.911: INFO: (17) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:162/proxy/: bar (200; 5.355208ms) May 12 10:57:16.914: INFO: (17) /api/v1/namespaces/proxy-4187/services/https:proxy-service-kvhjs:tlsportname2/proxy/: tls qux (200; 7.951967ms) May 12 10:57:16.914: INFO: (17) /api/v1/namespaces/proxy-4187/services/http:proxy-service-kvhjs:portname1/proxy/: foo (200; 8.090533ms) May 12 10:57:16.914: INFO: (17) /api/v1/namespaces/proxy-4187/services/http:proxy-service-kvhjs:portname2/proxy/: bar (200; 8.206485ms) May 12 10:57:16.914: INFO: (17) /api/v1/namespaces/proxy-4187/services/proxy-service-kvhjs:portname1/proxy/: foo (200; 8.096503ms) May 12 10:57:16.914: INFO: (17) /api/v1/namespaces/proxy-4187/services/https:proxy-service-kvhjs:tlsportname1/proxy/: tls baz (200; 8.145609ms) May 12 10:57:16.914: INFO: (17) /api/v1/namespaces/proxy-4187/services/proxy-service-kvhjs:portname2/proxy/: bar (200; 8.168066ms) May 12 10:57:16.916: INFO: (18) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:460/proxy/: tls baz (200; 2.012654ms) May 12 10:57:16.919: INFO: (18) /api/v1/namespaces/proxy-4187/services/proxy-service-kvhjs:portname1/proxy/: foo (200; 4.220421ms) May 12 10:57:16.919: INFO: (18) /api/v1/namespaces/proxy-4187/services/http:proxy-service-kvhjs:portname1/proxy/: foo (200; 4.634499ms) May 12 10:57:16.919: INFO: (18) /api/v1/namespaces/proxy-4187/services/proxy-service-kvhjs:portname2/proxy/: bar (200; 4.048541ms) May 12 10:57:16.919: INFO: (18) /api/v1/namespaces/proxy-4187/services/https:proxy-service-kvhjs:tlsportname1/proxy/: tls baz (200; 4.421393ms) May 12 10:57:16.919: INFO: (18) /api/v1/namespaces/proxy-4187/services/https:proxy-service-kvhjs:tlsportname2/proxy/: tls qux (200; 4.372714ms) May 12 10:57:16.919: INFO: (18) /api/v1/namespaces/proxy-4187/services/http:proxy-service-kvhjs:portname2/proxy/: bar (200; 4.844149ms) May 12 10:57:16.920: INFO: (18) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:160/proxy/: foo (200; 4.430879ms) May 12 10:57:16.920: INFO: (18) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:462/proxy/: tls qux (200; 4.620408ms) May 12 10:57:16.920: INFO: (18) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:160/proxy/: foo (200; 4.18677ms) May 12 10:57:16.920: INFO: (18) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:443/proxy/: test<... (200; 5.468649ms) May 12 10:57:16.920: INFO: (18) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:1080/proxy/: ... 
(200; 5.084921ms) May 12 10:57:16.920: INFO: (18) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq/proxy/: test (200; 5.545705ms) May 12 10:57:16.920: INFO: (18) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:162/proxy/: bar (200; 4.752412ms) May 12 10:57:16.923: INFO: (19) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:160/proxy/: foo (200; 2.853061ms) May 12 10:57:16.923: INFO: (19) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:462/proxy/: tls qux (200; 2.82495ms) May 12 10:57:16.923: INFO: (19) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:162/proxy/: bar (200; 2.972937ms) May 12 10:57:16.923: INFO: (19) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:460/proxy/: tls baz (200; 3.243965ms) May 12 10:57:16.923: INFO: (19) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:1080/proxy/: ... (200; 3.322833ms) May 12 10:57:16.926: INFO: (19) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:1080/proxy/: test<... (200; 5.477353ms) May 12 10:57:16.926: INFO: (19) /api/v1/namespaces/proxy-4187/pods/proxy-service-kvhjs-czrtq:160/proxy/: foo (200; 5.474626ms) May 12 10:57:16.926: INFO: (19) /api/v1/namespaces/proxy-4187/pods/http:proxy-service-kvhjs-czrtq:162/proxy/: bar (200; 5.45334ms) May 12 10:57:16.927: INFO: (19) /api/v1/namespaces/proxy-4187/services/proxy-service-kvhjs:portname2/proxy/: bar (200; 6.846478ms) May 12 10:57:16.927: INFO: (19) /api/v1/namespaces/proxy-4187/pods/https:proxy-service-kvhjs-czrtq:443/proxy/: test (200; 7.180647ms) May 12 10:57:16.927: INFO: (19) /api/v1/namespaces/proxy-4187/services/https:proxy-service-kvhjs:tlsportname1/proxy/: tls baz (200; 7.28111ms) May 12 10:57:16.927: INFO: (19) /api/v1/namespaces/proxy-4187/services/proxy-service-kvhjs:portname1/proxy/: foo (200; 7.270165ms) STEP: deleting ReplicationController proxy-service-kvhjs in namespace proxy-4187, will wait for the garbage collector to delete the pods May 12 10:57:16.987: INFO: Deleting ReplicationController proxy-service-kvhjs took: 7.367166ms May 12 10:57:17.387: INFO: Terminating ReplicationController proxy-service-kvhjs pods took: 400.217685ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:57:19.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4187" for this suite. 
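[annotation] Each attempt in the run above is an apiserver proxy GET of the form /api/v1/namespaces/NS/{pods|services}/NAME[:port]/proxy/PATH. With client-go the same request can be issued via ProxyGet; a sketch using the service name from this run, assuming a kubeconfig at the default path:

package main

import (
    "context"
    "fmt"
    "log"
    "os"
    "path/filepath"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
    cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        log.Fatal(err)
    }
    clientset, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        log.Fatal(err)
    }

    // Equivalent of GET /api/v1/namespaces/proxy-4187/services/proxy-service-kvhjs:portname1/proxy/
    body, err := clientset.CoreV1().
        Services("proxy-4187").
        ProxyGet("", "proxy-service-kvhjs", "portname1", "/", nil).
        DoRaw(context.Background())
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%s\n", body) // the echo server answers "foo" on this port in the log above
}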
• [SLOW TEST:23.635 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":288,"completed":106,"skipped":1722,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:57:20.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:57:29.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4883" for this suite. 
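The kubelet-test run above asserts that a container declared read-only cannot write to its root filesystem. A rough stand-alone equivalent, assuming the conformance fixture boils down to securityContext.readOnlyRootFilesystem (pod name illustrative):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-readonly     # illustrative
    spec:
      restartPolicy: Never
      containers:
      - name: busybox
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "sleep 3600"]
        securityContext:
          readOnlyRootFilesystem: true
    EOF
    # Expect "Read-only file system" from any write to the root fs:
    kubectl exec busybox-readonly -- sh -c 'echo hi > /hi.txt'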
• [SLOW TEST:9.495 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:188 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":107,"skipped":1737,"failed":0} SSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:57:29.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:58:30.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4550" for this suite. 
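The container-probe-4550 run above rests on the distinction between probe types: a failing readiness probe keeps the pod out of Ready (and out of Service endpoints) but, unlike a failing liveness probe, never restarts the container, so restartCount stays 0 for the whole observation window. A sketch with an always-failing exec probe (names illustrative):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: never-ready          # illustrative
    spec:
      containers:
      - name: busybox
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "sleep 600"]
        readinessProbe:
          exec:
            command: ["/bin/false"]
          periodSeconds: 5
    EOF
    # READY stays 0/1 and RESTARTS stays 0:
    kubectl get pod never-ready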
• [SLOW TEST:61.313 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":288,"completed":108,"skipped":1740,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:58:30.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium May 12 10:58:31.602: INFO: Waiting up to 5m0s for pod "pod-3abbb6e7-d18e-479f-aa55-53097fae72c8" in namespace "emptydir-311" to be "Succeeded or Failed" May 12 10:58:31.969: INFO: Pod "pod-3abbb6e7-d18e-479f-aa55-53097fae72c8": Phase="Pending", Reason="", readiness=false. Elapsed: 366.733525ms May 12 10:58:34.060: INFO: Pod "pod-3abbb6e7-d18e-479f-aa55-53097fae72c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.457179145s May 12 10:58:36.171: INFO: Pod "pod-3abbb6e7-d18e-479f-aa55-53097fae72c8": Phase="Running", Reason="", readiness=true. Elapsed: 4.568510013s May 12 10:58:38.174: INFO: Pod "pod-3abbb6e7-d18e-479f-aa55-53097fae72c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.571884965s STEP: Saw pod success May 12 10:58:38.174: INFO: Pod "pod-3abbb6e7-d18e-479f-aa55-53097fae72c8" satisfied condition "Succeeded or Failed" May 12 10:58:38.177: INFO: Trying to get logs from node latest-worker2 pod pod-3abbb6e7-d18e-479f-aa55-53097fae72c8 container test-container: STEP: delete the pod May 12 10:58:38.598: INFO: Waiting for pod pod-3abbb6e7-d18e-479f-aa55-53097fae72c8 to disappear May 12 10:58:38.922: INFO: Pod pod-3abbb6e7-d18e-479f-aa55-53097fae72c8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:58:38.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-311" for this suite. 
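The emptydir-311 test writes a file as root with mode 0644 on a default-medium emptyDir and checks both content and permissions; the e2e image does that internally. A rough shell equivalent (names illustrative):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-mode-demo   # illustrative
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "echo data > /mnt/f && chmod 0644 /mnt/f && ls -l /mnt/f"]
        volumeMounts:
        - name: scratch
          mountPath: /mnt
      volumes:
      - name: scratch
        emptyDir: {}             # default medium = node-local disk
    EOF
    kubectl logs emptydir-mode-demo    # expect -rw-r--r-- ... /mnt/f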
• [SLOW TEST:8.192 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":109,"skipped":1748,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:58:39.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-a6636e73-a124-488e-a02a-187f525d09c4 STEP: Creating a pod to test consume secrets May 12 10:58:41.012: INFO: Waiting up to 5m0s for pod "pod-secrets-a49f0e4e-1508-4f3e-89fc-eebc7f04a1cf" in namespace "secrets-2428" to be "Succeeded or Failed" May 12 10:58:41.305: INFO: Pod "pod-secrets-a49f0e4e-1508-4f3e-89fc-eebc7f04a1cf": Phase="Pending", Reason="", readiness=false. Elapsed: 293.317423ms May 12 10:58:43.431: INFO: Pod "pod-secrets-a49f0e4e-1508-4f3e-89fc-eebc7f04a1cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.419317964s May 12 10:58:45.549: INFO: Pod "pod-secrets-a49f0e4e-1508-4f3e-89fc-eebc7f04a1cf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.537111362s May 12 10:58:47.552: INFO: Pod "pod-secrets-a49f0e4e-1508-4f3e-89fc-eebc7f04a1cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.540231326s STEP: Saw pod success May 12 10:58:47.552: INFO: Pod "pod-secrets-a49f0e4e-1508-4f3e-89fc-eebc7f04a1cf" satisfied condition "Succeeded or Failed" May 12 10:58:47.555: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-a49f0e4e-1508-4f3e-89fc-eebc7f04a1cf container secret-volume-test: STEP: delete the pod May 12 10:58:47.778: INFO: Waiting for pod pod-secrets-a49f0e4e-1508-4f3e-89fc-eebc7f04a1cf to disappear May 12 10:58:48.095: INFO: Pod pod-secrets-a49f0e4e-1508-4f3e-89fc-eebc7f04a1cf no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:58:48.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2428" for this suite. STEP: Destroying namespace "secret-namespace-1413" for this suite. 
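The secrets-2428 run (note the two namespaces destroyed at the end) demonstrates namespace scoping: a secret volume always resolves the secret name within the pod's own namespace, so an identically named secret elsewhere is irrelevant. Sketch (all names and values illustrative):

    kubectl create namespace ns-a
    kubectl create namespace ns-b
    kubectl create secret generic demo-secret --from-literal=data=from-ns-a -n ns-a
    kubectl create secret generic demo-secret --from-literal=data=from-ns-b -n ns-b
    cat <<'EOF' | kubectl apply -n ns-a -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-demo
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: docker.io/library/busybox:1.29
        command: ["cat", "/etc/demo/data"]
        volumeMounts:
        - name: s
          mountPath: /etc/demo
      volumes:
      - name: s
        secret:
          secretName: demo-secret
    EOF
    kubectl logs -n ns-a secret-demo   # prints "from-ns-a", never "from-ns-b"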
• [SLOW TEST:9.071 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":288,"completed":110,"skipped":1757,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:58:48.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:58:48.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7622" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":288,"completed":111,"skipped":1758,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:58:48.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:59:00.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4442" for this suite. • [SLOW TEST:12.204 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":288,"completed":112,"skipped":1770,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:59:00.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 12 10:59:02.407: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-d667a2e5-a2b4-45fc-817b-e7c48aa64caf" in namespace "security-context-test-705" to be "Succeeded or Failed" May 12 10:59:02.748: INFO: Pod "busybox-privileged-false-d667a2e5-a2b4-45fc-817b-e7c48aa64caf": Phase="Pending", Reason="", readiness=false. Elapsed: 341.651268ms May 12 10:59:04.752: INFO: Pod "busybox-privileged-false-d667a2e5-a2b4-45fc-817b-e7c48aa64caf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.345095008s May 12 10:59:06.756: INFO: Pod "busybox-privileged-false-d667a2e5-a2b4-45fc-817b-e7c48aa64caf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.34942746s May 12 10:59:08.835: INFO: Pod "busybox-privileged-false-d667a2e5-a2b4-45fc-817b-e7c48aa64caf": Phase="Running", Reason="", readiness=true. Elapsed: 6.428609167s May 12 10:59:10.839: INFO: Pod "busybox-privileged-false-d667a2e5-a2b4-45fc-817b-e7c48aa64caf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.432781837s May 12 10:59:10.839: INFO: Pod "busybox-privileged-false-d667a2e5-a2b4-45fc-817b-e7c48aa64caf" satisfied condition "Succeeded or Failed" May 12 10:59:10.898: INFO: Got logs for pod "busybox-privileged-false-d667a2e5-a2b4-45fc-817b-e7c48aa64caf": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:59:10.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-705" for this suite. • [SLOW TEST:10.207 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a pod with privileged /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":113,"skipped":1791,"failed":0} SSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:59:10.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container May 12 10:59:19.927: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1778 pod-service-account-1eef8625-7a74-429c-8e2e-f4c5cd3ef163 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 12 10:59:27.892: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1778 pod-service-account-1eef8625-7a74-429c-8e2e-f4c5cd3ef163 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 12 10:59:29.437: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1778 pod-service-account-1eef8625-7a74-429c-8e2e-f4c5cd3ef163 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 10:59:30.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1778" for this suite. 
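The svcaccounts-1778 exec commands above read the three files the kubelet mounts for every pod's service account. The same check works against any running pod (pod name illustrative):

    # Token, CA bundle, and namespace live at a fixed, well-known path:
    kubectl exec my-pod -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
    kubectl exec my-pod -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    kubectl exec my-pod -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace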
• [SLOW TEST:19.229 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":288,"completed":114,"skipped":1800,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 10:59:30.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-2ff94c6b-e868-4835-95d7-1c5322817e70 in namespace container-probe-5203 May 12 10:59:38.996: INFO: Started pod busybox-2ff94c6b-e868-4835-95d7-1c5322817e70 in namespace container-probe-5203 STEP: checking the pod's current state and verifying that restartCount is present May 12 10:59:39.089: INFO: Initial restart count of pod busybox-2ff94c6b-e868-4835-95d7-1c5322817e70 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:03:39.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5203" for this suite. 
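In container-probe-5203 the container creates /tmp/health at startup and the liveness probe execs `cat /tmp/health`; since the file persists, the probe keeps passing and restartCount stays 0 across the four-minute watch. A sketch of the same shape (names illustrative):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-ok          # illustrative
    spec:
      containers:
      - name: busybox
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "touch /tmp/health; sleep 600"]
        livenessProbe:
          exec:
            command: ["cat", "/tmp/health"]
          initialDelaySeconds: 5
          periodSeconds: 5
    EOF
    # RESTARTS should remain 0 for as long as /tmp/health exists:
    kubectl get pod liveness-ok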
• [SLOW TEST:249.903 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":115,"skipped":1818,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:03:40.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-8dc34485-d1d6-43a2-b576-186d6e0e4e6a STEP: Creating a pod to test consume configMaps May 12 11:03:40.755: INFO: Waiting up to 5m0s for pod "pod-configmaps-c37ef0d0-6aa7-4586-8f52-cdfac06cbe97" in namespace "configmap-4939" to be "Succeeded or Failed" May 12 11:03:40.917: INFO: Pod "pod-configmaps-c37ef0d0-6aa7-4586-8f52-cdfac06cbe97": Phase="Pending", Reason="", readiness=false. Elapsed: 162.062611ms May 12 11:03:43.248: INFO: Pod "pod-configmaps-c37ef0d0-6aa7-4586-8f52-cdfac06cbe97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.492524165s May 12 11:03:45.792: INFO: Pod "pod-configmaps-c37ef0d0-6aa7-4586-8f52-cdfac06cbe97": Phase="Pending", Reason="", readiness=false. Elapsed: 5.036553505s May 12 11:03:47.972: INFO: Pod "pod-configmaps-c37ef0d0-6aa7-4586-8f52-cdfac06cbe97": Phase="Running", Reason="", readiness=true. Elapsed: 7.216768184s May 12 11:03:49.975: INFO: Pod "pod-configmaps-c37ef0d0-6aa7-4586-8f52-cdfac06cbe97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.21993798s STEP: Saw pod success May 12 11:03:49.975: INFO: Pod "pod-configmaps-c37ef0d0-6aa7-4586-8f52-cdfac06cbe97" satisfied condition "Succeeded or Failed" May 12 11:03:49.979: INFO: Trying to get logs from node latest-worker pod pod-configmaps-c37ef0d0-6aa7-4586-8f52-cdfac06cbe97 container configmap-volume-test: STEP: delete the pod May 12 11:03:50.051: INFO: Waiting for pod pod-configmaps-c37ef0d0-6aa7-4586-8f52-cdfac06cbe97 to disappear May 12 11:03:50.067: INFO: Pod pod-configmaps-c37ef0d0-6aa7-4586-8f52-cdfac06cbe97 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:03:50.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4939" for this suite. 
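"With mappings" in the configmap-4939 test means the volume uses items: to remap a ConfigMap key onto a chosen file path instead of the default key-named file. Sketch (names and values illustrative):

    kubectl create configmap demo-cm --from-literal=data-1=value-1
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: cm-mapping-demo
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: docker.io/library/busybox:1.29
        command: ["cat", "/etc/cm/path/to/data-2"]
        volumeMounts:
        - name: cm
          mountPath: /etc/cm
      volumes:
      - name: cm
        configMap:
          name: demo-cm
          items:
          - key: data-1
            path: path/to/data-2   # key data-1 surfaces at this mapped path
    EOF
    kubectl logs cm-mapping-demo     # prints "value-1"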
• [SLOW TEST:10.055 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":116,"skipped":1824,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:03:50.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0512 11:04:00.668106 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 12 11:04:00.668: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:04:00.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7518" for this suite. 
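The gc-7518 run creates a ReplicationController, deletes it without orphaning, and waits for the garbage collector to remove the pods through their metadata.ownerReferences. With kubectl the equivalent is simply a cascading delete (RC name illustrative):

    # Deleting the RC cascades by default; the GC then removes pods whose
    # ownerReferences point at the deleted controller.
    kubectl delete rc my-rc
    # Orphaning instead would keep the pods running
    # (DeleteOptions propagationPolicy=Orphan on the API call).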
• [SLOW TEST:10.577 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":288,"completed":117,"skipped":1825,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:04:00.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1559 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 12 11:04:00.975: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-6751' May 12 11:04:01.079: INFO: stderr: "" May 12 11:04:01.079: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 12 11:04:06.130: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-6751 -o json' May 12 11:04:06.282: INFO: stderr: "" May 12 11:04:06.282: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-12T11:04:01Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-12T11:04:01Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": 
{}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.1.241\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-12T11:04:05Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-6751\",\n \"resourceVersion\": \"3787955\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-6751/pods/e2e-test-httpd-pod\",\n \"uid\": \"f12f1dc5-3033-4ec3-8590-ba3b06a2773a\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-btv7j\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-btv7j\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-btv7j\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-12T11:04:01Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-12T11:04:05Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-12T11:04:05Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-12T11:04:01Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://ff2c5135ffae7340df6bedce7a4859daf6c6ee718d80d07059896e302c7f0cda\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-12T11:04:04Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.13\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.241\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.241\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-12T11:04:01Z\"\n }\n}\n" STEP: replace the image in the pod May 12 
11:04:06.283: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-6751' May 12 11:04:06.612: INFO: stderr: "" May 12 11:04:06.612: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1564 May 12 11:04:06.690: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6751' May 12 11:04:23.566: INFO: stderr: "" May 12 11:04:23.566: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:04:23.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6751" for this suite. • [SLOW TEST:22.902 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1555 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":288,"completed":118,"skipped":1844,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:04:23.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-233 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 12 11:04:24.854: INFO: Found 0 stateful pods, waiting for 3 May 12 11:04:34.982: INFO: Found 2 stateful pods, waiting for 3 May 12 11:04:44.890: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 12 11:04:44.890: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 12 11:04:44.890: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 12 11:04:54.858: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 12 11:04:54.858: 
INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 12 11:04:54.858: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 12 11:04:54.882: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 12 11:05:05.044: INFO: Updating stateful set ss2 May 12 11:05:05.512: INFO: Waiting for Pod statefulset-233/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 12 11:05:26.902: INFO: Found 2 stateful pods, waiting for 3 May 12 11:05:36.907: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 12 11:05:36.907: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 12 11:05:36.907: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 12 11:05:46.906: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 12 11:05:46.906: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 12 11:05:46.906: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 12 11:05:46.926: INFO: Updating stateful set ss2 May 12 11:05:47.003: INFO: Waiting for Pod statefulset-233/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 12 11:05:57.283: INFO: Updating stateful set ss2 May 12 11:05:57.727: INFO: Waiting for StatefulSet statefulset-233/ss2 to complete update May 12 11:05:57.727: INFO: Waiting for Pod statefulset-233/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 12 11:06:07.736: INFO: Waiting for StatefulSet statefulset-233/ss2 to complete update May 12 11:06:07.736: INFO: Waiting for Pod statefulset-233/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 12 11:06:17.742: INFO: Waiting for StatefulSet statefulset-233/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 12 11:06:27.734: INFO: Deleting all statefulset in ns statefulset-233 May 12 11:06:27.736: INFO: Scaling statefulset ss2 to 0 May 12 11:06:57.780: INFO: Waiting for statefulset status.replicas updated to 0 May 12 11:06:57.782: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:06:58.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-233" for this suite. 
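The canary and phased behavior in statefulset-233 is driven by the RollingUpdate partition: pods with an ordinal greater than or equal to the partition move to the new revision, lower ordinals stay on the old one. A sketch of the same sequence (the container name "webserver" is a guess at the fixture; substitute your own):

    # Canary: with 3 replicas and partition=2, only ss2-2 takes the new template.
    kubectl patch statefulset ss2 -p \
      '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
    kubectl set image statefulset/ss2 webserver=docker.io/library/httpd:2.4.39-alpine
    # Phased roll-out: lower the partition stepwise until it reaches 0.
    kubectl patch statefulset ss2 -p \
      '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'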
• [SLOW TEST:154.430 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":288,"completed":119,"skipped":1865,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:06:58.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 12 11:06:59.623: INFO: Waiting up to 5m0s for pod "pod-2572143a-8c58-443b-89fc-6c27085c9b29" in namespace "emptydir-9196" to be "Succeeded or Failed" May 12 11:06:59.812: INFO: Pod "pod-2572143a-8c58-443b-89fc-6c27085c9b29": Phase="Pending", Reason="", readiness=false. Elapsed: 188.609204ms May 12 11:07:02.105: INFO: Pod "pod-2572143a-8c58-443b-89fc-6c27085c9b29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.48232056s May 12 11:07:04.207: INFO: Pod "pod-2572143a-8c58-443b-89fc-6c27085c9b29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.58372893s May 12 11:07:06.296: INFO: Pod "pod-2572143a-8c58-443b-89fc-6c27085c9b29": Phase="Pending", Reason="", readiness=false. Elapsed: 6.67328155s May 12 11:07:08.333: INFO: Pod "pod-2572143a-8c58-443b-89fc-6c27085c9b29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.70948971s STEP: Saw pod success May 12 11:07:08.333: INFO: Pod "pod-2572143a-8c58-443b-89fc-6c27085c9b29" satisfied condition "Succeeded or Failed" May 12 11:07:08.495: INFO: Trying to get logs from node latest-worker2 pod pod-2572143a-8c58-443b-89fc-6c27085c9b29 container test-container: STEP: delete the pod May 12 11:07:08.686: INFO: Waiting for pod pod-2572143a-8c58-443b-89fc-6c27085c9b29 to disappear May 12 11:07:08.716: INFO: Pod pod-2572143a-8c58-443b-89fc-6c27085c9b29 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:07:08.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9196" for this suite. 
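The tmpfs variant in emptydir-9196 sets the emptyDir medium to Memory and runs as a non-root user, then checks a 0777 file on the mount. Sketch (pod name and UID illustrative):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-tmpfs-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001          # any non-root UID
      containers:
      - name: test-container
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "mount | grep /mnt && touch /mnt/f && chmod 0777 /mnt/f && ls -l /mnt/f"]
        volumeMounts:
        - name: scratch
          mountPath: /mnt
      volumes:
      - name: scratch
        emptyDir:
          medium: Memory         # backs the volume with tmpfs
    EOF
    kubectl logs emptydir-tmpfs-demo   # expect a tmpfs mount line and -rwxrwxrwx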
• [SLOW TEST:10.834 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":120,"skipped":1905,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:07:08.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 12 11:07:09.666: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 12 11:07:10.046: INFO: Waiting for terminating namespaces to be deleted... May 12 11:07:10.207: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 12 11:07:10.240: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 12 11:07:10.240: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 12 11:07:10.240: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 12 11:07:10.240: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 12 11:07:10.240: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 12 11:07:10.240: INFO: Container kindnet-cni ready: true, restart count 0 May 12 11:07:10.240: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 12 11:07:10.240: INFO: Container kube-proxy ready: true, restart count 0 May 12 11:07:10.240: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 12 11:07:10.420: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 12 11:07:10.420: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 12 11:07:10.420: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) May 12 11:07:10.420: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 12 11:07:10.420: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 12 11:07:10.420: INFO: Container kindnet-cni ready: true, restart count 0 May 12 11:07:10.420: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 12 
11:07:10.420: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-830f9792-841b-4190-972b-e7151b0ef9d1 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-830f9792-841b-4190-972b-e7151b0ef9d1 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-830f9792-841b-4190-972b-e7151b0ef9d1 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:07:48.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2643" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:39.963 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":288,"completed":121,"skipped":1925,"failed":0} SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:07:48.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 12 11:07:48.876: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 12 11:07:48.982: INFO: Waiting for terminating namespaces to be deleted... 
May 12 11:07:49.088: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 12 11:07:49.096: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 12 11:07:49.096: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 12 11:07:49.096: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 12 11:07:49.096: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 12 11:07:49.096: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 12 11:07:49.096: INFO: Container kindnet-cni ready: true, restart count 0 May 12 11:07:49.096: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 12 11:07:49.096: INFO: Container kube-proxy ready: true, restart count 0 May 12 11:07:49.096: INFO: pod1 from sched-pred-2643 started at 2020-05-12 11:07:20 +0000 UTC (1 container statuses recorded) May 12 11:07:49.096: INFO: Container pod1 ready: true, restart count 0 May 12 11:07:49.096: INFO: pod2 from sched-pred-2643 started at 2020-05-12 11:07:26 +0000 UTC (1 container statuses recorded) May 12 11:07:49.096: INFO: Container pod2 ready: true, restart count 0 May 12 11:07:49.096: INFO: pod3 from sched-pred-2643 started at 2020-05-12 11:07:33 +0000 UTC (1 container statuses recorded) May 12 11:07:49.096: INFO: Container pod3 ready: true, restart count 0 May 12 11:07:49.096: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 12 11:07:49.101: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 12 11:07:49.101: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 12 11:07:49.101: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) May 12 11:07:49.101: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 12 11:07:49.101: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 12 11:07:49.101: INFO: Container kindnet-cni ready: true, restart count 0 May 12 11:07:49.101: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 12 11:07:49.102: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-45521f3d-f3a4-42f3-8044-15a9a9f2c501 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-45521f3d-f3a4-42f3-8044-15a9a9f2c501 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-45521f3d-f3a4-42f3-8044-15a9a9f2c501 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:13:02.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1371" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:313.954 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":288,"completed":122,"skipped":1931,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:13:02.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with configMap that has name projected-configmap-test-upd-de153d6d-410b-4dac-b6f4-5aeb781f006e STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-de153d6d-410b-4dac-b6f4-5aeb781f006e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:14:16.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7269" for this suite. 
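The projected-7269 test shows that keys delivered through a projected volume track the ConfigMap: after the object is updated, the kubelet refreshes the mounted files on its sync interval, which is why the test spends time "waiting to observe update in volume" (subPath mounts are the exception and never refresh). Sketch (names and values illustrative):

    kubectl create configmap demo-cm --from-literal=key=value-1
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-demo
    spec:
      containers:
      - name: c
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "while true; do cat /etc/projected/key; sleep 5; done"]
        volumeMounts:
        - name: proj
          mountPath: /etc/projected
      volumes:
      - name: proj
        projected:
          sources:
          - configMap:
              name: demo-cm
    EOF
    # Update the ConfigMap; the mounted file follows after the kubelet resync.
    kubectl create configmap demo-cm --from-literal=key=value-2 \
      --dry-run=client -o yaml | kubectl apply -f -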
• [SLOW TEST:73.693 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":123,"skipped":1954,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:14:16.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:14:33.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9951" for this suite. • [SLOW TEST:17.071 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":288,"completed":124,"skipped":2041,"failed":0} SSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:14:33.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange May 12 11:14:34.152: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values May 12 11:14:34.206: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 12 11:14:34.207: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange May 12 11:14:34.627: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 12 11:14:34.627: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange May 12 11:14:34.685: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] May 12 11:14:34.685: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted May 12 11:14:43.612: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:14:44.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-2832" for this suite. 
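------------------------------
The LimitRange spec above verifies that admission injects default requests and limits into containers that declare none, and merges defaults into containers that declare only some. A sketch of the same behavior follows, using the quantities verified in the run above (requests cpu 100m, memory 200Mi, ephemeral-storage 200Gi; limits cpu 500m, memory 500Mi, ephemeral-storage 500Gi); the object and pod names are illustrative assumptions.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: limits-demo
spec:
  limits:
  - type: Container
    defaultRequest:            # injected where a container omits requests
      cpu: 100m
      memory: 200Mi
      ephemeral-storage: 200Gi
    default:                   # injected where a container omits limits
      cpu: 500m
      memory: 500Mi
      ephemeral-storage: 500Gi
EOF

# A pod created with no resources block picks the defaults up at admission time:
kubectl run limits-probe --image=nginx --restart=Never
kubectl get pod limits-probe -o jsonpath='{.spec.containers[0].resources}'

------------------------------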
• [SLOW TEST:11.521 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":288,"completed":125,"skipped":2047,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:14:45.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 12 11:14:49.947: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:14:50.501: INFO: Number of nodes with available pods: 0 May 12 11:14:50.501: INFO: Node latest-worker is running more than one daemon pod May 12 11:14:52.335: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:14:52.590: INFO: Number of nodes with available pods: 0 May 12 11:14:52.590: INFO: Node latest-worker is running more than one daemon pod May 12 11:14:54.140: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:14:54.505: INFO: Number of nodes with available pods: 0 May 12 11:14:54.505: INFO: Node latest-worker is running more than one daemon pod May 12 11:14:55.655: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:14:56.044: INFO: Number of nodes with available pods: 0 May 12 11:14:56.044: INFO: Node latest-worker is running more than one daemon pod May 12 11:14:56.958: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:14:57.726: INFO: Number of nodes with available pods: 0 May 12 11:14:57.726: INFO: Node latest-worker is running more than one daemon pod May 12 11:14:58.908: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:14:59.554: INFO: Number of nodes with available pods: 2 May 12 
11:14:59.554: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 12 11:15:01.094: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:15:01.106: INFO: Number of nodes with available pods: 1 May 12 11:15:01.106: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:15:02.387: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:15:02.393: INFO: Number of nodes with available pods: 1 May 12 11:15:02.393: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:15:03.111: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:15:03.114: INFO: Number of nodes with available pods: 1 May 12 11:15:03.114: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:15:04.110: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:15:04.113: INFO: Number of nodes with available pods: 1 May 12 11:15:04.113: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:15:05.111: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:15:05.115: INFO: Number of nodes with available pods: 1 May 12 11:15:05.115: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:15:06.111: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:15:06.116: INFO: Number of nodes with available pods: 1 May 12 11:15:06.116: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:15:07.415: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:15:07.418: INFO: Number of nodes with available pods: 1 May 12 11:15:07.418: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:15:08.247: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:15:08.250: INFO: Number of nodes with available pods: 1 May 12 11:15:08.250: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:15:09.110: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:15:09.113: INFO: Number of nodes with available pods: 1 May 12 11:15:09.113: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:15:10.111: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:15:10.115: INFO: Number of nodes with available pods: 1 May 12 11:15:10.115: INFO: Node latest-worker2 is running 
more than one daemon pod May 12 11:15:11.111: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:15:11.114: INFO: Number of nodes with available pods: 1 May 12 11:15:11.114: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:15:12.350: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:15:12.577: INFO: Number of nodes with available pods: 1 May 12 11:15:12.577: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:15:13.112: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:15:13.116: INFO: Number of nodes with available pods: 1 May 12 11:15:13.116: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:15:14.110: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:15:14.113: INFO: Number of nodes with available pods: 1 May 12 11:15:14.113: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:15:15.111: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:15:15.114: INFO: Number of nodes with available pods: 1 May 12 11:15:15.114: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:15:16.109: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:15:16.111: INFO: Number of nodes with available pods: 1 May 12 11:15:16.112: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:15:17.111: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:15:17.115: INFO: Number of nodes with available pods: 1 May 12 11:15:17.115: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:15:18.141: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:15:18.144: INFO: Number of nodes with available pods: 1 May 12 11:15:18.144: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:15:19.196: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:15:19.208: INFO: Number of nodes with available pods: 1 May 12 11:15:19.208: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:15:20.110: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:15:20.112: INFO: Number of nodes with available pods: 1 May 12 11:15:20.112: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:15:21.110: INFO: DaemonSet pods can't tolerate node latest-control-plane with 
taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:15:21.114: INFO: Number of nodes with available pods: 2 May 12 11:15:21.114: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6482, will wait for the garbage collector to delete the pods May 12 11:15:21.322: INFO: Deleting DaemonSet.extensions daemon-set took: 152.311531ms May 12 11:15:21.822: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.214485ms May 12 11:15:35.366: INFO: Number of nodes with available pods: 0 May 12 11:15:35.366: INFO: Number of running nodes: 0, number of available pods: 0 May 12 11:15:35.368: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6482/daemonsets","resourceVersion":"3790457"},"items":null} May 12 11:15:35.370: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6482/pods","resourceVersion":"3790457"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:15:35.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6482" for this suite. • [SLOW TEST:50.334 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":288,"completed":126,"skipped":2058,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:15:35.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller May 12 11:15:35.563: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5561' May 12 11:15:41.157: INFO: stderr: "" May 12 11:15:41.157: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
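------------------------------
Stepping back to the Daemon set spec that completed above (the Update Demo polling continues below): it creates a plain DaemonSet, waits until every schedulable node runs one ready daemon pod, deletes one pod, and waits for the controller to revive it. The control-plane node is skipped throughout because the pod template carries no toleration for its node-role.kubernetes.io/master NoSchedule taint. A hedged sketch of that loop; the DaemonSet name and image are illustrative assumptions.

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set-demo
spec:
  selector:
    matchLabels:
      app: daemon-set-demo
  template:
    metadata:
      labels:
        app: daemon-set-demo
    spec:
      containers:
      - name: app
        image: nginx
EOF

# One pod per schedulable node; the tainted control-plane node gets none:
kubectl get pods -l app=daemon-set-demo -o wide

# Delete one daemon pod and watch the controller revive it:
kubectl delete "$(kubectl get pods -l app=daemon-set-demo -o name | head -n 1)"
kubectl get pods -l app=daemon-set-demo -w

------------------------------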
May 12 11:15:41.158: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5561' May 12 11:15:41.371: INFO: stderr: "" May 12 11:15:41.371: INFO: stdout: "update-demo-nautilus-dqpct update-demo-nautilus-g7h7p " May 12 11:15:41.371: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dqpct -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5561' May 12 11:15:41.479: INFO: stderr: "" May 12 11:15:41.479: INFO: stdout: "" May 12 11:15:41.479: INFO: update-demo-nautilus-dqpct is created but not running May 12 11:15:46.479: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5561' May 12 11:15:46.687: INFO: stderr: "" May 12 11:15:46.687: INFO: stdout: "update-demo-nautilus-dqpct update-demo-nautilus-g7h7p " May 12 11:15:46.687: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dqpct -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5561' May 12 11:15:46.880: INFO: stderr: "" May 12 11:15:46.880: INFO: stdout: "" May 12 11:15:46.880: INFO: update-demo-nautilus-dqpct is created but not running May 12 11:15:51.880: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5561' May 12 11:15:52.294: INFO: stderr: "" May 12 11:15:52.294: INFO: stdout: "update-demo-nautilus-dqpct update-demo-nautilus-g7h7p " May 12 11:15:52.294: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dqpct -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5561' May 12 11:15:52.686: INFO: stderr: "" May 12 11:15:52.686: INFO: stdout: "true" May 12 11:15:52.686: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dqpct -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5561' May 12 11:15:52.912: INFO: stderr: "" May 12 11:15:52.912: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 11:15:52.912: INFO: validating pod update-demo-nautilus-dqpct May 12 11:15:52.917: INFO: got data: { "image": "nautilus.jpg" } May 12 11:15:52.917: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 12 11:15:52.917: INFO: update-demo-nautilus-dqpct is verified up and running May 12 11:15:52.917: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g7h7p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5561' May 12 11:15:53.007: INFO: stderr: "" May 12 11:15:53.007: INFO: stdout: "true" May 12 11:15:53.007: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g7h7p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5561' May 12 11:15:53.125: INFO: stderr: "" May 12 11:15:53.125: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 11:15:53.125: INFO: validating pod update-demo-nautilus-g7h7p May 12 11:15:53.130: INFO: got data: { "image": "nautilus.jpg" } May 12 11:15:53.130: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 11:15:53.130: INFO: update-demo-nautilus-g7h7p is verified up and running STEP: using delete to clean up resources May 12 11:15:53.130: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5561' May 12 11:15:53.660: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 11:15:53.660: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 12 11:15:53.660: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5561' May 12 11:15:54.987: INFO: stderr: "No resources found in kubectl-5561 namespace.\n" May 12 11:15:54.987: INFO: stdout: "" May 12 11:15:54.987: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5561 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 12 11:15:55.192: INFO: stderr: "" May 12 11:15:55.192: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:15:55.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5561" for this suite. 
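------------------------------
The Update Demo spec above drives plain kubectl: create a ReplicationController from stdin, poll pod names and container state with go-templates until both replicas run and serve the nautilus image, then force-delete and confirm nothing is left behind. A sketch of that cycle follows; namespace and --server/--kubeconfig flags are omitted, and the RC manifest is an assumption reconstructed from the image and labels visible in the log, not the test's actual manifest.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80
EOF

# Poll pod names the way the harness does:
kubectl get pods -l name=update-demo -o template \
  --template='{{range .items}}{{.metadata.name}} {{end}}'

# Tear down and verify nothing is left behind:
kubectl delete rc update-demo-nautilus --grace-period=0 --force
kubectl get rc,svc,pods -l name=update-demo --no-headers

------------------------------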
• [SLOW TEST:20.640 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":288,"completed":127,"skipped":2084,"failed":0} SS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:15:56.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-321 May 12 11:16:01.606: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-321 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 12 11:16:01.825: INFO: stderr: "I0512 11:16:01.739995 2330 log.go:172] (0xc0009e0000) (0xc000133180) Create stream\nI0512 11:16:01.740049 2330 log.go:172] (0xc0009e0000) (0xc000133180) Stream added, broadcasting: 1\nI0512 11:16:01.742616 2330 log.go:172] (0xc0009e0000) Reply frame received for 1\nI0512 11:16:01.742658 2330 log.go:172] (0xc0009e0000) (0xc000910820) Create stream\nI0512 11:16:01.742669 2330 log.go:172] (0xc0009e0000) (0xc000910820) Stream added, broadcasting: 3\nI0512 11:16:01.743485 2330 log.go:172] (0xc0009e0000) Reply frame received for 3\nI0512 11:16:01.743523 2330 log.go:172] (0xc0009e0000) (0xc000133680) Create stream\nI0512 11:16:01.743552 2330 log.go:172] (0xc0009e0000) (0xc000133680) Stream added, broadcasting: 5\nI0512 11:16:01.744396 2330 log.go:172] (0xc0009e0000) Reply frame received for 5\nI0512 11:16:01.816029 2330 log.go:172] (0xc0009e0000) Data frame received for 5\nI0512 11:16:01.816062 2330 log.go:172] (0xc000133680) (5) Data frame handling\nI0512 11:16:01.816087 2330 log.go:172] (0xc000133680) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0512 11:16:01.818556 2330 log.go:172] (0xc0009e0000) Data frame received for 3\nI0512 11:16:01.818590 2330 log.go:172] (0xc000910820) (3) Data frame handling\nI0512 11:16:01.818630 2330 log.go:172] (0xc000910820) (3) Data frame sent\nI0512 11:16:01.819260 2330 log.go:172] (0xc0009e0000) Data frame received for 5\nI0512 11:16:01.819452 2330 log.go:172] (0xc000133680) (5) Data frame handling\nI0512 11:16:01.819490 2330 log.go:172] (0xc0009e0000) Data frame received for 3\nI0512 11:16:01.819512 2330 log.go:172] (0xc000910820) (3) Data 
frame handling\nI0512 11:16:01.821277 2330 log.go:172] (0xc0009e0000) Data frame received for 1\nI0512 11:16:01.821394 2330 log.go:172] (0xc000133180) (1) Data frame handling\nI0512 11:16:01.821441 2330 log.go:172] (0xc000133180) (1) Data frame sent\nI0512 11:16:01.821470 2330 log.go:172] (0xc0009e0000) (0xc000133180) Stream removed, broadcasting: 1\nI0512 11:16:01.821493 2330 log.go:172] (0xc0009e0000) Go away received\nI0512 11:16:01.821919 2330 log.go:172] (0xc0009e0000) (0xc000133180) Stream removed, broadcasting: 1\nI0512 11:16:01.821941 2330 log.go:172] (0xc0009e0000) (0xc000910820) Stream removed, broadcasting: 3\nI0512 11:16:01.821952 2330 log.go:172] (0xc0009e0000) (0xc000133680) Stream removed, broadcasting: 5\n" May 12 11:16:01.825: INFO: stdout: "iptables" May 12 11:16:01.825: INFO: proxyMode: iptables May 12 11:16:01.829: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 12 11:16:01.852: INFO: Pod kube-proxy-mode-detector still exists May 12 11:16:03.852: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 12 11:16:03.864: INFO: Pod kube-proxy-mode-detector still exists May 12 11:16:05.856: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 12 11:16:05.859: INFO: Pod kube-proxy-mode-detector still exists May 12 11:16:07.852: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 12 11:16:07.856: INFO: Pod kube-proxy-mode-detector still exists May 12 11:16:09.852: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 12 11:16:09.856: INFO: Pod kube-proxy-mode-detector still exists May 12 11:16:11.852: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 12 11:16:11.856: INFO: Pod kube-proxy-mode-detector still exists May 12 11:16:13.852: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 12 11:16:13.868: INFO: Pod kube-proxy-mode-detector still exists May 12 11:16:15.852: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 12 11:16:16.007: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-321 STEP: creating replication controller affinity-clusterip-timeout in namespace services-321 I0512 11:16:16.407702 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-321, replica count: 3 I0512 11:16:19.458147 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 11:16:22.458374 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 12 11:16:22.464: INFO: Creating new exec pod May 12 11:16:31.697: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-321 execpod-affinity9pjc4 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' May 12 11:16:31.917: INFO: stderr: "I0512 11:16:31.832680 2350 log.go:172] (0xc00003a370) (0xc00057a320) Create stream\nI0512 11:16:31.832749 2350 log.go:172] (0xc00003a370) (0xc00057a320) Stream added, broadcasting: 1\nI0512 11:16:31.847831 2350 log.go:172] (0xc00003a370) Reply frame received for 1\nI0512 11:16:31.847887 2350 log.go:172] (0xc00003a370) (0xc0003623c0) Create stream\nI0512 11:16:31.847900 2350 log.go:172] (0xc00003a370) (0xc0003623c0) Stream added, broadcasting: 3\nI0512 11:16:31.848658 2350 log.go:172] (0xc00003a370) 
Reply frame received for 3\nI0512 11:16:31.848687 2350 log.go:172] (0xc00003a370) (0xc00057af00) Create stream\nI0512 11:16:31.848699 2350 log.go:172] (0xc00003a370) (0xc00057af00) Stream added, broadcasting: 5\nI0512 11:16:31.849513 2350 log.go:172] (0xc00003a370) Reply frame received for 5\nI0512 11:16:31.911508 2350 log.go:172] (0xc00003a370) Data frame received for 5\nI0512 11:16:31.911532 2350 log.go:172] (0xc00057af00) (5) Data frame handling\nI0512 11:16:31.911550 2350 log.go:172] (0xc00057af00) (5) Data frame sent\nI0512 11:16:31.911561 2350 log.go:172] (0xc00003a370) Data frame received for 5\nI0512 11:16:31.911565 2350 log.go:172] (0xc00057af00) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0512 11:16:31.911657 2350 log.go:172] (0xc00003a370) Data frame received for 3\nI0512 11:16:31.911669 2350 log.go:172] (0xc0003623c0) (3) Data frame handling\nI0512 11:16:31.913671 2350 log.go:172] (0xc00003a370) Data frame received for 1\nI0512 11:16:31.913694 2350 log.go:172] (0xc00057a320) (1) Data frame handling\nI0512 11:16:31.913709 2350 log.go:172] (0xc00057a320) (1) Data frame sent\nI0512 11:16:31.913725 2350 log.go:172] (0xc00003a370) (0xc00057a320) Stream removed, broadcasting: 1\nI0512 11:16:31.913748 2350 log.go:172] (0xc00003a370) Go away received\nI0512 11:16:31.914063 2350 log.go:172] (0xc00003a370) (0xc00057a320) Stream removed, broadcasting: 1\nI0512 11:16:31.914078 2350 log.go:172] (0xc00003a370) (0xc0003623c0) Stream removed, broadcasting: 3\nI0512 11:16:31.914084 2350 log.go:172] (0xc00003a370) (0xc00057af00) Stream removed, broadcasting: 5\n" May 12 11:16:31.917: INFO: stdout: "" May 12 11:16:31.918: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-321 execpod-affinity9pjc4 -- /bin/sh -x -c nc -zv -t -w 2 10.99.148.218 80' May 12 11:16:32.134: INFO: stderr: "I0512 11:16:32.075692 2372 log.go:172] (0xc000b18bb0) (0xc0002448c0) Create stream\nI0512 11:16:32.075740 2372 log.go:172] (0xc000b18bb0) (0xc0002448c0) Stream added, broadcasting: 1\nI0512 11:16:32.077379 2372 log.go:172] (0xc000b18bb0) Reply frame received for 1\nI0512 11:16:32.077413 2372 log.go:172] (0xc000b18bb0) (0xc0004ca0a0) Create stream\nI0512 11:16:32.077425 2372 log.go:172] (0xc000b18bb0) (0xc0004ca0a0) Stream added, broadcasting: 3\nI0512 11:16:32.078057 2372 log.go:172] (0xc000b18bb0) Reply frame received for 3\nI0512 11:16:32.078081 2372 log.go:172] (0xc000b18bb0) (0xc000244f00) Create stream\nI0512 11:16:32.078089 2372 log.go:172] (0xc000b18bb0) (0xc000244f00) Stream added, broadcasting: 5\nI0512 11:16:32.078740 2372 log.go:172] (0xc000b18bb0) Reply frame received for 5\nI0512 11:16:32.127757 2372 log.go:172] (0xc000b18bb0) Data frame received for 3\nI0512 11:16:32.127785 2372 log.go:172] (0xc0004ca0a0) (3) Data frame handling\nI0512 11:16:32.127819 2372 log.go:172] (0xc000b18bb0) Data frame received for 5\nI0512 11:16:32.127854 2372 log.go:172] (0xc000244f00) (5) Data frame handling\nI0512 11:16:32.127920 2372 log.go:172] (0xc000244f00) (5) Data frame sent\nI0512 11:16:32.127941 2372 log.go:172] (0xc000b18bb0) Data frame received for 5\nI0512 11:16:32.127956 2372 log.go:172] (0xc000244f00) (5) Data frame handling\n+ nc -zv -t -w 2 10.99.148.218 80\nConnection to 10.99.148.218 80 port [tcp/http] succeeded!\nI0512 11:16:32.129630 2372 log.go:172] (0xc000b18bb0) Data frame received for 1\nI0512 11:16:32.129645 2372 
log.go:172] (0xc0002448c0) (1) Data frame handling\nI0512 11:16:32.129656 2372 log.go:172] (0xc0002448c0) (1) Data frame sent\nI0512 11:16:32.130020 2372 log.go:172] (0xc000b18bb0) (0xc0002448c0) Stream removed, broadcasting: 1\nI0512 11:16:32.130086 2372 log.go:172] (0xc000b18bb0) Go away received\nI0512 11:16:32.130647 2372 log.go:172] (0xc000b18bb0) (0xc0002448c0) Stream removed, broadcasting: 1\nI0512 11:16:32.130661 2372 log.go:172] (0xc000b18bb0) (0xc0004ca0a0) Stream removed, broadcasting: 3\nI0512 11:16:32.130667 2372 log.go:172] (0xc000b18bb0) (0xc000244f00) Stream removed, broadcasting: 5\n" May 12 11:16:32.135: INFO: stdout: "" May 12 11:16:32.135: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-321 execpod-affinity9pjc4 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.99.148.218:80/ ; done' May 12 11:16:32.442: INFO: stderr: "I0512 11:16:32.273704 2388 log.go:172] (0xc000ba9130) (0xc000b403c0) Create stream\nI0512 11:16:32.273774 2388 log.go:172] (0xc000ba9130) (0xc000b403c0) Stream added, broadcasting: 1\nI0512 11:16:32.278037 2388 log.go:172] (0xc000ba9130) Reply frame received for 1\nI0512 11:16:32.278077 2388 log.go:172] (0xc000ba9130) (0xc00044adc0) Create stream\nI0512 11:16:32.278086 2388 log.go:172] (0xc000ba9130) (0xc00044adc0) Stream added, broadcasting: 3\nI0512 11:16:32.278796 2388 log.go:172] (0xc000ba9130) Reply frame received for 3\nI0512 11:16:32.278824 2388 log.go:172] (0xc000ba9130) (0xc0006766e0) Create stream\nI0512 11:16:32.278833 2388 log.go:172] (0xc000ba9130) (0xc0006766e0) Stream added, broadcasting: 5\nI0512 11:16:32.279603 2388 log.go:172] (0xc000ba9130) Reply frame received for 5\nI0512 11:16:32.353003 2388 log.go:172] (0xc000ba9130) Data frame received for 5\nI0512 11:16:32.353028 2388 log.go:172] (0xc0006766e0) (5) Data frame handling\nI0512 11:16:32.353039 2388 log.go:172] (0xc0006766e0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.148.218:80/\nI0512 11:16:32.353072 2388 log.go:172] (0xc000ba9130) Data frame received for 3\nI0512 11:16:32.353105 2388 log.go:172] (0xc00044adc0) (3) Data frame handling\nI0512 11:16:32.353258 2388 log.go:172] (0xc00044adc0) (3) Data frame sent\nI0512 11:16:32.358352 2388 log.go:172] (0xc000ba9130) Data frame received for 3\nI0512 11:16:32.358365 2388 log.go:172] (0xc00044adc0) (3) Data frame handling\nI0512 11:16:32.358380 2388 log.go:172] (0xc00044adc0) (3) Data frame sent\nI0512 11:16:32.358750 2388 log.go:172] (0xc000ba9130) Data frame received for 3\nI0512 11:16:32.358769 2388 log.go:172] (0xc00044adc0) (3) Data frame handling\nI0512 11:16:32.358781 2388 log.go:172] (0xc00044adc0) (3) Data frame sent\nI0512 11:16:32.358800 2388 log.go:172] (0xc000ba9130) Data frame received for 5\nI0512 11:16:32.358812 2388 log.go:172] (0xc0006766e0) (5) Data frame handling\nI0512 11:16:32.358825 2388 log.go:172] (0xc0006766e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.148.218:80/\nI0512 11:16:32.365271 2388 log.go:172] (0xc000ba9130) Data frame received for 3\nI0512 11:16:32.365300 2388 log.go:172] (0xc00044adc0) (3) Data frame handling\nI0512 11:16:32.365319 2388 log.go:172] (0xc00044adc0) (3) Data frame sent\nI0512 11:16:32.365794 2388 log.go:172] (0xc000ba9130) Data frame received for 5\nI0512 11:16:32.365821 2388 log.go:172] (0xc0006766e0) (5) Data frame handling\nI0512 11:16:32.365833 2388 log.go:172] (0xc0006766e0) (5) Data 
frame sent\nI0512 11:16:32.365838 2388 log.go:172] (0xc000ba9130) Data frame received for 5\nI0512 11:16:32.365843 2388 log.go:172] (0xc0006766e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.148.218:80/\nI0512 11:16:32.365862 2388 log.go:172] (0xc0006766e0) (5) Data frame sent\nI0512 11:16:32.365927 2388 log.go:172] (0xc000ba9130) Data frame received for 3\nI0512 11:16:32.365941 2388 log.go:172] (0xc00044adc0) (3) Data frame handling\nI0512 11:16:32.365954 2388 log.go:172] (0xc00044adc0) (3) Data frame sent\nI0512 11:16:32.370057 2388 log.go:172] (0xc000ba9130) Data frame received for 3\nI0512 11:16:32.370071 2388 log.go:172] (0xc00044adc0) (3) Data frame handling\nI0512 11:16:32.370079 2388 log.go:172] (0xc00044adc0) (3) Data frame sent\nI0512 11:16:32.370521 2388 log.go:172] (0xc000ba9130) Data frame received for 3\nI0512 11:16:32.370540 2388 log.go:172] (0xc00044adc0) (3) Data frame handling\nI0512 11:16:32.370549 2388 log.go:172] (0xc00044adc0) (3) Data frame sent\nI0512 11:16:32.370562 2388 log.go:172] (0xc000ba9130) Data frame received for 5\nI0512 11:16:32.370568 2388 log.go:172] (0xc0006766e0) (5) Data frame handling\nI0512 11:16:32.370582 2388 log.go:172] (0xc0006766e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.148.218:80/\nI0512 11:16:32.373906 2388 log.go:172] (0xc000ba9130) Data frame received for 3\nI0512 11:16:32.373928 2388 log.go:172] (0xc00044adc0) (3) Data frame handling\nI0512 11:16:32.373940 2388 log.go:172] (0xc00044adc0) (3) Data frame sent\nI0512 11:16:32.374374 2388 log.go:172] (0xc000ba9130) Data frame received for 5\nI0512 11:16:32.374400 2388 log.go:172] (0xc0006766e0) (5) Data frame handling\nI0512 11:16:32.374413 2388 log.go:172] (0xc0006766e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.148.218:80/\nI0512 11:16:32.374431 2388 log.go:172] (0xc000ba9130) Data frame received for 3\nI0512 11:16:32.374443 2388 log.go:172] (0xc00044adc0) (3) Data frame handling\nI0512 11:16:32.374455 2388 log.go:172] (0xc00044adc0) (3) Data frame sent\nI0512 11:16:32.378959 2388 log.go:172] (0xc000ba9130) Data frame received for 5\nI0512 11:16:32.378996 2388 log.go:172] (0xc0006766e0) (5) Data frame handling\nI0512 11:16:32.379012 2388 log.go:172] (0xc0006766e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.148.218:80/\nI0512 11:16:32.379029 2388 log.go:172] (0xc000ba9130) Data frame received for 3\nI0512 11:16:32.379049 2388 log.go:172] (0xc00044adc0) (3) Data frame handling\nI0512 11:16:32.379068 2388 log.go:172] (0xc00044adc0) (3) Data frame sent\nI0512 11:16:32.379136 2388 log.go:172] (0xc000ba9130) Data frame received for 3\nI0512 11:16:32.379217 2388 log.go:172] (0xc00044adc0) (3) Data frame handling\nI0512 11:16:32.379300 2388 log.go:172] (0xc00044adc0) (3) Data frame sent\nI0512 11:16:32.383023 2388 log.go:172] (0xc000ba9130) Data frame received for 3\nI0512 11:16:32.383052 2388 log.go:172] (0xc00044adc0) (3) Data frame handling\nI0512 11:16:32.383070 2388 log.go:172] (0xc00044adc0) (3) Data frame sent\nI0512 11:16:32.383296 2388 log.go:172] (0xc000ba9130) Data frame received for 5\nI0512 11:16:32.383322 2388 log.go:172] (0xc000ba9130) Data frame received for 3\nI0512 11:16:32.383361 2388 log.go:172] (0xc00044adc0) (3) Data frame handling\nI0512 11:16:32.383377 2388 log.go:172] (0xc00044adc0) (3) Data frame sent\nI0512 11:16:32.383399 2388 log.go:172] (0xc0006766e0) (5) Data frame handling\nI0512 11:16:32.383413 2388 log.go:172] (0xc0006766e0) (5) 
Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.148.218:80/\nI0512 11:16:32.387215 2388 log.go:172] (0xc000ba9130) Data frame received for 3\nI0512 11:16:32.387232 2388 log.go:172] (0xc00044adc0) (3) Data frame handling\nI0512 11:16:32.387248 2388 log.go:172] (0xc00044adc0) (3) Data frame sent\nI0512 11:16:32.387694 2388 log.go:172] (0xc000ba9130) Data frame received for 5\nI0512 11:16:32.387711 2388 log.go:172] (0xc0006766e0) (5) Data frame handling\nI0512 11:16:32.387719 2388 log.go:172] (0xc0006766e0) (5) Data frame sent\nI0512 11:16:32.387726 2388 log.go:172] (0xc000ba9130) Data frame received for 5\nI0512 11:16:32.387732 2388 log.go:172] (0xc0006766e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.148.218:80/\nI0512 11:16:32.387745 2388 log.go:172] (0xc000ba9130) Data frame received for 3\nI0512 11:16:32.387757 2388 log.go:172] (0xc00044adc0) (3) Data frame handling\nI0512 11:16:32.387765 2388 log.go:172] (0xc00044adc0) (3) Data frame sent\nI0512 11:16:32.387784 2388 log.go:172] (0xc0006766e0) (5) Data frame sent\nI0512 11:16:32.391212 2388 log.go:172] (0xc000ba9130) Data frame received for 3\nI0512 11:16:32.391225 2388 log.go:172] (0xc00044adc0) (3) Data frame handling\nI0512 11:16:32.391236 2388 log.go:172] (0xc00044adc0) (3) Data frame sent\nI0512 11:16:32.391646 2388 log.go:172] (0xc000ba9130) Data frame received for 3\nI0512 11:16:32.391673 2388 log.go:172] (0xc00044adc0) (3) Data frame handling\nI0512 11:16:32.391695 2388 log.go:172] (0xc00044adc0) (3) Data frame sent\nI0512 11:16:32.391720 2388 log.go:172] (0xc000ba9130) Data frame received for 5\nI0512 11:16:32.391741 2388 log.go:172] (0xc0006766e0) (5) Data frame handling\nI0512 11:16:32.391766 2388 log.go:172] (0xc0006766e0) (5) Data frame sent\nI0512 11:16:32.391786 2388 log.go:172] (0xc000ba9130) Data frame received for 5\nI0512 11:16:32.391804 2388 log.go:172] (0xc0006766e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.148.218:80/\nI0512 11:16:32.391836 2388 log.go:172] (0xc0006766e0) (5) Data frame sent\nI0512 11:16:32.395552 2388 log.go:172] (0xc000ba9130) Data frame received for 3\nI0512 11:16:32.395574 2388 log.go:172] (0xc00044adc0) (3) Data frame handling\nI0512 11:16:32.395596 2388 log.go:172] (0xc00044adc0) (3) Data frame sent\nI0512 11:16:32.395936 2388 log.go:172] (0xc000ba9130) Data frame received for 5\nI0512 11:16:32.395962 2388 log.go:172] (0xc0006766e0) (5) Data frame handling\nI0512 11:16:32.395975 2388 log.go:172] (0xc0006766e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.148.218:80/\nI0512 11:16:32.396191 2388 log.go:172] (0xc000ba9130) Data frame received for 3\nI0512 11:16:32.396209 2388 log.go:172] (0xc00044adc0) (3) Data frame handling\nI0512 11:16:32.396228 2388 log.go:172] (0xc00044adc0) (3) Data frame sent\nI0512 11:16:32.400443 2388 log.go:172] (0xc000ba9130) Data frame received for 3\nI0512 11:16:32.400459 2388 log.go:172] (0xc00044adc0) (3) Data frame handling\nI0512 11:16:32.400466 2388 log.go:172] (0xc00044adc0) (3) Data frame sent\nI0512 11:16:32.401048 2388 log.go:172] (0xc000ba9130) Data frame received for 5\nI0512 11:16:32.401082 2388 log.go:172] (0xc0006766e0) (5) Data frame handling\nI0512 11:16:32.401302 2388 log.go:172] (0xc0006766e0) (5) Data frame sent\nI0512 11:16:32.401327 2388 log.go:172] (0xc000ba9130) Data frame received for 5\nI0512 11:16:32.401344 2388 log.go:172] (0xc0006766e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.99.148.218:80/\nI0512 11:16:32.401380 2388 log.go:172] (0xc0006766e0) (5) Data frame sent\nI0512 11:16:32.401482 2388 log.go:172] (0xc000ba9130) Data frame received for 3\nI0512 11:16:32.401505 2388 log.go:172] (0xc00044adc0) (3) Data frame handling\nI0512 11:16:32.401524 2388 log.go:172] (0xc00044adc0) (3) Data frame sent\nI0512 11:16:32.406539 2388 log.go:172] (0xc000ba9130) Data frame received for 3\nI0512 11:16:32.406576 2388 log.go:172] (0xc00044adc0) (3) Data frame handling\nI0512 11:16:32.406615 2388 log.go:172] (0xc00044adc0) (3) Data frame sent\nI0512 11:16:32.406939 2388 log.go:172] (0xc000ba9130) Data frame received for 3\nI0512 11:16:32.406968 2388 log.go:172] (0xc000ba9130) Data frame received for 5\nI0512 11:16:32.407003 2388 log.go:172] (0xc0006766e0) (5) Data frame handling\nI0512 11:16:32.407025 2388 log.go:172] (0xc0006766e0) (5) Data frame sent\nI0512 11:16:32.407053 2388 log.go:172] (0xc00044adc0) (3) Data frame handling\nI0512 11:16:32.407070 2388 log.go:172] (0xc00044adc0) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.148.218:80/\nI0512 11:16:32.412966 2388 log.go:172] (0xc000ba9130) Data frame received for 3\nI0512 11:16:32.412989 2388 log.go:172] (0xc00044adc0) (3) Data frame handling\nI0512 11:16:32.413010 2388 log.go:172] (0xc00044adc0) (3) Data frame sent\nI0512 11:16:32.414445 2388 log.go:172] (0xc000ba9130) Data frame received for 5\nI0512 11:16:32.414475 2388 log.go:172] (0xc0006766e0) (5) Data frame handling\nI0512 11:16:32.414484 2388 log.go:172] (0xc0006766e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.148.218:80/\nI0512 11:16:32.414494 2388 log.go:172] (0xc000ba9130) Data frame received for 3\nI0512 11:16:32.414500 2388 log.go:172] (0xc00044adc0) (3) Data frame handling\nI0512 11:16:32.414505 2388 log.go:172] (0xc00044adc0) (3) Data frame sent\nI0512 11:16:32.419406 2388 log.go:172] (0xc000ba9130) Data frame received for 3\nI0512 11:16:32.419425 2388 log.go:172] (0xc00044adc0) (3) Data frame handling\nI0512 11:16:32.419443 2388 log.go:172] (0xc00044adc0) (3) Data frame sent\nI0512 11:16:32.420132 2388 log.go:172] (0xc000ba9130) Data frame received for 3\nI0512 11:16:32.420155 2388 log.go:172] (0xc00044adc0) (3) Data frame handling\nI0512 11:16:32.420167 2388 log.go:172] (0xc00044adc0) (3) Data frame sent\nI0512 11:16:32.420188 2388 log.go:172] (0xc000ba9130) Data frame received for 5\nI0512 11:16:32.420200 2388 log.go:172] (0xc0006766e0) (5) Data frame handling\nI0512 11:16:32.420209 2388 log.go:172] (0xc0006766e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.148.218:80/\nI0512 11:16:32.423859 2388 log.go:172] (0xc000ba9130) Data frame received for 3\nI0512 11:16:32.423882 2388 log.go:172] (0xc00044adc0) (3) Data frame handling\nI0512 11:16:32.423900 2388 log.go:172] (0xc00044adc0) (3) Data frame sent\nI0512 11:16:32.424270 2388 log.go:172] (0xc000ba9130) Data frame received for 3\nI0512 11:16:32.424289 2388 log.go:172] (0xc00044adc0) (3) Data frame handling\nI0512 11:16:32.424298 2388 log.go:172] (0xc00044adc0) (3) Data frame sent\nI0512 11:16:32.424321 2388 log.go:172] (0xc000ba9130) Data frame received for 5\nI0512 11:16:32.424343 2388 log.go:172] (0xc0006766e0) (5) Data frame handling\nI0512 11:16:32.424354 2388 log.go:172] (0xc0006766e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.148.218:80/\nI0512 11:16:32.428457 2388 log.go:172] (0xc000ba9130) Data frame received for 3\nI0512 11:16:32.428477 2388 log.go:172] 
(0xc00044adc0) (3) Data frame handling\nI0512 11:16:32.428493 2388 log.go:172] (0xc00044adc0) (3) Data frame sent\nI0512 11:16:32.428871 2388 log.go:172] (0xc000ba9130) Data frame received for 5\nI0512 11:16:32.428882 2388 log.go:172] (0xc0006766e0) (5) Data frame handling\nI0512 11:16:32.428889 2388 log.go:172] (0xc0006766e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.148.218:80/\nI0512 11:16:32.429045 2388 log.go:172] (0xc000ba9130) Data frame received for 3\nI0512 11:16:32.429064 2388 log.go:172] (0xc00044adc0) (3) Data frame handling\nI0512 11:16:32.429082 2388 log.go:172] (0xc00044adc0) (3) Data frame sent\nI0512 11:16:32.433865 2388 log.go:172] (0xc000ba9130) Data frame received for 3\nI0512 11:16:32.433883 2388 log.go:172] (0xc00044adc0) (3) Data frame handling\nI0512 11:16:32.433899 2388 log.go:172] (0xc00044adc0) (3) Data frame sent\nI0512 11:16:32.434429 2388 log.go:172] (0xc000ba9130) Data frame received for 5\nI0512 11:16:32.434443 2388 log.go:172] (0xc0006766e0) (5) Data frame handling\nI0512 11:16:32.434648 2388 log.go:172] (0xc000ba9130) Data frame received for 3\nI0512 11:16:32.434671 2388 log.go:172] (0xc00044adc0) (3) Data frame handling\nI0512 11:16:32.436208 2388 log.go:172] (0xc000ba9130) Data frame received for 1\nI0512 11:16:32.436227 2388 log.go:172] (0xc000b403c0) (1) Data frame handling\nI0512 11:16:32.436243 2388 log.go:172] (0xc000b403c0) (1) Data frame sent\nI0512 11:16:32.436286 2388 log.go:172] (0xc000ba9130) (0xc000b403c0) Stream removed, broadcasting: 1\nI0512 11:16:32.436311 2388 log.go:172] (0xc000ba9130) Go away received\nI0512 11:16:32.436659 2388 log.go:172] (0xc000ba9130) (0xc000b403c0) Stream removed, broadcasting: 1\nI0512 11:16:32.436677 2388 log.go:172] (0xc000ba9130) (0xc00044adc0) Stream removed, broadcasting: 3\nI0512 11:16:32.436688 2388 log.go:172] (0xc000ba9130) (0xc0006766e0) Stream removed, broadcasting: 5\n" May 12 11:16:32.443: INFO: stdout: "\naffinity-clusterip-timeout-v6w6n\naffinity-clusterip-timeout-v6w6n\naffinity-clusterip-timeout-v6w6n\naffinity-clusterip-timeout-v6w6n\naffinity-clusterip-timeout-v6w6n\naffinity-clusterip-timeout-v6w6n\naffinity-clusterip-timeout-v6w6n\naffinity-clusterip-timeout-v6w6n\naffinity-clusterip-timeout-v6w6n\naffinity-clusterip-timeout-v6w6n\naffinity-clusterip-timeout-v6w6n\naffinity-clusterip-timeout-v6w6n\naffinity-clusterip-timeout-v6w6n\naffinity-clusterip-timeout-v6w6n\naffinity-clusterip-timeout-v6w6n\naffinity-clusterip-timeout-v6w6n" May 12 11:16:32.443: INFO: Received response from host: May 12 11:16:32.443: INFO: Received response from host: affinity-clusterip-timeout-v6w6n May 12 11:16:32.443: INFO: Received response from host: affinity-clusterip-timeout-v6w6n May 12 11:16:32.443: INFO: Received response from host: affinity-clusterip-timeout-v6w6n May 12 11:16:32.443: INFO: Received response from host: affinity-clusterip-timeout-v6w6n May 12 11:16:32.443: INFO: Received response from host: affinity-clusterip-timeout-v6w6n May 12 11:16:32.443: INFO: Received response from host: affinity-clusterip-timeout-v6w6n May 12 11:16:32.443: INFO: Received response from host: affinity-clusterip-timeout-v6w6n May 12 11:16:32.443: INFO: Received response from host: affinity-clusterip-timeout-v6w6n May 12 11:16:32.443: INFO: Received response from host: affinity-clusterip-timeout-v6w6n May 12 11:16:32.443: INFO: Received response from host: affinity-clusterip-timeout-v6w6n May 12 11:16:32.443: INFO: Received response from host: affinity-clusterip-timeout-v6w6n May 12 
11:16:32.443: INFO: Received response from host: affinity-clusterip-timeout-v6w6n May 12 11:16:32.443: INFO: Received response from host: affinity-clusterip-timeout-v6w6n May 12 11:16:32.443: INFO: Received response from host: affinity-clusterip-timeout-v6w6n May 12 11:16:32.443: INFO: Received response from host: affinity-clusterip-timeout-v6w6n May 12 11:16:32.443: INFO: Received response from host: affinity-clusterip-timeout-v6w6n May 12 11:16:32.443: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-321 execpod-affinity9pjc4 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.99.148.218:80/' May 12 11:16:32.942: INFO: stderr: "I0512 11:16:32.852058 2408 log.go:172] (0xc000c23550) (0xc0006f05a0) Create stream\nI0512 11:16:32.852113 2408 log.go:172] (0xc000c23550) (0xc0006f05a0) Stream added, broadcasting: 1\nI0512 11:16:32.854426 2408 log.go:172] (0xc000c23550) Reply frame received for 1\nI0512 11:16:32.854457 2408 log.go:172] (0xc000c23550) (0xc000223a40) Create stream\nI0512 11:16:32.854467 2408 log.go:172] (0xc000c23550) (0xc000223a40) Stream added, broadcasting: 3\nI0512 11:16:32.855264 2408 log.go:172] (0xc000c23550) Reply frame received for 3\nI0512 11:16:32.855288 2408 log.go:172] (0xc000c23550) (0xc0006f0f00) Create stream\nI0512 11:16:32.855294 2408 log.go:172] (0xc000c23550) (0xc0006f0f00) Stream added, broadcasting: 5\nI0512 11:16:32.855938 2408 log.go:172] (0xc000c23550) Reply frame received for 5\nI0512 11:16:32.930853 2408 log.go:172] (0xc000c23550) Data frame received for 5\nI0512 11:16:32.930882 2408 log.go:172] (0xc0006f0f00) (5) Data frame handling\nI0512 11:16:32.930899 2408 log.go:172] (0xc0006f0f00) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.99.148.218:80/\nI0512 11:16:32.933491 2408 log.go:172] (0xc000c23550) Data frame received for 3\nI0512 11:16:32.933512 2408 log.go:172] (0xc000223a40) (3) Data frame handling\nI0512 11:16:32.933529 2408 log.go:172] (0xc000223a40) (3) Data frame sent\nI0512 11:16:32.933818 2408 log.go:172] (0xc000c23550) Data frame received for 5\nI0512 11:16:32.933832 2408 log.go:172] (0xc0006f0f00) (5) Data frame handling\nI0512 11:16:32.934132 2408 log.go:172] (0xc000c23550) Data frame received for 3\nI0512 11:16:32.934143 2408 log.go:172] (0xc000223a40) (3) Data frame handling\nI0512 11:16:32.936445 2408 log.go:172] (0xc000c23550) Data frame received for 1\nI0512 11:16:32.936478 2408 log.go:172] (0xc0006f05a0) (1) Data frame handling\nI0512 11:16:32.936496 2408 log.go:172] (0xc0006f05a0) (1) Data frame sent\nI0512 11:16:32.936509 2408 log.go:172] (0xc000c23550) (0xc0006f05a0) Stream removed, broadcasting: 1\nI0512 11:16:32.936520 2408 log.go:172] (0xc000c23550) Go away received\nI0512 11:16:32.936927 2408 log.go:172] (0xc000c23550) (0xc0006f05a0) Stream removed, broadcasting: 1\nI0512 11:16:32.936952 2408 log.go:172] (0xc000c23550) (0xc000223a40) Stream removed, broadcasting: 3\nI0512 11:16:32.936964 2408 log.go:172] (0xc000c23550) (0xc0006f0f00) Stream removed, broadcasting: 5\n" May 12 11:16:32.942: INFO: stdout: "affinity-clusterip-timeout-v6w6n" May 12 11:16:47.942: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-321 execpod-affinity9pjc4 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.99.148.218:80/' May 12 11:16:48.170: INFO: stderr: "I0512 11:16:48.083307 2428 log.go:172] (0xc000860000) (0xc0008e2780) Create stream\nI0512 
11:16:48.083368 2428 log.go:172] (0xc000860000) (0xc0008e2780) Stream added, broadcasting: 1\nI0512 11:16:48.085043 2428 log.go:172] (0xc000860000) Reply frame received for 1\nI0512 11:16:48.085082 2428 log.go:172] (0xc000860000) (0xc0008d6be0) Create stream\nI0512 11:16:48.085095 2428 log.go:172] (0xc000860000) (0xc0008d6be0) Stream added, broadcasting: 3\nI0512 11:16:48.085872 2428 log.go:172] (0xc000860000) Reply frame received for 3\nI0512 11:16:48.085890 2428 log.go:172] (0xc000860000) (0xc0008add60) Create stream\nI0512 11:16:48.085897 2428 log.go:172] (0xc000860000) (0xc0008add60) Stream added, broadcasting: 5\nI0512 11:16:48.086479 2428 log.go:172] (0xc000860000) Reply frame received for 5\nI0512 11:16:48.159680 2428 log.go:172] (0xc000860000) Data frame received for 5\nI0512 11:16:48.159719 2428 log.go:172] (0xc0008add60) (5) Data frame handling\nI0512 11:16:48.159740 2428 log.go:172] (0xc0008add60) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.99.148.218:80/\nI0512 11:16:48.163042 2428 log.go:172] (0xc000860000) Data frame received for 3\nI0512 11:16:48.163065 2428 log.go:172] (0xc0008d6be0) (3) Data frame handling\nI0512 11:16:48.163090 2428 log.go:172] (0xc0008d6be0) (3) Data frame sent\nI0512 11:16:48.163715 2428 log.go:172] (0xc000860000) Data frame received for 3\nI0512 11:16:48.163739 2428 log.go:172] (0xc0008d6be0) (3) Data frame handling\nI0512 11:16:48.163840 2428 log.go:172] (0xc000860000) Data frame received for 5\nI0512 11:16:48.163859 2428 log.go:172] (0xc0008add60) (5) Data frame handling\nI0512 11:16:48.165879 2428 log.go:172] (0xc000860000) Data frame received for 1\nI0512 11:16:48.165904 2428 log.go:172] (0xc0008e2780) (1) Data frame handling\nI0512 11:16:48.165918 2428 log.go:172] (0xc0008e2780) (1) Data frame sent\nI0512 11:16:48.165929 2428 log.go:172] (0xc000860000) (0xc0008e2780) Stream removed, broadcasting: 1\nI0512 11:16:48.165941 2428 log.go:172] (0xc000860000) Go away received\nI0512 11:16:48.166386 2428 log.go:172] (0xc000860000) (0xc0008e2780) Stream removed, broadcasting: 1\nI0512 11:16:48.166445 2428 log.go:172] (0xc000860000) (0xc0008d6be0) Stream removed, broadcasting: 3\nI0512 11:16:48.166467 2428 log.go:172] (0xc000860000) (0xc0008add60) Stream removed, broadcasting: 5\n" May 12 11:16:48.170: INFO: stdout: "affinity-clusterip-timeout-v6w6n" May 12 11:17:03.171: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-321 execpod-affinity9pjc4 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.99.148.218:80/' May 12 11:17:03.439: INFO: stderr: "I0512 11:17:03.321345 2447 log.go:172] (0xc000b57970) (0xc000836640) Create stream\nI0512 11:17:03.321579 2447 log.go:172] (0xc000b57970) (0xc000836640) Stream added, broadcasting: 1\nI0512 11:17:03.325273 2447 log.go:172] (0xc000b57970) Reply frame received for 1\nI0512 11:17:03.325333 2447 log.go:172] (0xc000b57970) (0xc0004f97c0) Create stream\nI0512 11:17:03.325355 2447 log.go:172] (0xc000b57970) (0xc0004f97c0) Stream added, broadcasting: 3\nI0512 11:17:03.326332 2447 log.go:172] (0xc000b57970) Reply frame received for 3\nI0512 11:17:03.326367 2447 log.go:172] (0xc000b57970) (0xc000836fa0) Create stream\nI0512 11:17:03.326386 2447 log.go:172] (0xc000b57970) (0xc000836fa0) Stream added, broadcasting: 5\nI0512 11:17:03.327203 2447 log.go:172] (0xc000b57970) Reply frame received for 5\nI0512 11:17:03.428710 2447 log.go:172] (0xc000b57970) Data frame received for 5\nI0512 11:17:03.428747 2447 log.go:172] 
(0xc000836fa0) (5) Data frame handling\nI0512 11:17:03.428777 2447 log.go:172] (0xc000836fa0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.99.148.218:80/\nI0512 11:17:03.431544 2447 log.go:172] (0xc000b57970) Data frame received for 3\nI0512 11:17:03.431559 2447 log.go:172] (0xc0004f97c0) (3) Data frame handling\nI0512 11:17:03.431567 2447 log.go:172] (0xc0004f97c0) (3) Data frame sent\nI0512 11:17:03.432424 2447 log.go:172] (0xc000b57970) Data frame received for 5\nI0512 11:17:03.432480 2447 log.go:172] (0xc000836fa0) (5) Data frame handling\nI0512 11:17:03.432521 2447 log.go:172] (0xc000b57970) Data frame received for 3\nI0512 11:17:03.432554 2447 log.go:172] (0xc0004f97c0) (3) Data frame handling\nI0512 11:17:03.434019 2447 log.go:172] (0xc000b57970) Data frame received for 1\nI0512 11:17:03.434048 2447 log.go:172] (0xc000836640) (1) Data frame handling\nI0512 11:17:03.434061 2447 log.go:172] (0xc000836640) (1) Data frame sent\nI0512 11:17:03.434071 2447 log.go:172] (0xc000b57970) (0xc000836640) Stream removed, broadcasting: 1\nI0512 11:17:03.434085 2447 log.go:172] (0xc000b57970) Go away received\nI0512 11:17:03.434424 2447 log.go:172] (0xc000b57970) (0xc000836640) Stream removed, broadcasting: 1\nI0512 11:17:03.434446 2447 log.go:172] (0xc000b57970) (0xc0004f97c0) Stream removed, broadcasting: 3\nI0512 11:17:03.434454 2447 log.go:172] (0xc000b57970) (0xc000836fa0) Stream removed, broadcasting: 5\n" May 12 11:17:03.439: INFO: stdout: "affinity-clusterip-timeout-v6w6n" May 12 11:17:18.439: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-321 execpod-affinity9pjc4 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.99.148.218:80/' May 12 11:17:18.967: INFO: stderr: "I0512 11:17:18.879872 2467 log.go:172] (0xc0009449a0) (0xc0006d66e0) Create stream\nI0512 11:17:18.879943 2467 log.go:172] (0xc0009449a0) (0xc0006d66e0) Stream added, broadcasting: 1\nI0512 11:17:18.886761 2467 log.go:172] (0xc0009449a0) Reply frame received for 1\nI0512 11:17:18.886796 2467 log.go:172] (0xc0009449a0) (0xc0006f6000) Create stream\nI0512 11:17:18.886815 2467 log.go:172] (0xc0009449a0) (0xc0006f6000) Stream added, broadcasting: 3\nI0512 11:17:18.887809 2467 log.go:172] (0xc0009449a0) Reply frame received for 3\nI0512 11:17:18.887860 2467 log.go:172] (0xc0009449a0) (0xc000483ea0) Create stream\nI0512 11:17:18.887879 2467 log.go:172] (0xc0009449a0) (0xc000483ea0) Stream added, broadcasting: 5\nI0512 11:17:18.888726 2467 log.go:172] (0xc0009449a0) Reply frame received for 5\nI0512 11:17:18.958353 2467 log.go:172] (0xc0009449a0) Data frame received for 5\nI0512 11:17:18.958373 2467 log.go:172] (0xc000483ea0) (5) Data frame handling\nI0512 11:17:18.958385 2467 log.go:172] (0xc000483ea0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.99.148.218:80/\nI0512 11:17:18.960657 2467 log.go:172] (0xc0009449a0) Data frame received for 3\nI0512 11:17:18.960677 2467 log.go:172] (0xc0006f6000) (3) Data frame handling\nI0512 11:17:18.960701 2467 log.go:172] (0xc0006f6000) (3) Data frame sent\nI0512 11:17:18.961344 2467 log.go:172] (0xc0009449a0) Data frame received for 5\nI0512 11:17:18.961364 2467 log.go:172] (0xc000483ea0) (5) Data frame handling\nI0512 11:17:18.961515 2467 log.go:172] (0xc0009449a0) Data frame received for 3\nI0512 11:17:18.961529 2467 log.go:172] (0xc0006f6000) (3) Data frame handling\nI0512 11:17:18.962916 2467 log.go:172] (0xc0009449a0) Data frame received for 1\nI0512 11:17:18.962933 
2467 log.go:172] (0xc0006d66e0) (1) Data frame handling\nI0512 11:17:18.962946 2467 log.go:172] (0xc0006d66e0) (1) Data frame sent\nI0512 11:17:18.962962 2467 log.go:172] (0xc0009449a0) (0xc0006d66e0) Stream removed, broadcasting: 1\nI0512 11:17:18.962980 2467 log.go:172] (0xc0009449a0) Go away received\nI0512 11:17:18.963372 2467 log.go:172] (0xc0009449a0) (0xc0006d66e0) Stream removed, broadcasting: 1\nI0512 11:17:18.963394 2467 log.go:172] (0xc0009449a0) (0xc0006f6000) Stream removed, broadcasting: 3\nI0512 11:17:18.963402 2467 log.go:172] (0xc0009449a0) (0xc000483ea0) Stream removed, broadcasting: 5\n" May 12 11:17:18.967: INFO: stdout: "affinity-clusterip-timeout-v6w6n" May 12 11:17:33.968: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-321 execpod-affinity9pjc4 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.99.148.218:80/' May 12 11:17:34.156: INFO: stderr: "I0512 11:17:34.082887 2487 log.go:172] (0xc00096d1e0) (0xc00064df40) Create stream\nI0512 11:17:34.082946 2487 log.go:172] (0xc00096d1e0) (0xc00064df40) Stream added, broadcasting: 1\nI0512 11:17:34.086953 2487 log.go:172] (0xc00096d1e0) Reply frame received for 1\nI0512 11:17:34.087002 2487 log.go:172] (0xc00096d1e0) (0xc000606aa0) Create stream\nI0512 11:17:34.087019 2487 log.go:172] (0xc00096d1e0) (0xc000606aa0) Stream added, broadcasting: 3\nI0512 11:17:34.088003 2487 log.go:172] (0xc00096d1e0) Reply frame received for 3\nI0512 11:17:34.088052 2487 log.go:172] (0xc00096d1e0) (0xc0004c65a0) Create stream\nI0512 11:17:34.088086 2487 log.go:172] (0xc00096d1e0) (0xc0004c65a0) Stream added, broadcasting: 5\nI0512 11:17:34.088950 2487 log.go:172] (0xc00096d1e0) Reply frame received for 5\nI0512 11:17:34.148535 2487 log.go:172] (0xc00096d1e0) Data frame received for 5\nI0512 11:17:34.148563 2487 log.go:172] (0xc0004c65a0) (5) Data frame handling\nI0512 11:17:34.148581 2487 log.go:172] (0xc0004c65a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.99.148.218:80/\nI0512 11:17:34.151739 2487 log.go:172] (0xc00096d1e0) Data frame received for 3\nI0512 11:17:34.151749 2487 log.go:172] (0xc000606aa0) (3) Data frame handling\nI0512 11:17:34.151763 2487 log.go:172] (0xc000606aa0) (3) Data frame sent\nI0512 11:17:34.152234 2487 log.go:172] (0xc00096d1e0) Data frame received for 5\nI0512 11:17:34.152247 2487 log.go:172] (0xc0004c65a0) (5) Data frame handling\nI0512 11:17:34.152283 2487 log.go:172] (0xc00096d1e0) Data frame received for 3\nI0512 11:17:34.152292 2487 log.go:172] (0xc000606aa0) (3) Data frame handling\nI0512 11:17:34.154050 2487 log.go:172] (0xc00096d1e0) Data frame received for 1\nI0512 11:17:34.154064 2487 log.go:172] (0xc00064df40) (1) Data frame handling\nI0512 11:17:34.154072 2487 log.go:172] (0xc00064df40) (1) Data frame sent\nI0512 11:17:34.154082 2487 log.go:172] (0xc00096d1e0) (0xc00064df40) Stream removed, broadcasting: 1\nI0512 11:17:34.154187 2487 log.go:172] (0xc00096d1e0) Go away received\nI0512 11:17:34.154376 2487 log.go:172] (0xc00096d1e0) (0xc00064df40) Stream removed, broadcasting: 1\nI0512 11:17:34.154393 2487 log.go:172] (0xc00096d1e0) (0xc000606aa0) Stream removed, broadcasting: 3\nI0512 11:17:34.154403 2487 log.go:172] (0xc00096d1e0) (0xc0004c65a0) Stream removed, broadcasting: 5\n" May 12 11:17:34.157: INFO: stdout: "affinity-clusterip-timeout-2r6fg" May 12 11:17:34.157: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-321, will 
wait for the garbage collector to delete the pods May 12 11:17:34.267: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 24.940059ms May 12 11:17:34.767: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 500.176311ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:17:46.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-321" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:110.284 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":128,"skipped":2086,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:17:46.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs May 12 11:17:46.566: INFO: Waiting up to 5m0s for pod "pod-57dac058-bc2b-4e5e-bcdd-cc7e39d676fd" in namespace "emptydir-2286" to be "Succeeded or Failed" May 12 11:17:46.579: INFO: Pod "pod-57dac058-bc2b-4e5e-bcdd-cc7e39d676fd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.643365ms May 12 11:17:48.834: INFO: Pod "pod-57dac058-bc2b-4e5e-bcdd-cc7e39d676fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.267661839s May 12 11:17:50.838: INFO: Pod "pod-57dac058-bc2b-4e5e-bcdd-cc7e39d676fd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.271927074s May 12 11:17:52.846: INFO: Pod "pod-57dac058-bc2b-4e5e-bcdd-cc7e39d676fd": Phase="Running", Reason="", readiness=true. Elapsed: 6.279280908s May 12 11:17:54.960: INFO: Pod "pod-57dac058-bc2b-4e5e-bcdd-cc7e39d676fd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.393638357s STEP: Saw pod success May 12 11:17:54.960: INFO: Pod "pod-57dac058-bc2b-4e5e-bcdd-cc7e39d676fd" satisfied condition "Succeeded or Failed" May 12 11:17:54.990: INFO: Trying to get logs from node latest-worker2 pod pod-57dac058-bc2b-4e5e-bcdd-cc7e39d676fd container test-container: STEP: delete the pod May 12 11:17:55.638: INFO: Waiting for pod pod-57dac058-bc2b-4e5e-bcdd-cc7e39d676fd to disappear May 12 11:17:55.744: INFO: Pod pod-57dac058-bc2b-4e5e-bcdd-cc7e39d676fd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:17:55.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2286" for this suite. • [SLOW TEST:10.420 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":129,"skipped":2137,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:17:56.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-117527a1-3ff2-4590-be6d-0c16bcb2c93d in namespace container-probe-2553 May 12 11:18:04.828: INFO: Started pod liveness-117527a1-3ff2-4590-be6d-0c16bcb2c93d in namespace container-probe-2553 STEP: checking the pod's current state and verifying that restartCount is present May 12 11:18:04.831: INFO: Initial restart count of pod liveness-117527a1-3ff2-4590-be6d-0c16bcb2c93d is 0 May 12 11:18:25.988: INFO: Restart count of pod container-probe-2553/liveness-117527a1-3ff2-4590-be6d-0c16bcb2c93d is now 1 (21.156960733s elapsed) May 12 11:18:46.244: INFO: Restart count of pod container-probe-2553/liveness-117527a1-3ff2-4590-be6d-0c16bcb2c93d is now 2 (41.413374169s elapsed) May 12 11:19:10.617: INFO: Restart count of pod container-probe-2553/liveness-117527a1-3ff2-4590-be6d-0c16bcb2c93d is now 3 (1m5.786506251s elapsed) May 12 11:19:30.191: INFO: Restart count of pod container-probe-2553/liveness-117527a1-3ff2-4590-be6d-0c16bcb2c93d is now 4 (1m25.360510278s elapsed) May 12 11:20:33.597: INFO: Restart count of pod container-probe-2553/liveness-117527a1-3ff2-4590-be6d-0c16bcb2c93d is now 5 (2m28.765893503s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:20:33.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2553" for this suite. • [SLOW TEST:157.047 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":288,"completed":130,"skipped":2142,"failed":0} SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:20:33.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 12 11:20:49.214: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 12 11:20:49.273: INFO: Pod pod-with-prestop-http-hook still exists May 12 11:20:51.273: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 12 11:20:51.310: INFO: Pod pod-with-prestop-http-hook still exists May 12 11:20:53.273: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 12 11:20:53.297: INFO: Pod pod-with-prestop-http-hook still exists May 12 11:20:55.273: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 12 11:20:55.285: INFO: Pod pod-with-prestop-http-hook still exists May 12 11:20:57.273: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 12 11:20:57.488: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:20:57.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6277" for this suite. 
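For reference, the preStop behavior exercised above hinges on a pod-level lifecycle hook: when the pod is deleted, the kubelet issues the configured HTTP GET before terminating the container, and the test then confirms the handler pod saw the request. A minimal sketch of such a pod follows; only the pod name appears in this run, while the image, port, path, and handler address are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: k8s.gcr.io/pause:3.2        # placeholder image, not taken from this run
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop      # hypothetical handler path
          port: 8080                   # hypothetical handler port
          host: 10.244.1.5             # hypothetical IP of the handler pod created in BeforeEach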
• [SLOW TEST:23.918 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":288,"completed":131,"skipped":2144,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:20:57.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 11:21:01.878: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 11:21:04.633: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879261, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879261, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879262, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879261, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 11:21:06.662: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879261, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879261, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879262, 
loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879261, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 11:21:09.727: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:21:10.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2412" for this suite. STEP: Destroying namespace "webhook-2412-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.454 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":288,"completed":132,"skipped":2162,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:21:11.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 12 11:21:12.322: INFO: Waiting up to 5m0s for pod "downwardapi-volume-26e37f31-89ff-4fec-aa8a-92c8cfbfcf69" in namespace "projected-7786" to be "Succeeded or Failed" May 12 11:21:12.525: INFO: Pod "downwardapi-volume-26e37f31-89ff-4fec-aa8a-92c8cfbfcf69": Phase="Pending", Reason="", readiness=false. Elapsed: 202.056277ms May 12 11:21:15.016: INFO: Pod "downwardapi-volume-26e37f31-89ff-4fec-aa8a-92c8cfbfcf69": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.693094105s May 12 11:21:17.033: INFO: Pod "downwardapi-volume-26e37f31-89ff-4fec-aa8a-92c8cfbfcf69": Phase="Pending", Reason="", readiness=false. Elapsed: 4.710512876s May 12 11:21:19.225: INFO: Pod "downwardapi-volume-26e37f31-89ff-4fec-aa8a-92c8cfbfcf69": Phase="Pending", Reason="", readiness=false. Elapsed: 6.902836904s May 12 11:21:21.502: INFO: Pod "downwardapi-volume-26e37f31-89ff-4fec-aa8a-92c8cfbfcf69": Phase="Running", Reason="", readiness=true. Elapsed: 9.17953715s May 12 11:21:23.506: INFO: Pod "downwardapi-volume-26e37f31-89ff-4fec-aa8a-92c8cfbfcf69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.183190457s STEP: Saw pod success May 12 11:21:23.506: INFO: Pod "downwardapi-volume-26e37f31-89ff-4fec-aa8a-92c8cfbfcf69" satisfied condition "Succeeded or Failed" May 12 11:21:23.508: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-26e37f31-89ff-4fec-aa8a-92c8cfbfcf69 container client-container: STEP: delete the pod May 12 11:21:24.129: INFO: Waiting for pod downwardapi-volume-26e37f31-89ff-4fec-aa8a-92c8cfbfcf69 to disappear May 12 11:21:24.158: INFO: Pod downwardapi-volume-26e37f31-89ff-4fec-aa8a-92c8cfbfcf69 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:21:24.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7786" for this suite. • [SLOW TEST:13.030 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":133,"skipped":2199,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:21:24.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD May 12 11:21:24.311: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:21:37.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4756" for this suite. 
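For reference, "mark a version not served" means flipping served: false on one entry of a multi-version CRD; the apiserver then drops that version's definition from the published OpenAPI spec while the other version stays intact. A sketch of the shape involved, with a hypothetical group, kind, and schema (none of these names come from this run):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-foos.example.com      # hypothetical <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: e2e-test-foos
    singular: e2e-test-foo
    kind: E2eTestFoo
  versions:
  - name: v1
    served: true                       # still published under /openapi/v2
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v2
    served: false                      # the flip under test: this definition disappears from the spec
    storage: false
    schema:
      openAPIV3Schema:
        type: object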
• [SLOW TEST:13.769 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":288,"completed":134,"skipped":2199,"failed":0} [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:21:37.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 12 11:21:45.222: INFO: Successfully updated pod "pod-update-activedeadlineseconds-01b7a099-2e83-4e2f-9964-b02ed5b6865a" May 12 11:21:45.222: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-01b7a099-2e83-4e2f-9964-b02ed5b6865a" in namespace "pods-7702" to be "terminated due to deadline exceeded" May 12 11:21:45.275: INFO: Pod "pod-update-activedeadlineseconds-01b7a099-2e83-4e2f-9964-b02ed5b6865a": Phase="Running", Reason="", readiness=true. Elapsed: 52.605147ms May 12 11:21:47.279: INFO: Pod "pod-update-activedeadlineseconds-01b7a099-2e83-4e2f-9964-b02ed5b6865a": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.056710404s May 12 11:21:47.279: INFO: Pod "pod-update-activedeadlineseconds-01b7a099-2e83-4e2f-9964-b02ed5b6865a" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:21:47.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7702" for this suite. 
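For reference, the update above shrinks the pod's activeDeadlineSeconds so the deadline expires within seconds, after which the kubelet fails the pod with Reason="DeadlineExceeded", exactly the phase transition logged at 11:21:47. A minimal pod carrying the field; the container name, image, command, and the 5-second value are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-update-activedeadlineseconds
spec:
  activeDeadlineSeconds: 5             # assumed value; once elapsed, the pod is failed with reason DeadlineExceeded
  containers:
  - name: main                         # hypothetical container name
    image: busybox:1.29                # placeholder image
    command: ["sh", "-c", "sleep 3600"]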
• [SLOW TEST:9.340 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":288,"completed":135,"skipped":2199,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:21:47.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 11:21:48.031: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 11:21:50.447: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879308, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879308, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879308, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879307, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 11:21:52.458: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879308, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879308, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879308, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879307, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is 
progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 11:21:55.722: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 12 11:21:55.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2200-crds.webhook.example.com via the AdmissionRegistration API May 12 11:21:56.502: INFO: Waiting for webhook configuration to be ready... STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:21:57.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7482" for this suite. STEP: Destroying namespace "webhook-7482-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.055 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":288,"completed":136,"skipped":2235,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:21:58.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 12 11:21:59.389: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:22:17.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2362" for this suite. 
• [SLOW TEST:19.046 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":288,"completed":137,"skipped":2247,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:22:17.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name secret-emptykey-test-1676664d-ba41-4191-a0cd-6704acf48672 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:22:18.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5038" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":288,"completed":138,"skipped":2252,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:22:18.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-4556/configmap-test-122c032b-b4d0-4c11-96ed-30413d2feac8 STEP: Creating a pod to test consume configMaps May 12 11:22:18.390: INFO: Waiting up to 5m0s for pod "pod-configmaps-ec789410-a623-4ce1-b7ca-55510368e1d1" in namespace "configmap-4556" to be "Succeeded or Failed" May 12 11:22:18.456: INFO: Pod "pod-configmaps-ec789410-a623-4ce1-b7ca-55510368e1d1": Phase="Pending", Reason="", readiness=false. Elapsed: 66.548603ms May 12 11:22:20.485: INFO: Pod "pod-configmaps-ec789410-a623-4ce1-b7ca-55510368e1d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095750506s May 12 11:22:22.818: INFO: Pod "pod-configmaps-ec789410-a623-4ce1-b7ca-55510368e1d1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.428663752s May 12 11:22:24.884: INFO: Pod "pod-configmaps-ec789410-a623-4ce1-b7ca-55510368e1d1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.494219572s May 12 11:22:27.597: INFO: Pod "pod-configmaps-ec789410-a623-4ce1-b7ca-55510368e1d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.207478781s STEP: Saw pod success May 12 11:22:27.597: INFO: Pod "pod-configmaps-ec789410-a623-4ce1-b7ca-55510368e1d1" satisfied condition "Succeeded or Failed" May 12 11:22:27.602: INFO: Trying to get logs from node latest-worker pod pod-configmaps-ec789410-a623-4ce1-b7ca-55510368e1d1 container env-test: STEP: delete the pod May 12 11:22:27.695: INFO: Waiting for pod pod-configmaps-ec789410-a623-4ce1-b7ca-55510368e1d1 to disappear May 12 11:22:27.922: INFO: Pod pod-configmaps-ec789410-a623-4ce1-b7ca-55510368e1d1 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:22:27.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4556" for this suite. • [SLOW TEST:10.734 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":288,"completed":139,"skipped":2264,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:22:28.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 12 11:22:30.504: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b3fdc63f-1221-4ca7-aca7-ea573810c4a0" in namespace "downward-api-1722" to be "Succeeded or Failed" May 12 11:22:30.632: INFO: Pod "downwardapi-volume-b3fdc63f-1221-4ca7-aca7-ea573810c4a0": Phase="Pending", Reason="", readiness=false. Elapsed: 128.101715ms May 12 11:22:32.951: INFO: Pod "downwardapi-volume-b3fdc63f-1221-4ca7-aca7-ea573810c4a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.447009723s May 12 11:22:35.328: INFO: Pod "downwardapi-volume-b3fdc63f-1221-4ca7-aca7-ea573810c4a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.823722054s May 12 11:22:37.453: INFO: Pod "downwardapi-volume-b3fdc63f-1221-4ca7-aca7-ea573810c4a0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.949048708s STEP: Saw pod success May 12 11:22:37.453: INFO: Pod "downwardapi-volume-b3fdc63f-1221-4ca7-aca7-ea573810c4a0" satisfied condition "Succeeded or Failed" May 12 11:22:37.455: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-b3fdc63f-1221-4ca7-aca7-ea573810c4a0 container client-container: STEP: delete the pod May 12 11:22:37.671: INFO: Waiting for pod downwardapi-volume-b3fdc63f-1221-4ca7-aca7-ea573810c4a0 to disappear May 12 11:22:37.676: INFO: Pod downwardapi-volume-b3fdc63f-1221-4ca7-aca7-ea573810c4a0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:22:37.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1722" for this suite. • [SLOW TEST:8.861 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":140,"skipped":2317,"failed":0} [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:22:37.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-fbd5aef6-3c47-4a91-922e-37aab16144e3 in namespace container-probe-8064 May 12 11:22:45.742: INFO: Started pod busybox-fbd5aef6-3c47-4a91-922e-37aab16144e3 in namespace container-probe-8064 STEP: checking the pod's current state and verifying that restartCount is present May 12 11:22:45.744: INFO: Initial restart count of pod busybox-fbd5aef6-3c47-4a91-922e-37aab16144e3 is 0 May 12 11:23:39.326: INFO: Restart count of pod container-probe-8064/busybox-fbd5aef6-3c47-4a91-922e-37aab16144e3 is now 1 (53.582105131s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:23:39.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8064" for this suite. 
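For reference, the single restart logged at 53s is driven by an exec liveness probe: the container creates /tmp/health, later deletes it, the `cat /tmp/health` probe starts failing, and the kubelet restarts the container. A comparable manifest with assumed timings, image, and thresholds (only the probe command comes from the test name):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness               # hypothetical name
spec:
  containers:
  - name: busybox
    image: busybox:1.29                # placeholder image
    command: ["sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]  # succeeds while the file exists, fails after removal
      initialDelaySeconds: 5           # assumed timings
      periodSeconds: 5
      failureThreshold: 1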
• [SLOW TEST:62.244 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":141,"skipped":2317,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:23:39.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 12 11:23:41.160: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 12 11:23:44.140: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879421, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879421, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879421, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879421, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 11:23:46.203: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879421, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879421, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879421, 
loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879421, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 11:23:48.262: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879421, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879421, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879421, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724879421, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 11:23:51.220: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 12 11:23:51.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:23:53.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-3833" for this suite. 
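For reference, converting a CR from v1 to v2 through a webhook is wired via spec.conversion on the CRD: the apiserver sends a ConversionReview to the service fronting the webhook pod deployed above. A sketch in the apiextensions.k8s.io/v1 shape; the group, kind, schemas, and webhook path are placeholders, while the service name and namespace match this run's log:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-crds.example.com      # hypothetical <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: e2e-test-crds
    kind: E2eTestCrd
  conversion:
    strategy: Webhook                  # delegate version conversion to the webhook
    webhook:
      conversionReviewVersions: ["v1"]
      clientConfig:
        service:
          namespace: crd-webhook-3833
          name: e2e-test-crd-conversion-webhook
          path: /crdconvert            # hypothetical path
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object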
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:15.920 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":288,"completed":142,"skipped":2345,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:23:55.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 12 11:23:56.421: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0ecb3bc0-caa9-4e7e-aec2-88b451051f39" in namespace "projected-9685" to be "Succeeded or Failed" May 12 11:23:56.595: INFO: Pod "downwardapi-volume-0ecb3bc0-caa9-4e7e-aec2-88b451051f39": Phase="Pending", Reason="", readiness=false. Elapsed: 174.062351ms May 12 11:23:58.681: INFO: Pod "downwardapi-volume-0ecb3bc0-caa9-4e7e-aec2-88b451051f39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.260247405s May 12 11:24:01.035: INFO: Pod "downwardapi-volume-0ecb3bc0-caa9-4e7e-aec2-88b451051f39": Phase="Pending", Reason="", readiness=false. Elapsed: 4.61407777s May 12 11:24:03.736: INFO: Pod "downwardapi-volume-0ecb3bc0-caa9-4e7e-aec2-88b451051f39": Phase="Pending", Reason="", readiness=false. Elapsed: 7.314472882s May 12 11:24:05.963: INFO: Pod "downwardapi-volume-0ecb3bc0-caa9-4e7e-aec2-88b451051f39": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.541897494s STEP: Saw pod success May 12 11:24:05.963: INFO: Pod "downwardapi-volume-0ecb3bc0-caa9-4e7e-aec2-88b451051f39" satisfied condition "Succeeded or Failed" May 12 11:24:05.966: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-0ecb3bc0-caa9-4e7e-aec2-88b451051f39 container client-container: STEP: delete the pod May 12 11:24:06.671: INFO: Waiting for pod downwardapi-volume-0ecb3bc0-caa9-4e7e-aec2-88b451051f39 to disappear May 12 11:24:07.124: INFO: Pod downwardapi-volume-0ecb3bc0-caa9-4e7e-aec2-88b451051f39 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:24:07.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9685" for this suite. • [SLOW TEST:11.676 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":143,"skipped":2377,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:24:07.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath May 12 11:24:16.260: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-2008 PodName:var-expansion-f3f13547-6475-4fd7-979a-9a92efd142ec ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 11:24:16.261: INFO: >>> kubeConfig: /root/.kube/config I0512 11:24:16.289582 7 log.go:172] (0xc002a72000) (0xc000322a00) Create stream I0512 11:24:16.289606 7 log.go:172] (0xc002a72000) (0xc000322a00) Stream added, broadcasting: 1 I0512 11:24:16.291261 7 log.go:172] (0xc002a72000) Reply frame received for 1 I0512 11:24:16.291300 7 log.go:172] (0xc002a72000) (0xc00131ac80) Create stream I0512 11:24:16.291319 7 log.go:172] (0xc002a72000) (0xc00131ac80) Stream added, broadcasting: 3 I0512 11:24:16.292183 7 log.go:172] (0xc002a72000) Reply frame received for 3 I0512 11:24:16.292201 7 log.go:172] (0xc002a72000) (0xc000323220) Create stream I0512 11:24:16.292213 7 log.go:172] (0xc002a72000) (0xc000323220) Stream added, broadcasting: 5 I0512 11:24:16.293038 7 log.go:172] (0xc002a72000) 
Reply frame received for 5 I0512 11:24:16.371627 7 log.go:172] (0xc002a72000) Data frame received for 5 I0512 11:24:16.371656 7 log.go:172] (0xc000323220) (5) Data frame handling I0512 11:24:16.371731 7 log.go:172] (0xc002a72000) Data frame received for 3 I0512 11:24:16.371782 7 log.go:172] (0xc00131ac80) (3) Data frame handling I0512 11:24:16.372887 7 log.go:172] (0xc002a72000) Data frame received for 1 I0512 11:24:16.372916 7 log.go:172] (0xc000322a00) (1) Data frame handling I0512 11:24:16.372940 7 log.go:172] (0xc000322a00) (1) Data frame sent I0512 11:24:16.372963 7 log.go:172] (0xc002a72000) (0xc000322a00) Stream removed, broadcasting: 1 I0512 11:24:16.373079 7 log.go:172] (0xc002a72000) (0xc000322a00) Stream removed, broadcasting: 1 I0512 11:24:16.373354 7 log.go:172] (0xc002a72000) (0xc00131ac80) Stream removed, broadcasting: 3 I0512 11:24:16.373434 7 log.go:172] (0xc002a72000) (0xc000323220) Stream removed, broadcasting: 5 STEP: test for file in mounted path I0512 11:24:16.373503 7 log.go:172] (0xc002a72000) Go away received May 12 11:24:16.377: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-2008 PodName:var-expansion-f3f13547-6475-4fd7-979a-9a92efd142ec ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 11:24:16.377: INFO: >>> kubeConfig: /root/.kube/config I0512 11:24:16.400823 7 log.go:172] (0xc002a9a2c0) (0xc00131bf40) Create stream I0512 11:24:16.400847 7 log.go:172] (0xc002a9a2c0) (0xc00131bf40) Stream added, broadcasting: 1 I0512 11:24:16.402674 7 log.go:172] (0xc002a9a2c0) Reply frame received for 1 I0512 11:24:16.402725 7 log.go:172] (0xc002a9a2c0) (0xc000d37ae0) Create stream I0512 11:24:16.402742 7 log.go:172] (0xc002a9a2c0) (0xc000d37ae0) Stream added, broadcasting: 3 I0512 11:24:16.403587 7 log.go:172] (0xc002a9a2c0) Reply frame received for 3 I0512 11:24:16.403626 7 log.go:172] (0xc002a9a2c0) (0xc000d37ea0) Create stream I0512 11:24:16.403632 7 log.go:172] (0xc002a9a2c0) (0xc000d37ea0) Stream added, broadcasting: 5 I0512 11:24:16.404429 7 log.go:172] (0xc002a9a2c0) Reply frame received for 5 I0512 11:24:16.464006 7 log.go:172] (0xc002a9a2c0) Data frame received for 5 I0512 11:24:16.464037 7 log.go:172] (0xc000d37ea0) (5) Data frame handling I0512 11:24:16.464055 7 log.go:172] (0xc002a9a2c0) Data frame received for 3 I0512 11:24:16.464063 7 log.go:172] (0xc000d37ae0) (3) Data frame handling I0512 11:24:16.465309 7 log.go:172] (0xc002a9a2c0) Data frame received for 1 I0512 11:24:16.465335 7 log.go:172] (0xc00131bf40) (1) Data frame handling I0512 11:24:16.465349 7 log.go:172] (0xc00131bf40) (1) Data frame sent I0512 11:24:16.465361 7 log.go:172] (0xc002a9a2c0) (0xc00131bf40) Stream removed, broadcasting: 1 I0512 11:24:16.465378 7 log.go:172] (0xc002a9a2c0) Go away received I0512 11:24:16.465522 7 log.go:172] (0xc002a9a2c0) (0xc00131bf40) Stream removed, broadcasting: 1 I0512 11:24:16.465542 7 log.go:172] (0xc002a9a2c0) (0xc000d37ae0) Stream removed, broadcasting: 3 I0512 11:24:16.465559 7 log.go:172] (0xc002a9a2c0) (0xc000d37ea0) Stream removed, broadcasting: 5 STEP: updating the annotation value May 12 11:24:17.040: INFO: Successfully updated pod "var-expansion-f3f13547-6475-4fd7-979a-9a92efd142ec" STEP: waiting for annotated pod running STEP: deleting the pod gracefully May 12 11:24:17.311: INFO: Deleting pod "var-expansion-f3f13547-6475-4fd7-979a-9a92efd142ec" in namespace "var-expansion-2008" May 12 11:24:17.316: INFO: Wait up to 5m0s for pod 
"var-expansion-f3f13547-6475-4fd7-979a-9a92efd142ec" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:24:58.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2008" for this suite. • [SLOW TEST:50.674 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":288,"completed":144,"skipped":2464,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:24:58.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:24:59.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7151" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":288,"completed":145,"skipped":2485,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:24:59.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:25:05.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6419" for this suite. • [SLOW TEST:6.460 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":288,"completed":146,"skipped":2493,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:25:05.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 12 11:25:06.043: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8a3842dd-27bc-48ed-8624-835dfd9ed724" in namespace "downward-api-4501" to be "Succeeded or Failed" May 12 11:25:06.056: INFO: Pod "downwardapi-volume-8a3842dd-27bc-48ed-8624-835dfd9ed724": Phase="Pending", Reason="", readiness=false. Elapsed: 13.001233ms May 12 11:25:08.060: INFO: Pod "downwardapi-volume-8a3842dd-27bc-48ed-8624-835dfd9ed724": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017111489s May 12 11:25:10.064: INFO: Pod "downwardapi-volume-8a3842dd-27bc-48ed-8624-835dfd9ed724": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.021045087s May 12 11:25:12.083: INFO: Pod "downwardapi-volume-8a3842dd-27bc-48ed-8624-835dfd9ed724": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040053366s STEP: Saw pod success May 12 11:25:12.083: INFO: Pod "downwardapi-volume-8a3842dd-27bc-48ed-8624-835dfd9ed724" satisfied condition "Succeeded or Failed" May 12 11:25:12.086: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-8a3842dd-27bc-48ed-8624-835dfd9ed724 container client-container: STEP: delete the pod May 12 11:25:12.235: INFO: Waiting for pod downwardapi-volume-8a3842dd-27bc-48ed-8624-835dfd9ed724 to disappear May 12 11:25:12.244: INFO: Pod downwardapi-volume-8a3842dd-27bc-48ed-8624-835dfd9ed724 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:25:12.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4501" for this suite. • [SLOW TEST:6.360 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":147,"skipped":2555,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:25:12.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-c08ba21c-cd62-4742-9399-52ce6f3d374c STEP: Creating a pod to test consume secrets May 12 11:25:12.838: INFO: Waiting up to 5m0s for pod "pod-secrets-b09abe69-14c6-4392-ac81-848e321bf317" in namespace "secrets-7034" to be "Succeeded or Failed" May 12 11:25:12.892: INFO: Pod "pod-secrets-b09abe69-14c6-4392-ac81-848e321bf317": Phase="Pending", Reason="", readiness=false. Elapsed: 54.515495ms May 12 11:25:15.564: INFO: Pod "pod-secrets-b09abe69-14c6-4392-ac81-848e321bf317": Phase="Pending", Reason="", readiness=false. Elapsed: 2.725669991s May 12 11:25:17.916: INFO: Pod "pod-secrets-b09abe69-14c6-4392-ac81-848e321bf317": Phase="Pending", Reason="", readiness=false. Elapsed: 5.078028477s May 12 11:25:20.346: INFO: Pod "pod-secrets-b09abe69-14c6-4392-ac81-848e321bf317": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 7.508167336s STEP: Saw pod success May 12 11:25:20.346: INFO: Pod "pod-secrets-b09abe69-14c6-4392-ac81-848e321bf317" satisfied condition "Succeeded or Failed" May 12 11:25:20.444: INFO: Trying to get logs from node latest-worker pod pod-secrets-b09abe69-14c6-4392-ac81-848e321bf317 container secret-env-test: STEP: delete the pod May 12 11:25:21.057: INFO: Waiting for pod pod-secrets-b09abe69-14c6-4392-ac81-848e321bf317 to disappear May 12 11:25:21.610: INFO: Pod pod-secrets-b09abe69-14c6-4392-ac81-848e321bf317 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:25:21.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7034" for this suite. • [SLOW TEST:9.375 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":288,"completed":148,"skipped":2572,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:25:21.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-cff3b2d9-bee2-40de-aa0d-5640bb583e3e STEP: Creating a pod to test consume secrets May 12 11:25:22.532: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-45e78bde-17f3-4a61-bd78-99c3c87fecfc" in namespace "projected-6485" to be "Succeeded or Failed" May 12 11:25:22.690: INFO: Pod "pod-projected-secrets-45e78bde-17f3-4a61-bd78-99c3c87fecfc": Phase="Pending", Reason="", readiness=false. Elapsed: 158.381068ms May 12 11:25:24.803: INFO: Pod "pod-projected-secrets-45e78bde-17f3-4a61-bd78-99c3c87fecfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.271228769s May 12 11:25:26.813: INFO: Pod "pod-projected-secrets-45e78bde-17f3-4a61-bd78-99c3c87fecfc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.281306576s May 12 11:25:28.826: INFO: Pod "pod-projected-secrets-45e78bde-17f3-4a61-bd78-99c3c87fecfc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.293398074s STEP: Saw pod success May 12 11:25:28.826: INFO: Pod "pod-projected-secrets-45e78bde-17f3-4a61-bd78-99c3c87fecfc" satisfied condition "Succeeded or Failed" May 12 11:25:28.828: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-45e78bde-17f3-4a61-bd78-99c3c87fecfc container projected-secret-volume-test: STEP: delete the pod May 12 11:25:28.864: INFO: Waiting for pod pod-projected-secrets-45e78bde-17f3-4a61-bd78-99c3c87fecfc to disappear May 12 11:25:28.874: INFO: Pod pod-projected-secrets-45e78bde-17f3-4a61-bd78-99c3c87fecfc no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:25:28.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6485" for this suite. • [SLOW TEST:7.250 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":149,"skipped":2638,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:25:28.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-909fc1e6-4e54-487c-bfbc-fc658d9024ca STEP: Creating a pod to test consume configMaps May 12 11:25:29.451: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7ea17011-0db7-4bd8-b118-655aae902e4e" in namespace "projected-6477" to be "Succeeded or Failed" May 12 11:25:29.479: INFO: Pod "pod-projected-configmaps-7ea17011-0db7-4bd8-b118-655aae902e4e": Phase="Pending", Reason="", readiness=false. Elapsed: 28.408721ms May 12 11:25:31.521: INFO: Pod "pod-projected-configmaps-7ea17011-0db7-4bd8-b118-655aae902e4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070171379s May 12 11:25:33.525: INFO: Pod "pod-projected-configmaps-7ea17011-0db7-4bd8-b118-655aae902e4e": Phase="Running", Reason="", readiness=true. Elapsed: 4.07418863s May 12 11:25:35.532: INFO: Pod "pod-projected-configmaps-7ea17011-0db7-4bd8-b118-655aae902e4e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.081603322s STEP: Saw pod success May 12 11:25:35.532: INFO: Pod "pod-projected-configmaps-7ea17011-0db7-4bd8-b118-655aae902e4e" satisfied condition "Succeeded or Failed" May 12 11:25:35.535: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-7ea17011-0db7-4bd8-b118-655aae902e4e container projected-configmap-volume-test: STEP: delete the pod May 12 11:25:35.608: INFO: Waiting for pod pod-projected-configmaps-7ea17011-0db7-4bd8-b118-655aae902e4e to disappear May 12 11:25:35.616: INFO: Pod pod-projected-configmaps-7ea17011-0db7-4bd8-b118-655aae902e4e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:25:35.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6477" for this suite. • [SLOW TEST:6.815 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":150,"skipped":2644,"failed":0} SSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:25:35.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 12 11:25:41.154: INFO: Successfully updated pod "pod-update-3b0a4fb6-471e-4b0a-a00e-b457a159aa26" STEP: verifying the updated pod is in kubernetes May 12 11:25:41.203: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:25:41.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-250" for this suite. 
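The pod-update flow above (submit, mutate, re-read) maps onto plain kubectl; a rough sketch with an illustrative pod name and image, assuming a label mutation stands in for the test's update:

    $ kubectl run pause-demo --image=k8s.gcr.io/pause:3.2 --restart=Never
    $ kubectl patch pod pause-demo -p '{"metadata":{"labels":{"time":"updated"}}}'   # mutate metadata in place
    $ kubectl get pod pause-demo --show-labels                                       # verify the update is visible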
• [SLOW TEST:5.515 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":288,"completed":151,"skipped":2651,"failed":0} [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:25:41.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-7866 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-7866 STEP: creating replication controller externalsvc in namespace services-7866 I0512 11:25:42.050667 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-7866, replica count: 2 I0512 11:25:45.101003 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 11:25:48.101435 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 12 11:25:48.338: INFO: Creating new exec pod May 12 11:25:52.687: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7866 execpod5pdhm -- /bin/sh -x -c nslookup nodeport-service' May 12 11:25:56.114: INFO: stderr: "I0512 11:25:56.013280 2506 log.go:172] (0xc000ce60b0) (0xc000653400) Create stream\nI0512 11:25:56.013320 2506 log.go:172] (0xc000ce60b0) (0xc000653400) Stream added, broadcasting: 1\nI0512 11:25:56.015477 2506 log.go:172] (0xc000ce60b0) Reply frame received for 1\nI0512 11:25:56.015509 2506 log.go:172] (0xc000ce60b0) (0xc000653ae0) Create stream\nI0512 11:25:56.015516 2506 log.go:172] (0xc000ce60b0) (0xc000653ae0) Stream added, broadcasting: 3\nI0512 11:25:56.016459 2506 log.go:172] (0xc000ce60b0) Reply frame received for 3\nI0512 11:25:56.016494 2506 log.go:172] (0xc000ce60b0) (0xc000648be0) Create stream\nI0512 11:25:56.016508 2506 log.go:172] (0xc000ce60b0) (0xc000648be0) Stream added, broadcasting: 5\nI0512 11:25:56.017869 2506 log.go:172] (0xc000ce60b0) Reply frame received for 5\nI0512 11:25:56.099146 2506 log.go:172] (0xc000ce60b0) Data frame received for 5\nI0512 11:25:56.099183 2506 log.go:172] (0xc000648be0) (5) Data frame handling\nI0512 11:25:56.099212 2506 log.go:172] (0xc000648be0) (5) Data frame sent\n+ nslookup 
nodeport-service\nI0512 11:25:56.103856 2506 log.go:172] (0xc000ce60b0) Data frame received for 3\nI0512 11:25:56.103882 2506 log.go:172] (0xc000653ae0) (3) Data frame handling\nI0512 11:25:56.103899 2506 log.go:172] (0xc000653ae0) (3) Data frame sent\nI0512 11:25:56.105044 2506 log.go:172] (0xc000ce60b0) Data frame received for 3\nI0512 11:25:56.105056 2506 log.go:172] (0xc000653ae0) (3) Data frame handling\nI0512 11:25:56.105062 2506 log.go:172] (0xc000653ae0) (3) Data frame sent\nI0512 11:25:56.106208 2506 log.go:172] (0xc000ce60b0) Data frame received for 3\nI0512 11:25:56.106242 2506 log.go:172] (0xc000653ae0) (3) Data frame handling\nI0512 11:25:56.106323 2506 log.go:172] (0xc000ce60b0) Data frame received for 5\nI0512 11:25:56.106340 2506 log.go:172] (0xc000648be0) (5) Data frame handling\nI0512 11:25:56.108390 2506 log.go:172] (0xc000ce60b0) Data frame received for 1\nI0512 11:25:56.108425 2506 log.go:172] (0xc000653400) (1) Data frame handling\nI0512 11:25:56.108464 2506 log.go:172] (0xc000653400) (1) Data frame sent\nI0512 11:25:56.108502 2506 log.go:172] (0xc000ce60b0) (0xc000653400) Stream removed, broadcasting: 1\nI0512 11:25:56.108530 2506 log.go:172] (0xc000ce60b0) Go away received\nI0512 11:25:56.108804 2506 log.go:172] (0xc000ce60b0) (0xc000653400) Stream removed, broadcasting: 1\nI0512 11:25:56.108817 2506 log.go:172] (0xc000ce60b0) (0xc000653ae0) Stream removed, broadcasting: 3\nI0512 11:25:56.108823 2506 log.go:172] (0xc000ce60b0) (0xc000648be0) Stream removed, broadcasting: 5\n" May 12 11:25:56.114: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-7866.svc.cluster.local\tcanonical name = externalsvc.services-7866.svc.cluster.local.\nName:\texternalsvc.services-7866.svc.cluster.local\nAddress: 10.108.206.235\n\n" STEP: deleting ReplicationController externalsvc in namespace services-7866, will wait for the garbage collector to delete the pods May 12 11:25:56.174: INFO: Deleting ReplicationController externalsvc took: 6.640792ms May 12 11:25:56.474: INFO: Terminating ReplicationController externalsvc pods took: 300.243531ms May 12 11:26:05.327: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:26:05.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7866" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:24.155 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":288,"completed":152,"skipped":2651,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:26:05.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1311 STEP: creating the pod May 12 11:26:05.464: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2667' May 12 11:26:05.830: INFO: stderr: "" May 12 11:26:05.830: INFO: stdout: "pod/pause created\n" May 12 11:26:05.830: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 12 11:26:05.830: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-2667" to be "running and ready" May 12 11:26:05.860: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 30.451353ms May 12 11:26:07.863: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032790939s May 12 11:26:09.865: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.035547954s May 12 11:26:09.865: INFO: Pod "pause" satisfied condition "running and ready" May 12 11:26:09.865: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod May 12 11:26:09.865: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-2667' May 12 11:26:09.957: INFO: stderr: "" May 12 11:26:09.957: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 12 11:26:09.957: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2667' May 12 11:26:10.059: INFO: stderr: "" May 12 11:26:10.059: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod May 12 11:26:10.059: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-2667' May 12 11:26:10.267: INFO: stderr: "" May 12 11:26:10.267: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 12 11:26:10.267: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2667' May 12 11:26:10.384: INFO: stderr: "" May 12 11:26:10.384: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1318 STEP: using delete to clean up resources May 12 11:26:10.385: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2667' May 12 11:26:10.588: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 11:26:10.588: INFO: stdout: "pod \"pause\" force deleted\n" May 12 11:26:10.588: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-2667' May 12 11:26:11.085: INFO: stderr: "No resources found in kubectl-2667 namespace.\n" May 12 11:26:11.085: INFO: stdout: "" May 12 11:26:11.085: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-2667 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 12 11:26:11.196: INFO: stderr: "" May 12 11:26:11.196: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:26:11.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2667" for this suite. 
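Distilled from the commands logged above, the label lifecycle the test drives is:

    $ kubectl label pods pause testing-label=testing-label-value   # add the label
    $ kubectl get pod pause -L testing-label                       # -L prints the label value as a column
    $ kubectl label pods pause testing-label-                      # a trailing '-' on the key removes it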
• [SLOW TEST:5.841 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":288,"completed":153,"skipped":2654,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:26:11.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 12 11:26:18.073: INFO: 10 pods remaining May 12 11:26:18.073: INFO: 10 pods has nil DeletionTimestamp May 12 11:26:18.073: INFO: May 12 11:26:20.968: INFO: 8 pods remaining May 12 11:26:20.968: INFO: 0 pods has nil DeletionTimestamp May 12 11:26:20.968: INFO: May 12 11:26:21.496: INFO: 0 pods remaining May 12 11:26:21.496: INFO: 0 pods has nil DeletionTimestamp May 12 11:26:21.496: INFO: STEP: Gathering metrics W0512 11:26:23.064597 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 12 11:26:23.064: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:26:23.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7571" for this suite. 
• [SLOW TEST:13.224 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":288,"completed":154,"skipped":2680,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:26:24.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 12 11:26:26.252: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-409 /api/v1/namespaces/watch-409/configmaps/e2e-watch-test-watch-closed dff9f4a5-98a9-49f6-babc-b80c1523d63e 3793926 0 2020-05-12 11:26:25 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-12 11:26:25 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 12 11:26:26.252: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-409 /api/v1/namespaces/watch-409/configmaps/e2e-watch-test-watch-closed dff9f4a5-98a9-49f6-babc-b80c1523d63e 3793927 0 2020-05-12 11:26:25 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-12 11:26:26 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 12 11:26:26.811: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-409 /api/v1/namespaces/watch-409/configmaps/e2e-watch-test-watch-closed dff9f4a5-98a9-49f6-babc-b80c1523d63e 3793930 0 2020-05-12 11:26:25 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-12 11:26:26 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} 
May 12 11:26:26.811: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-409 /api/v1/namespaces/watch-409/configmaps/e2e-watch-test-watch-closed dff9f4a5-98a9-49f6-babc-b80c1523d63e 3793932 0 2020-05-12 11:26:25 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-12 11:26:26 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:26:26.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-409" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":288,"completed":155,"skipped":2686,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:26:26.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-bc3e938e-9896-4626-b403-4e698dda36b9 STEP: Creating a pod to test consume configMaps May 12 11:26:28.624: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ab85c92f-3f9e-4947-ad03-9476ffbd10f8" in namespace "projected-8369" to be "Succeeded or Failed" May 12 11:26:28.879: INFO: Pod "pod-projected-configmaps-ab85c92f-3f9e-4947-ad03-9476ffbd10f8": Phase="Pending", Reason="", readiness=false. Elapsed: 254.821373ms May 12 11:26:31.066: INFO: Pod "pod-projected-configmaps-ab85c92f-3f9e-4947-ad03-9476ffbd10f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.441558217s May 12 11:26:33.210: INFO: Pod "pod-projected-configmaps-ab85c92f-3f9e-4947-ad03-9476ffbd10f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.585862866s May 12 11:26:35.276: INFO: Pod "pod-projected-configmaps-ab85c92f-3f9e-4947-ad03-9476ffbd10f8": Phase="Running", Reason="", readiness=true. Elapsed: 6.652094138s May 12 11:26:37.311: INFO: Pod "pod-projected-configmaps-ab85c92f-3f9e-4947-ad03-9476ffbd10f8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.687225729s STEP: Saw pod success May 12 11:26:37.311: INFO: Pod "pod-projected-configmaps-ab85c92f-3f9e-4947-ad03-9476ffbd10f8" satisfied condition "Succeeded or Failed" May 12 11:26:37.314: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-ab85c92f-3f9e-4947-ad03-9476ffbd10f8 container projected-configmap-volume-test: STEP: delete the pod May 12 11:26:37.499: INFO: Waiting for pod pod-projected-configmaps-ab85c92f-3f9e-4947-ad03-9476ffbd10f8 to disappear May 12 11:26:37.578: INFO: Pod pod-projected-configmaps-ab85c92f-3f9e-4947-ad03-9476ffbd10f8 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:26:37.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8369" for this suite. • [SLOW TEST:10.765 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":156,"skipped":2710,"failed":0} SSS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:26:37.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:26:37.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-1445" for this suite. •{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":288,"completed":157,"skipped":2713,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:26:37.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:26:49.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1917" for this suite. • [SLOW TEST:11.662 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":288,"completed":158,"skipped":2790,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:26:49.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-0772dfc7-59fb-4f0f-b9db-553b8efca11a STEP: Creating a pod to test consume configMaps May 12 11:26:49.882: INFO: Waiting up to 5m0s for pod "pod-configmaps-d2ba2f5f-0f72-46e1-a7a9-8cd3cc2a7c7d" in namespace "configmap-4608" to be "Succeeded or Failed" May 12 11:26:49.929: INFO: Pod "pod-configmaps-d2ba2f5f-0f72-46e1-a7a9-8cd3cc2a7c7d": Phase="Pending", Reason="", readiness=false. Elapsed: 47.549495ms May 12 11:26:51.951: INFO: Pod "pod-configmaps-d2ba2f5f-0f72-46e1-a7a9-8cd3cc2a7c7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069418606s May 12 11:26:54.030: INFO: Pod "pod-configmaps-d2ba2f5f-0f72-46e1-a7a9-8cd3cc2a7c7d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148172469s May 12 11:26:56.157: INFO: Pod "pod-configmaps-d2ba2f5f-0f72-46e1-a7a9-8cd3cc2a7c7d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.274788452s STEP: Saw pod success May 12 11:26:56.157: INFO: Pod "pod-configmaps-d2ba2f5f-0f72-46e1-a7a9-8cd3cc2a7c7d" satisfied condition "Succeeded or Failed" May 12 11:26:56.160: INFO: Trying to get logs from node latest-worker pod pod-configmaps-d2ba2f5f-0f72-46e1-a7a9-8cd3cc2a7c7d container configmap-volume-test: STEP: delete the pod May 12 11:26:56.681: INFO: Waiting for pod pod-configmaps-d2ba2f5f-0f72-46e1-a7a9-8cd3cc2a7c7d to disappear May 12 11:26:56.898: INFO: Pod pod-configmaps-d2ba2f5f-0f72-46e1-a7a9-8cd3cc2a7c7d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:26:56.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4608" for this suite. • [SLOW TEST:7.568 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":159,"skipped":2808,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:26:57.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info May 12 11:26:57.506: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config cluster-info' May 12 11:26:57.641: INFO: stderr: "" May 12 11:26:57.641: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:26:57.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6935" for this suite. 
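The escaped stdout above is only ANSI colour codes; decoded, the banner the test validates reads:

    $ kubectl cluster-info
    Kubernetes master is running at https://172.30.12.66:32773
    KubeDNS is running at https://172.30.12.66:32773/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.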
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":288,"completed":160,"skipped":2823,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:26:57.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-9de75481-db50-49a7-876f-15764c59b441 STEP: Creating a pod to test consume configMaps May 12 11:26:57.970: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f3994b23-0d1e-4337-88a6-648b0c1b68f0" in namespace "projected-2742" to be "Succeeded or Failed" May 12 11:26:57.977: INFO: Pod "pod-projected-configmaps-f3994b23-0d1e-4337-88a6-648b0c1b68f0": Phase="Pending", Reason="", readiness=false. Elapsed: 7.740362ms May 12 11:26:59.981: INFO: Pod "pod-projected-configmaps-f3994b23-0d1e-4337-88a6-648b0c1b68f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011701191s May 12 11:27:02.061: INFO: Pod "pod-projected-configmaps-f3994b23-0d1e-4337-88a6-648b0c1b68f0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091450209s May 12 11:27:04.066: INFO: Pod "pod-projected-configmaps-f3994b23-0d1e-4337-88a6-648b0c1b68f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.095859387s STEP: Saw pod success May 12 11:27:04.066: INFO: Pod "pod-projected-configmaps-f3994b23-0d1e-4337-88a6-648b0c1b68f0" satisfied condition "Succeeded or Failed" May 12 11:27:04.068: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-f3994b23-0d1e-4337-88a6-648b0c1b68f0 container projected-configmap-volume-test: STEP: delete the pod May 12 11:27:04.100: INFO: Waiting for pod pod-projected-configmaps-f3994b23-0d1e-4337-88a6-648b0c1b68f0 to disappear May 12 11:27:04.112: INFO: Pod pod-projected-configmaps-f3994b23-0d1e-4337-88a6-648b0c1b68f0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:27:04.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2742" for this suite. 
• [SLOW TEST:6.471 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":161,"skipped":2870,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:27:04.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on node default medium May 12 11:27:04.204: INFO: Waiting up to 5m0s for pod "pod-7ce2ce72-a9d9-4e84-a8ec-b305f82f31a9" in namespace "emptydir-6233" to be "Succeeded or Failed" May 12 11:27:04.208: INFO: Pod "pod-7ce2ce72-a9d9-4e84-a8ec-b305f82f31a9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.47651ms May 12 11:27:06.212: INFO: Pod "pod-7ce2ce72-a9d9-4e84-a8ec-b305f82f31a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007218929s May 12 11:27:08.264: INFO: Pod "pod-7ce2ce72-a9d9-4e84-a8ec-b305f82f31a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059492039s STEP: Saw pod success May 12 11:27:08.264: INFO: Pod "pod-7ce2ce72-a9d9-4e84-a8ec-b305f82f31a9" satisfied condition "Succeeded or Failed" May 12 11:27:08.267: INFO: Trying to get logs from node latest-worker pod pod-7ce2ce72-a9d9-4e84-a8ec-b305f82f31a9 container test-container: STEP: delete the pod May 12 11:27:08.572: INFO: Waiting for pod pod-7ce2ce72-a9d9-4e84-a8ec-b305f82f31a9 to disappear May 12 11:27:08.616: INFO: Pod pod-7ce2ce72-a9d9-4e84-a8ec-b305f82f31a9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:27:08.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6233" for this suite. 
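The EmptyDir test above mounts a volume with no medium set and has its container print the mount's permission bits; it is tagged [LinuxOnly] because the assertion is about Unix modes. A minimal sketch of the volume definition with the k8s.io/api types (the helper name is mine; the real test builds the full pod through framework utilities):

package main

import corev1 "k8s.io/api/core/v1"

// emptyDirVolume builds the kind of volume the test mounts. Leaving
// Medium unset (StorageMediumDefault) backs the volume with whatever
// storage the node uses by default; StorageMediumMemory would request
// a tmpfs instead. The test's container then reports the mount's mode
// for the framework to compare against the expected value.
func emptyDirVolume(name string, inMemory bool) corev1.Volume {
	medium := corev1.StorageMediumDefault
	if inMemory {
		medium = corev1.StorageMediumMemory
	}
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: medium},
		},
	}
}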
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":162,"skipped":2907,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:27:09.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium May 12 11:27:09.972: INFO: Waiting up to 5m0s for pod "pod-ddcda01e-b615-4d98-b4f7-062f9287d921" in namespace "emptydir-374" to be "Succeeded or Failed" May 12 11:27:10.023: INFO: Pod "pod-ddcda01e-b615-4d98-b4f7-062f9287d921": Phase="Pending", Reason="", readiness=false. Elapsed: 50.451327ms May 12 11:27:12.042: INFO: Pod "pod-ddcda01e-b615-4d98-b4f7-062f9287d921": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069283371s May 12 11:27:14.063: INFO: Pod "pod-ddcda01e-b615-4d98-b4f7-062f9287d921": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.090067581s STEP: Saw pod success May 12 11:27:14.063: INFO: Pod "pod-ddcda01e-b615-4d98-b4f7-062f9287d921" satisfied condition "Succeeded or Failed" May 12 11:27:14.095: INFO: Trying to get logs from node latest-worker pod pod-ddcda01e-b615-4d98-b4f7-062f9287d921 container test-container: STEP: delete the pod May 12 11:27:14.139: INFO: Waiting for pod pod-ddcda01e-b615-4d98-b4f7-062f9287d921 to disappear May 12 11:27:14.162: INFO: Pod pod-ddcda01e-b615-4d98-b4f7-062f9287d921 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:27:14.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-374" for this suite. 
• [SLOW TEST:5.151 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":163,"skipped":2910,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:27:14.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-lqkx STEP: Creating a pod to test atomic-volume-subpath May 12 11:27:14.260: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-lqkx" in namespace "subpath-9525" to be "Succeeded or Failed" May 12 11:27:14.288: INFO: Pod "pod-subpath-test-secret-lqkx": Phase="Pending", Reason="", readiness=false. Elapsed: 28.144073ms May 12 11:27:16.480: INFO: Pod "pod-subpath-test-secret-lqkx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220299271s May 12 11:27:18.483: INFO: Pod "pod-subpath-test-secret-lqkx": Phase="Running", Reason="", readiness=true. Elapsed: 4.223580906s May 12 11:27:20.488: INFO: Pod "pod-subpath-test-secret-lqkx": Phase="Running", Reason="", readiness=true. Elapsed: 6.227670439s May 12 11:27:22.491: INFO: Pod "pod-subpath-test-secret-lqkx": Phase="Running", Reason="", readiness=true. Elapsed: 8.231361635s May 12 11:27:24.495: INFO: Pod "pod-subpath-test-secret-lqkx": Phase="Running", Reason="", readiness=true. Elapsed: 10.234934086s May 12 11:27:26.499: INFO: Pod "pod-subpath-test-secret-lqkx": Phase="Running", Reason="", readiness=true. Elapsed: 12.238710191s May 12 11:27:28.502: INFO: Pod "pod-subpath-test-secret-lqkx": Phase="Running", Reason="", readiness=true. Elapsed: 14.242142851s May 12 11:27:30.506: INFO: Pod "pod-subpath-test-secret-lqkx": Phase="Running", Reason="", readiness=true. Elapsed: 16.246567511s May 12 11:27:32.521: INFO: Pod "pod-subpath-test-secret-lqkx": Phase="Running", Reason="", readiness=true. Elapsed: 18.261348093s May 12 11:27:34.611: INFO: Pod "pod-subpath-test-secret-lqkx": Phase="Running", Reason="", readiness=true. Elapsed: 20.351573094s May 12 11:27:36.680: INFO: Pod "pod-subpath-test-secret-lqkx": Phase="Running", Reason="", readiness=true. Elapsed: 22.42020651s May 12 11:27:38.700: INFO: Pod "pod-subpath-test-secret-lqkx": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.440408475s May 12 11:27:40.704: INFO: Pod "pod-subpath-test-secret-lqkx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.443839558s STEP: Saw pod success May 12 11:27:40.704: INFO: Pod "pod-subpath-test-secret-lqkx" satisfied condition "Succeeded or Failed" May 12 11:27:40.706: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-secret-lqkx container test-container-subpath-secret-lqkx: STEP: delete the pod May 12 11:27:40.745: INFO: Waiting for pod pod-subpath-test-secret-lqkx to disappear May 12 11:27:40.820: INFO: Pod pod-subpath-test-secret-lqkx no longer exists STEP: Deleting pod pod-subpath-test-secret-lqkx May 12 11:27:40.820: INFO: Deleting pod "pod-subpath-test-secret-lqkx" in namespace "subpath-9525" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:27:40.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9525" for this suite. • [SLOW TEST:26.661 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":288,"completed":164,"skipped":2969,"failed":0} SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:27:40.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-4047 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-4047 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4047 May 12 11:27:41.001: INFO: Found 0 stateful pods, waiting for 1 May 12 11:27:51.005: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 12 11:27:51.009: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 12 
11:27:51.312: INFO: stderr: "I0512 11:27:51.151986 2718 log.go:172] (0xc0000e0dc0) (0xc00014f7c0) Create stream\nI0512 11:27:51.152050 2718 log.go:172] (0xc0000e0dc0) (0xc00014f7c0) Stream added, broadcasting: 1\nI0512 11:27:51.154109 2718 log.go:172] (0xc0000e0dc0) Reply frame received for 1\nI0512 11:27:51.154137 2718 log.go:172] (0xc0000e0dc0) (0xc0000eaf00) Create stream\nI0512 11:27:51.154151 2718 log.go:172] (0xc0000e0dc0) (0xc0000eaf00) Stream added, broadcasting: 3\nI0512 11:27:51.154828 2718 log.go:172] (0xc0000e0dc0) Reply frame received for 3\nI0512 11:27:51.154851 2718 log.go:172] (0xc0000e0dc0) (0xc0004cd540) Create stream\nI0512 11:27:51.154858 2718 log.go:172] (0xc0000e0dc0) (0xc0004cd540) Stream added, broadcasting: 5\nI0512 11:27:51.155404 2718 log.go:172] (0xc0000e0dc0) Reply frame received for 5\nI0512 11:27:51.239218 2718 log.go:172] (0xc0000e0dc0) Data frame received for 5\nI0512 11:27:51.239270 2718 log.go:172] (0xc0004cd540) (5) Data frame handling\nI0512 11:27:51.239318 2718 log.go:172] (0xc0004cd540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0512 11:27:51.306293 2718 log.go:172] (0xc0000e0dc0) Data frame received for 3\nI0512 11:27:51.306340 2718 log.go:172] (0xc0000eaf00) (3) Data frame handling\nI0512 11:27:51.306362 2718 log.go:172] (0xc0000eaf00) (3) Data frame sent\nI0512 11:27:51.306381 2718 log.go:172] (0xc0000e0dc0) Data frame received for 5\nI0512 11:27:51.306405 2718 log.go:172] (0xc0004cd540) (5) Data frame handling\nI0512 11:27:51.306424 2718 log.go:172] (0xc0000e0dc0) Data frame received for 3\nI0512 11:27:51.306457 2718 log.go:172] (0xc0000eaf00) (3) Data frame handling\nI0512 11:27:51.308204 2718 log.go:172] (0xc0000e0dc0) Data frame received for 1\nI0512 11:27:51.308258 2718 log.go:172] (0xc00014f7c0) (1) Data frame handling\nI0512 11:27:51.308282 2718 log.go:172] (0xc00014f7c0) (1) Data frame sent\nI0512 11:27:51.308320 2718 log.go:172] (0xc0000e0dc0) (0xc00014f7c0) Stream removed, broadcasting: 1\nI0512 11:27:51.308828 2718 log.go:172] (0xc0000e0dc0) (0xc00014f7c0) Stream removed, broadcasting: 1\nI0512 11:27:51.308844 2718 log.go:172] (0xc0000e0dc0) (0xc0000eaf00) Stream removed, broadcasting: 3\nI0512 11:27:51.308962 2718 log.go:172] (0xc0000e0dc0) (0xc0004cd540) Stream removed, broadcasting: 5\n" May 12 11:27:51.312: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 12 11:27:51.312: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 12 11:27:51.329: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 12 11:28:01.333: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 12 11:28:01.333: INFO: Waiting for statefulset status.replicas updated to 0 May 12 11:28:01.390: INFO: POD NODE PHASE GRACE CONDITIONS May 12 11:28:01.390: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:27:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:27:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:27:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:27:41 +0000 UTC }] May 12 11:28:01.390: INFO: May 12 11:28:01.390: INFO: StatefulSet ss has not reached scale 3, at 1 May 12 
11:28:02.429: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.952907845s May 12 11:28:03.433: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.913952497s May 12 11:28:04.672: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.909460417s May 12 11:28:05.680: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.670531907s May 12 11:28:06.686: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.662527765s May 12 11:28:07.690: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.657137363s May 12 11:28:08.720: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.652894937s May 12 11:28:09.726: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.622412034s May 12 11:28:10.730: INFO: Verifying statefulset ss doesn't scale past 3 for another 616.947117ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4047 May 12 11:28:11.885: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 11:28:14.454: INFO: stderr: "I0512 11:28:14.184418 2731 log.go:172] (0xc00099d3f0) (0xc000684aa0) Create stream\nI0512 11:28:14.184516 2731 log.go:172] (0xc00099d3f0) (0xc000684aa0) Stream added, broadcasting: 1\nI0512 11:28:14.189874 2731 log.go:172] (0xc00099d3f0) Reply frame received for 1\nI0512 11:28:14.189938 2731 log.go:172] (0xc00099d3f0) (0xc000626280) Create stream\nI0512 11:28:14.189970 2731 log.go:172] (0xc00099d3f0) (0xc000626280) Stream added, broadcasting: 3\nI0512 11:28:14.190947 2731 log.go:172] (0xc00099d3f0) Reply frame received for 3\nI0512 11:28:14.190984 2731 log.go:172] (0xc00099d3f0) (0xc00054e5a0) Create stream\nI0512 11:28:14.190996 2731 log.go:172] (0xc00099d3f0) (0xc00054e5a0) Stream added, broadcasting: 5\nI0512 11:28:14.191970 2731 log.go:172] (0xc00099d3f0) Reply frame received for 5\nI0512 11:28:14.287272 2731 log.go:172] (0xc00099d3f0) Data frame received for 5\nI0512 11:28:14.287298 2731 log.go:172] (0xc00054e5a0) (5) Data frame handling\nI0512 11:28:14.287314 2731 log.go:172] (0xc00054e5a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0512 11:28:14.446228 2731 log.go:172] (0xc00099d3f0) Data frame received for 3\nI0512 11:28:14.446291 2731 log.go:172] (0xc000626280) (3) Data frame handling\nI0512 11:28:14.446326 2731 log.go:172] (0xc000626280) (3) Data frame sent\nI0512 11:28:14.446458 2731 log.go:172] (0xc00099d3f0) Data frame received for 3\nI0512 11:28:14.446479 2731 log.go:172] (0xc000626280) (3) Data frame handling\nI0512 11:28:14.446513 2731 log.go:172] (0xc00099d3f0) Data frame received for 5\nI0512 11:28:14.446528 2731 log.go:172] (0xc00054e5a0) (5) Data frame handling\nI0512 11:28:14.448711 2731 log.go:172] (0xc00099d3f0) Data frame received for 1\nI0512 11:28:14.448740 2731 log.go:172] (0xc000684aa0) (1) Data frame handling\nI0512 11:28:14.448772 2731 log.go:172] (0xc000684aa0) (1) Data frame sent\nI0512 11:28:14.448795 2731 log.go:172] (0xc00099d3f0) (0xc000684aa0) Stream removed, broadcasting: 1\nI0512 11:28:14.448823 2731 log.go:172] (0xc00099d3f0) Go away received\nI0512 11:28:14.449397 2731 log.go:172] (0xc00099d3f0) (0xc000684aa0) Stream removed, broadcasting: 1\nI0512 11:28:14.449429 2731 log.go:172] (0xc00099d3f0) (0xc000626280) Stream removed, broadcasting: 3\nI0512 11:28:14.449446 2731 
log.go:172] (0xc00099d3f0) (0xc00054e5a0) Stream removed, broadcasting: 5\n" May 12 11:28:14.454: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 12 11:28:14.454: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 12 11:28:14.454: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 11:28:15.144: INFO: stderr: "I0512 11:28:15.070136 2751 log.go:172] (0xc000a0a000) (0xc0008b4640) Create stream\nI0512 11:28:15.070180 2751 log.go:172] (0xc000a0a000) (0xc0008b4640) Stream added, broadcasting: 1\nI0512 11:28:15.071307 2751 log.go:172] (0xc000a0a000) Reply frame received for 1\nI0512 11:28:15.071340 2751 log.go:172] (0xc000a0a000) (0xc0008b4b40) Create stream\nI0512 11:28:15.071351 2751 log.go:172] (0xc000a0a000) (0xc0008b4b40) Stream added, broadcasting: 3\nI0512 11:28:15.072011 2751 log.go:172] (0xc000a0a000) Reply frame received for 3\nI0512 11:28:15.072039 2751 log.go:172] (0xc000a0a000) (0xc0008ac8c0) Create stream\nI0512 11:28:15.072047 2751 log.go:172] (0xc000a0a000) (0xc0008ac8c0) Stream added, broadcasting: 5\nI0512 11:28:15.072587 2751 log.go:172] (0xc000a0a000) Reply frame received for 5\nI0512 11:28:15.136814 2751 log.go:172] (0xc000a0a000) Data frame received for 3\nI0512 11:28:15.136847 2751 log.go:172] (0xc0008b4b40) (3) Data frame handling\nI0512 11:28:15.136858 2751 log.go:172] (0xc0008b4b40) (3) Data frame sent\nI0512 11:28:15.136866 2751 log.go:172] (0xc000a0a000) Data frame received for 5\nI0512 11:28:15.136871 2751 log.go:172] (0xc0008ac8c0) (5) Data frame handling\nI0512 11:28:15.136884 2751 log.go:172] (0xc0008ac8c0) (5) Data frame sent\nI0512 11:28:15.136891 2751 log.go:172] (0xc000a0a000) Data frame received for 5\nI0512 11:28:15.136896 2751 log.go:172] (0xc0008ac8c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0512 11:28:15.136909 2751 log.go:172] (0xc0008ac8c0) (5) Data frame sent\nI0512 11:28:15.136944 2751 log.go:172] (0xc000a0a000) Data frame received for 3\nI0512 11:28:15.136979 2751 log.go:172] (0xc0008b4b40) (3) Data frame handling\nI0512 11:28:15.137376 2751 log.go:172] (0xc000a0a000) Data frame received for 5\nI0512 11:28:15.137406 2751 log.go:172] (0xc0008ac8c0) (5) Data frame handling\nI0512 11:28:15.138847 2751 log.go:172] (0xc000a0a000) Data frame received for 1\nI0512 11:28:15.138874 2751 log.go:172] (0xc0008b4640) (1) Data frame handling\nI0512 11:28:15.138901 2751 log.go:172] (0xc0008b4640) (1) Data frame sent\nI0512 11:28:15.138932 2751 log.go:172] (0xc000a0a000) (0xc0008b4640) Stream removed, broadcasting: 1\nI0512 11:28:15.139011 2751 log.go:172] (0xc000a0a000) Go away received\nI0512 11:28:15.139362 2751 log.go:172] (0xc000a0a000) (0xc0008b4640) Stream removed, broadcasting: 1\nI0512 11:28:15.139393 2751 log.go:172] (0xc000a0a000) (0xc0008b4b40) Stream removed, broadcasting: 3\nI0512 11:28:15.139408 2751 log.go:172] (0xc000a0a000) (0xc0008ac8c0) Stream removed, broadcasting: 5\n" May 12 11:28:15.144: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 12 11:28:15.144: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 12 
11:28:15.144: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 11:28:15.651: INFO: stderr: "I0512 11:28:15.589525 2771 log.go:172] (0xc00058adc0) (0xc0009d0460) Create stream\nI0512 11:28:15.589601 2771 log.go:172] (0xc00058adc0) (0xc0009d0460) Stream added, broadcasting: 1\nI0512 11:28:15.592949 2771 log.go:172] (0xc00058adc0) Reply frame received for 1\nI0512 11:28:15.593018 2771 log.go:172] (0xc00058adc0) (0xc0006e0000) Create stream\nI0512 11:28:15.593036 2771 log.go:172] (0xc00058adc0) (0xc0006e0000) Stream added, broadcasting: 3\nI0512 11:28:15.594126 2771 log.go:172] (0xc00058adc0) Reply frame received for 3\nI0512 11:28:15.594151 2771 log.go:172] (0xc00058adc0) (0xc0005e4320) Create stream\nI0512 11:28:15.594164 2771 log.go:172] (0xc00058adc0) (0xc0005e4320) Stream added, broadcasting: 5\nI0512 11:28:15.594794 2771 log.go:172] (0xc00058adc0) Reply frame received for 5\nI0512 11:28:15.644886 2771 log.go:172] (0xc00058adc0) Data frame received for 3\nI0512 11:28:15.644915 2771 log.go:172] (0xc0006e0000) (3) Data frame handling\nI0512 11:28:15.644937 2771 log.go:172] (0xc0006e0000) (3) Data frame sent\nI0512 11:28:15.644950 2771 log.go:172] (0xc00058adc0) Data frame received for 3\nI0512 11:28:15.644959 2771 log.go:172] (0xc0006e0000) (3) Data frame handling\nI0512 11:28:15.644989 2771 log.go:172] (0xc00058adc0) Data frame received for 5\nI0512 11:28:15.644995 2771 log.go:172] (0xc0005e4320) (5) Data frame handling\nI0512 11:28:15.645013 2771 log.go:172] (0xc0005e4320) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0512 11:28:15.645067 2771 log.go:172] (0xc00058adc0) Data frame received for 5\nI0512 11:28:15.645091 2771 log.go:172] (0xc0005e4320) (5) Data frame handling\nI0512 11:28:15.646462 2771 log.go:172] (0xc00058adc0) Data frame received for 1\nI0512 11:28:15.646480 2771 log.go:172] (0xc0009d0460) (1) Data frame handling\nI0512 11:28:15.646490 2771 log.go:172] (0xc0009d0460) (1) Data frame sent\nI0512 11:28:15.646646 2771 log.go:172] (0xc00058adc0) (0xc0009d0460) Stream removed, broadcasting: 1\nI0512 11:28:15.646719 2771 log.go:172] (0xc00058adc0) Go away received\nI0512 11:28:15.646945 2771 log.go:172] (0xc00058adc0) (0xc0009d0460) Stream removed, broadcasting: 1\nI0512 11:28:15.646958 2771 log.go:172] (0xc00058adc0) (0xc0006e0000) Stream removed, broadcasting: 3\nI0512 11:28:15.646966 2771 log.go:172] (0xc00058adc0) (0xc0005e4320) Stream removed, broadcasting: 5\n" May 12 11:28:15.651: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 12 11:28:15.651: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 12 11:28:15.767: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 12 11:28:15.767: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 12 11:28:15.767: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 12 11:28:15.801: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-0 -- /bin/sh -x -c mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true' May 12 11:28:16.610: INFO: stderr: "I0512 11:28:16.085245 2790 log.go:172] (0xc00003ad10) (0xc00039bc20) Create stream\nI0512 11:28:16.085308 2790 log.go:172] (0xc00003ad10) (0xc00039bc20) Stream added, broadcasting: 1\nI0512 11:28:16.087466 2790 log.go:172] (0xc00003ad10) Reply frame received for 1\nI0512 11:28:16.087500 2790 log.go:172] (0xc00003ad10) (0xc00012f900) Create stream\nI0512 11:28:16.087512 2790 log.go:172] (0xc00003ad10) (0xc00012f900) Stream added, broadcasting: 3\nI0512 11:28:16.088426 2790 log.go:172] (0xc00003ad10) Reply frame received for 3\nI0512 11:28:16.088470 2790 log.go:172] (0xc00003ad10) (0xc0006c4e60) Create stream\nI0512 11:28:16.088485 2790 log.go:172] (0xc00003ad10) (0xc0006c4e60) Stream added, broadcasting: 5\nI0512 11:28:16.089501 2790 log.go:172] (0xc00003ad10) Reply frame received for 5\nI0512 11:28:16.149736 2790 log.go:172] (0xc00003ad10) Data frame received for 5\nI0512 11:28:16.149762 2790 log.go:172] (0xc0006c4e60) (5) Data frame handling\nI0512 11:28:16.149777 2790 log.go:172] (0xc0006c4e60) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0512 11:28:16.601751 2790 log.go:172] (0xc00003ad10) Data frame received for 3\nI0512 11:28:16.601787 2790 log.go:172] (0xc00012f900) (3) Data frame handling\nI0512 11:28:16.601808 2790 log.go:172] (0xc00012f900) (3) Data frame sent\nI0512 11:28:16.602404 2790 log.go:172] (0xc00003ad10) Data frame received for 3\nI0512 11:28:16.602546 2790 log.go:172] (0xc00012f900) (3) Data frame handling\nI0512 11:28:16.602656 2790 log.go:172] (0xc00003ad10) Data frame received for 5\nI0512 11:28:16.602737 2790 log.go:172] (0xc0006c4e60) (5) Data frame handling\nI0512 11:28:16.604531 2790 log.go:172] (0xc00003ad10) Data frame received for 1\nI0512 11:28:16.604559 2790 log.go:172] (0xc00039bc20) (1) Data frame handling\nI0512 11:28:16.604580 2790 log.go:172] (0xc00039bc20) (1) Data frame sent\nI0512 11:28:16.604600 2790 log.go:172] (0xc00003ad10) (0xc00039bc20) Stream removed, broadcasting: 1\nI0512 11:28:16.604639 2790 log.go:172] (0xc00003ad10) Go away received\nI0512 11:28:16.605069 2790 log.go:172] (0xc00003ad10) (0xc00039bc20) Stream removed, broadcasting: 1\nI0512 11:28:16.605105 2790 log.go:172] (0xc00003ad10) (0xc00012f900) Stream removed, broadcasting: 3\nI0512 11:28:16.605340 2790 log.go:172] (0xc00003ad10) (0xc0006c4e60) Stream removed, broadcasting: 5\n" May 12 11:28:16.610: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 12 11:28:16.611: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 12 11:28:16.611: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 12 11:28:18.942: INFO: stderr: "I0512 11:28:18.218545 2811 log.go:172] (0xc00003a840) (0xc00054e960) Create stream\nI0512 11:28:18.218607 2811 log.go:172] (0xc00003a840) (0xc00054e960) Stream added, broadcasting: 1\nI0512 11:28:18.220782 2811 log.go:172] (0xc00003a840) Reply frame received for 1\nI0512 11:28:18.220821 2811 log.go:172] (0xc00003a840) (0xc000416280) Create stream\nI0512 11:28:18.220833 2811 log.go:172] (0xc00003a840) (0xc000416280) Stream added, broadcasting: 3\nI0512 11:28:18.221861 2811 log.go:172] (0xc00003a840) Reply frame received for 3\nI0512 11:28:18.221890 2811 log.go:172] 
(0xc00003a840) (0xc00069eb40) Create stream\nI0512 11:28:18.221901 2811 log.go:172] (0xc00003a840) (0xc00069eb40) Stream added, broadcasting: 5\nI0512 11:28:18.222675 2811 log.go:172] (0xc00003a840) Reply frame received for 5\nI0512 11:28:18.287813 2811 log.go:172] (0xc00003a840) Data frame received for 5\nI0512 11:28:18.287845 2811 log.go:172] (0xc00069eb40) (5) Data frame handling\nI0512 11:28:18.287871 2811 log.go:172] (0xc00069eb40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0512 11:28:18.932952 2811 log.go:172] (0xc00003a840) Data frame received for 3\nI0512 11:28:18.932985 2811 log.go:172] (0xc000416280) (3) Data frame handling\nI0512 11:28:18.932999 2811 log.go:172] (0xc000416280) (3) Data frame sent\nI0512 11:28:18.933750 2811 log.go:172] (0xc00003a840) Data frame received for 5\nI0512 11:28:18.933765 2811 log.go:172] (0xc00069eb40) (5) Data frame handling\nI0512 11:28:18.933786 2811 log.go:172] (0xc00003a840) Data frame received for 3\nI0512 11:28:18.933795 2811 log.go:172] (0xc000416280) (3) Data frame handling\nI0512 11:28:18.935550 2811 log.go:172] (0xc00003a840) Data frame received for 1\nI0512 11:28:18.935595 2811 log.go:172] (0xc00054e960) (1) Data frame handling\nI0512 11:28:18.935631 2811 log.go:172] (0xc00054e960) (1) Data frame sent\nI0512 11:28:18.935677 2811 log.go:172] (0xc00003a840) (0xc00054e960) Stream removed, broadcasting: 1\nI0512 11:28:18.935718 2811 log.go:172] (0xc00003a840) Go away received\nI0512 11:28:18.936074 2811 log.go:172] (0xc00003a840) (0xc00054e960) Stream removed, broadcasting: 1\nI0512 11:28:18.936094 2811 log.go:172] (0xc00003a840) (0xc000416280) Stream removed, broadcasting: 3\nI0512 11:28:18.936103 2811 log.go:172] (0xc00003a840) (0xc00069eb40) Stream removed, broadcasting: 5\n" May 12 11:28:18.942: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 12 11:28:18.942: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 12 11:28:18.943: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 12 11:28:20.474: INFO: stderr: "I0512 11:28:20.119635 2831 log.go:172] (0xc0009520b0) (0xc0006c9400) Create stream\nI0512 11:28:20.119698 2831 log.go:172] (0xc0009520b0) (0xc0006c9400) Stream added, broadcasting: 1\nI0512 11:28:20.122003 2831 log.go:172] (0xc0009520b0) Reply frame received for 1\nI0512 11:28:20.122056 2831 log.go:172] (0xc0009520b0) (0xc0006c9ae0) Create stream\nI0512 11:28:20.122074 2831 log.go:172] (0xc0009520b0) (0xc0006c9ae0) Stream added, broadcasting: 3\nI0512 11:28:20.122932 2831 log.go:172] (0xc0009520b0) Reply frame received for 3\nI0512 11:28:20.122978 2831 log.go:172] (0xc0009520b0) (0xc000184140) Create stream\nI0512 11:28:20.122991 2831 log.go:172] (0xc0009520b0) (0xc000184140) Stream added, broadcasting: 5\nI0512 11:28:20.123802 2831 log.go:172] (0xc0009520b0) Reply frame received for 5\nI0512 11:28:20.177783 2831 log.go:172] (0xc0009520b0) Data frame received for 5\nI0512 11:28:20.177820 2831 log.go:172] (0xc000184140) (5) Data frame handling\nI0512 11:28:20.177843 2831 log.go:172] (0xc000184140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0512 11:28:20.466534 2831 log.go:172] (0xc0009520b0) Data frame received for 3\nI0512 11:28:20.466565 2831 log.go:172] (0xc0006c9ae0) (3) Data 
frame handling\nI0512 11:28:20.466577 2831 log.go:172] (0xc0006c9ae0) (3) Data frame sent\nI0512 11:28:20.466798 2831 log.go:172] (0xc0009520b0) Data frame received for 3\nI0512 11:28:20.466820 2831 log.go:172] (0xc0006c9ae0) (3) Data frame handling\nI0512 11:28:20.466917 2831 log.go:172] (0xc0009520b0) Data frame received for 5\nI0512 11:28:20.466940 2831 log.go:172] (0xc000184140) (5) Data frame handling\nI0512 11:28:20.468877 2831 log.go:172] (0xc0009520b0) Data frame received for 1\nI0512 11:28:20.468903 2831 log.go:172] (0xc0006c9400) (1) Data frame handling\nI0512 11:28:20.468939 2831 log.go:172] (0xc0006c9400) (1) Data frame sent\nI0512 11:28:20.468957 2831 log.go:172] (0xc0009520b0) (0xc0006c9400) Stream removed, broadcasting: 1\nI0512 11:28:20.468970 2831 log.go:172] (0xc0009520b0) Go away received\nI0512 11:28:20.469415 2831 log.go:172] (0xc0009520b0) (0xc0006c9400) Stream removed, broadcasting: 1\nI0512 11:28:20.469430 2831 log.go:172] (0xc0009520b0) (0xc0006c9ae0) Stream removed, broadcasting: 3\nI0512 11:28:20.469436 2831 log.go:172] (0xc0009520b0) (0xc000184140) Stream removed, broadcasting: 5\n" May 12 11:28:20.474: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 12 11:28:20.474: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 12 11:28:20.474: INFO: Waiting for statefulset status.replicas updated to 0 May 12 11:28:21.040: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 12 11:28:31.071: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 12 11:28:31.071: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 12 11:28:31.071: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 12 11:28:31.108: INFO: POD NODE PHASE GRACE CONDITIONS May 12 11:28:31.108: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:27:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:28:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:28:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:27:41 +0000 UTC }] May 12 11:28:31.108: INFO: ss-1 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:28:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:28:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:28:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:28:01 +0000 UTC }] May 12 11:28:31.108: INFO: ss-2 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:28:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:28:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:28:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:28:01 +0000 UTC }] May 12 11:28:31.108: INFO: May 12 11:28:31.108: INFO: StatefulSet ss has not reached 
scale 0, at 3
May 12 11:28:32.113 - 11:28:40.312: INFO: POD NODE PHASE GRACE CONDITIONS (the same three-pod table, polled once per second: ss-0 on latest-worker, ss-1 and ss-2 on latest-worker2, each with a 30s deletion grace period, phase Running through 11:28:33 and Pending from 11:28:34 on, and every pod Ready=False / ContainersReady=False with ContainersNotReady, containers with unready status: [webserver]; each poll ended with: StatefulSet ss has not reached scale 0, at 3)
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-4047
May 12 11:28:41.316: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 12 11:28:41.430: INFO: rc: 1
May 12 11:28:41.430: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: error: unable to upgrade connection: container not found ("webserver")
error: exit status 1
May 12 11:28:51.431: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 12 11:28:51.524: INFO: rc: 1
May 12 11:28:51.524: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
(this NotFound attempt repeated unchanged every 10s from 11:29:01 through 11:31:53, each time logging rc: 1 and another "Waiting 10s to retry failed RunHostCmd")
May 12 11:32:04.075: INFO: Running '/usr/local/bin/kubectl
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 11:32:04.724: INFO: rc: 1 May 12 11:32:04.724: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 11:32:14.725: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 11:32:14.830: INFO: rc: 1 May 12 11:32:14.830: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 11:32:24.830: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 11:32:24.933: INFO: rc: 1 May 12 11:32:24.933: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 11:32:34.933: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 11:32:35.023: INFO: rc: 1 May 12 11:32:35.023: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 11:32:45.024: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 11:32:45.126: INFO: rc: 1 May 12 11:32:45.126: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 11:32:55.126: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 11:32:55.220: INFO: rc: 1 May 12 11:32:55.220: INFO: Waiting 10s to retry failed 
RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 11:33:05.221: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 11:33:05.536: INFO: rc: 1 May 12 11:33:05.536: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 11:33:15.536: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 11:33:15.659: INFO: rc: 1 May 12 11:33:15.659: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 11:33:25.659: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 11:33:25.748: INFO: rc: 1 May 12 11:33:25.748: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 11:33:35.748: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 11:33:35.844: INFO: rc: 1 May 12 11:33:35.844: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 11:33:45.844: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 11:33:45.942: INFO: rc: 1 May 12 11:33:45.942: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: May 12 11:33:45.942: INFO: Scaling statefulset ss to 0 May 12 11:33:45.950: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 12 11:33:45.952: INFO: Deleting all statefulset in ns statefulset-4047 May 12 11:33:45.955: INFO: Scaling statefulset ss to 0 May 12 11:33:45.963: INFO: Waiting for statefulset status.replicas updated to 0 May 12 11:33:45.965: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:33:46.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4047" for this suite. • [SLOW TEST:365.562 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":288,"completed":165,"skipped":2972,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:33:46.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 12 11:33:46.454: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cd1c5497-ec29-4fd8-b326-04d67d18085a" in namespace "downward-api-2439" to be "Succeeded or Failed" May 12 11:33:46.457: INFO: Pod "downwardapi-volume-cd1c5497-ec29-4fd8-b326-04d67d18085a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.103104ms May 12 11:33:48.461: INFO: Pod "downwardapi-volume-cd1c5497-ec29-4fd8-b326-04d67d18085a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007376347s May 12 11:33:50.896: INFO: Pod "downwardapi-volume-cd1c5497-ec29-4fd8-b326-04d67d18085a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.441757477s May 12 11:33:53.149: INFO: Pod "downwardapi-volume-cd1c5497-ec29-4fd8-b326-04d67d18085a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.695201073s May 12 11:33:55.851: INFO: Pod "downwardapi-volume-cd1c5497-ec29-4fd8-b326-04d67d18085a": Phase="Running", Reason="", readiness=true. Elapsed: 9.397105736s May 12 11:33:58.124: INFO: Pod "downwardapi-volume-cd1c5497-ec29-4fd8-b326-04d67d18085a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.66952011s STEP: Saw pod success May 12 11:33:58.124: INFO: Pod "downwardapi-volume-cd1c5497-ec29-4fd8-b326-04d67d18085a" satisfied condition "Succeeded or Failed" May 12 11:33:58.170: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-cd1c5497-ec29-4fd8-b326-04d67d18085a container client-container: STEP: delete the pod May 12 11:33:59.545: INFO: Waiting for pod downwardapi-volume-cd1c5497-ec29-4fd8-b326-04d67d18085a to disappear May 12 11:33:59.638: INFO: Pod downwardapi-volume-cd1c5497-ec29-4fd8-b326-04d67d18085a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:33:59.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2439" for this suite. • [SLOW TEST:13.767 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":288,"completed":166,"skipped":2989,"failed":0} [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:34:00.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 12 11:34:16.207: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 11:34:16.513: INFO: Pod pod-with-poststart-exec-hook still exists May 12 11:34:18.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 11:34:18.516: INFO: Pod pod-with-poststart-exec-hook still exists May 12 11:34:20.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 11:34:20.544: INFO: Pod pod-with-poststart-exec-hook still exists May 12 11:34:22.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 11:34:22.517: INFO: Pod pod-with-poststart-exec-hook still exists May 12 11:34:24.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 11:34:24.516: INFO: Pod pod-with-poststart-exec-hook still exists May 12 11:34:26.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 11:34:26.517: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:34:26.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9423" for this suite. • [SLOW TEST:26.367 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":288,"completed":167,"skipped":2989,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:34:26.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-62f97652-fb11-4c80-b040-d9b9f0b9646b STEP: Creating a pod to test consume secrets May 12 11:34:26.694: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-95235a44-9dde-435c-8f46-8b4c721e40c7" in namespace "projected-156" to be "Succeeded or Failed" May 12 11:34:26.818: INFO: Pod 
"pod-projected-secrets-95235a44-9dde-435c-8f46-8b4c721e40c7": Phase="Pending", Reason="", readiness=false. Elapsed: 123.857996ms May 12 11:34:29.023: INFO: Pod "pod-projected-secrets-95235a44-9dde-435c-8f46-8b4c721e40c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.32847519s May 12 11:34:31.057: INFO: Pod "pod-projected-secrets-95235a44-9dde-435c-8f46-8b4c721e40c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.36293512s May 12 11:34:33.405: INFO: Pod "pod-projected-secrets-95235a44-9dde-435c-8f46-8b4c721e40c7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.710140064s May 12 11:34:35.459: INFO: Pod "pod-projected-secrets-95235a44-9dde-435c-8f46-8b4c721e40c7": Phase="Running", Reason="", readiness=true. Elapsed: 8.764996899s May 12 11:34:37.463: INFO: Pod "pod-projected-secrets-95235a44-9dde-435c-8f46-8b4c721e40c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.768360401s STEP: Saw pod success May 12 11:34:37.463: INFO: Pod "pod-projected-secrets-95235a44-9dde-435c-8f46-8b4c721e40c7" satisfied condition "Succeeded or Failed" May 12 11:34:37.466: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-95235a44-9dde-435c-8f46-8b4c721e40c7 container projected-secret-volume-test: STEP: delete the pod May 12 11:34:38.360: INFO: Waiting for pod pod-projected-secrets-95235a44-9dde-435c-8f46-8b4c721e40c7 to disappear May 12 11:34:38.404: INFO: Pod pod-projected-secrets-95235a44-9dde-435c-8f46-8b4c721e40c7 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:34:38.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-156" for this suite. 
• [SLOW TEST:11.884 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":168,"skipped":2995,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:34:38.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 12 11:34:45.149: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:34:45.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2388" for this suite. 
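
The point of the termination-message test above: with terminationMessagePolicy: FallbackToLogsOnError, container logs are copied into the termination message only when the container fails, so a successful exit leaves the message empty. A rough reproduction, with illustrative names (not the test's actual pod spec):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    # the log line would only surface in the message on a non-zero exit
    command: ["sh", "-c", "echo this goes to the log; exit 0"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF

# wait for the pod to finish, then read the termination message
until [ "$(kubectl get pod termination-demo -o jsonpath='{.status.phase}')" = Succeeded ]; do sleep 2; done
kubectl get pod termination-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'

The second command should print nothing; rerunning the same pod with exit 1 should surface the echoed log line instead.
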
• [SLOW TEST:7.450 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":169,"skipped":3006,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:34:45.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:34:53.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1356" for this suite. 
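
Adoption in the test above hinges on label selection: the bare pod already carries the label the controller selects on and has no existing controllerRef, so the ReplicationController takes ownership rather than spawning a second replica. A hedged sketch of the same flow, with illustrative names:

# a bare pod with a label, created before any controller exists
kubectl run pod-adoption --image=nginx --labels=name=pod-adoption

# an RC whose selector matches that label and wants exactly one replica
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: nginx
        image: nginx
EOF

# the orphan now carries an ownerReference pointing at the RC
kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'
# expected: ReplicationController/pod-adoption
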
• [SLOW TEST:8.125 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":288,"completed":170,"skipped":3018,"failed":0} S ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:34:53.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 12 11:34:54.289: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9317 /api/v1/namespaces/watch-9317/configmaps/e2e-watch-test-resource-version 96765f27-3eae-4578-a01e-838c389d6d6b 3796542 0 2020-05-12 11:34:54 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-12 11:34:54 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 12 11:34:54.289: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9317 /api/v1/namespaces/watch-9317/configmaps/e2e-watch-test-resource-version 96765f27-3eae-4578-a01e-838c389d6d6b 3796543 0 2020-05-12 11:34:54 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-12 11:34:54 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:34:54.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9317" for this suite. 
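
Note what the watch test above demonstrates: the watch is opened after the configmap has already been modified twice and deleted, yet passing the resourceVersion returned by the first update makes the API server replay every event after that point, which is why exactly the second MODIFIED and the DELETED notifications arrive. Against the raw API the same experiment looks roughly like this (3796541 stands in for the resourceVersion captured from the first update):

kubectl proxy --port=8001 &

# replay all events on this configmap that happened after the given resourceVersion
curl -N 'http://127.0.0.1:8001/api/v1/namespaces/watch-9317/configmaps?watch=1&fieldSelector=metadata.name%3De2e-watch-test-resource-version&resourceVersion=3796541'
# streams one JSON object per event: {"type":"MODIFIED",...} then {"type":"DELETED",...}

Replay only works while the requested version is still inside the server's watch window; asking for something too old ends the stream with a 410 Gone error instead.
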
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":288,"completed":171,"skipped":3019,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:34:54.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:35:03.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3610" for this suite. • [SLOW TEST:9.050 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:79 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":288,"completed":172,"skipped":3021,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:35:03.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium May 12 11:35:04.371: INFO: Waiting up to 5m0s for pod "pod-76cbf6f6-dfbc-452b-b288-84c25d5baa49" in namespace "emptydir-3153" to be "Succeeded or Failed" May 12 11:35:04.383: INFO: Pod "pod-76cbf6f6-dfbc-452b-b288-84c25d5baa49": Phase="Pending", Reason="", readiness=false. Elapsed: 12.0931ms May 12 11:35:06.785: INFO: Pod "pod-76cbf6f6-dfbc-452b-b288-84c25d5baa49": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.414341115s May 12 11:35:08.963: INFO: Pod "pod-76cbf6f6-dfbc-452b-b288-84c25d5baa49": Phase="Pending", Reason="", readiness=false. Elapsed: 4.592119246s May 12 11:35:10.992: INFO: Pod "pod-76cbf6f6-dfbc-452b-b288-84c25d5baa49": Phase="Pending", Reason="", readiness=false. Elapsed: 6.621096365s May 12 11:35:12.995: INFO: Pod "pod-76cbf6f6-dfbc-452b-b288-84c25d5baa49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.624499659s STEP: Saw pod success May 12 11:35:12.995: INFO: Pod "pod-76cbf6f6-dfbc-452b-b288-84c25d5baa49" satisfied condition "Succeeded or Failed" May 12 11:35:12.998: INFO: Trying to get logs from node latest-worker2 pod pod-76cbf6f6-dfbc-452b-b288-84c25d5baa49 container test-container: STEP: delete the pod May 12 11:35:13.136: INFO: Waiting for pod pod-76cbf6f6-dfbc-452b-b288-84c25d5baa49 to disappear May 12 11:35:13.351: INFO: Pod pod-76cbf6f6-dfbc-452b-b288-84c25d5baa49 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:35:13.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3153" for this suite. • [SLOW TEST:9.984 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":173,"skipped":3022,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:35:13.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should 
get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:36:02.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1692" for this suite. • [SLOW TEST:48.784 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":288,"completed":174,"skipped":3047,"failed":0} SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:36:02.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-tdjn STEP: Creating a pod to test atomic-volume-subpath May 12 11:36:02.216: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-tdjn" in namespace "subpath-2509" to be "Succeeded or Failed" May 12 11:36:02.221: INFO: Pod "pod-subpath-test-configmap-tdjn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.909621ms May 12 11:36:04.225: INFO: Pod "pod-subpath-test-configmap-tdjn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008783873s May 12 11:36:06.371: INFO: Pod "pod-subpath-test-configmap-tdjn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154947973s May 12 11:36:08.552: INFO: Pod "pod-subpath-test-configmap-tdjn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.335707362s May 12 11:36:10.568: INFO: Pod "pod-subpath-test-configmap-tdjn": Phase="Running", Reason="", readiness=true. Elapsed: 8.352028528s May 12 11:36:12.886: INFO: Pod "pod-subpath-test-configmap-tdjn": Phase="Running", Reason="", readiness=true. Elapsed: 10.669615425s May 12 11:36:14.889: INFO: Pod "pod-subpath-test-configmap-tdjn": Phase="Running", Reason="", readiness=true. Elapsed: 12.672931261s May 12 11:36:16.906: INFO: Pod "pod-subpath-test-configmap-tdjn": Phase="Running", Reason="", readiness=true. 
Elapsed: 14.689957505s May 12 11:36:19.071: INFO: Pod "pod-subpath-test-configmap-tdjn": Phase="Running", Reason="", readiness=true. Elapsed: 16.854402972s May 12 11:36:21.075: INFO: Pod "pod-subpath-test-configmap-tdjn": Phase="Running", Reason="", readiness=true. Elapsed: 18.858838597s May 12 11:36:23.078: INFO: Pod "pod-subpath-test-configmap-tdjn": Phase="Running", Reason="", readiness=true. Elapsed: 20.86186751s May 12 11:36:25.081: INFO: Pod "pod-subpath-test-configmap-tdjn": Phase="Running", Reason="", readiness=true. Elapsed: 22.864644241s May 12 11:36:27.115: INFO: Pod "pod-subpath-test-configmap-tdjn": Phase="Running", Reason="", readiness=true. Elapsed: 24.898476878s May 12 11:36:29.118: INFO: Pod "pod-subpath-test-configmap-tdjn": Phase="Running", Reason="", readiness=true. Elapsed: 26.902060873s May 12 11:36:31.122: INFO: Pod "pod-subpath-test-configmap-tdjn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.905551198s STEP: Saw pod success May 12 11:36:31.122: INFO: Pod "pod-subpath-test-configmap-tdjn" satisfied condition "Succeeded or Failed" May 12 11:36:31.124: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-tdjn container test-container-subpath-configmap-tdjn: STEP: delete the pod May 12 11:36:31.268: INFO: Waiting for pod pod-subpath-test-configmap-tdjn to disappear May 12 11:36:31.366: INFO: Pod pod-subpath-test-configmap-tdjn no longer exists STEP: Deleting pod pod-subpath-test-configmap-tdjn May 12 11:36:31.366: INFO: Deleting pod "pod-subpath-test-configmap-tdjn" in namespace "subpath-2509" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:36:31.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2509" for this suite. 
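
The subPath mount exercised above grafts a single file from the configMap volume into an existing directory instead of shadowing the whole mount point. A minimal sketch of that mount shape, with illustrative names (not the test's generated ones):

kubectl create configmap subpath-demo --from-literal=index.html='hello from a subPath'

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /usr/share/demo/index.html"]
    volumeMounts:
    - name: cm
      mountPath: /usr/share/demo/index.html  # only this file is overlaid
      subPath: index.html                    # key within the configMap volume
  volumes:
  - name: cm
    configMap:
      name: subpath-demo
EOF

One caveat worth knowing: unlike a whole-volume configMap mount, a file mounted via subPath is not updated in place when the configMap changes.
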
• [SLOW TEST:29.278 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":288,"completed":175,"skipped":3054,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:36:31.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1393 STEP: creating a pod May 12 11:36:31.674: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 --namespace=kubectl-5527 -- logs-generator --log-lines-total 100 --run-duration 20s' May 12 11:36:38.925: INFO: stderr: "" May 12 11:36:38.925: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. May 12 11:36:38.925: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 12 11:36:38.925: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-5527" to be "running and ready, or succeeded" May 12 11:36:38.932: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.545967ms May 12 11:36:41.232: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.306601184s May 12 11:36:43.250: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.325042789s May 12 11:36:45.323: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 6.397128751s May 12 11:36:45.323: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 12 11:36:45.323: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true.
Pods: [logs-generator] STEP: checking for matching strings May 12 11:36:45.323: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5527' May 12 11:36:45.512: INFO: stderr: "" May 12 11:36:45.512: INFO: stdout: "I0512 11:36:43.664222 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/wnp 268\nI0512 11:36:43.864326 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/ljl 335\nI0512 11:36:44.064390 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/k5b 553\nI0512 11:36:44.264451 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/tv8z 417\nI0512 11:36:44.464338 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/qxm 449\nI0512 11:36:44.664358 1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/pcx 322\nI0512 11:36:44.864343 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/fcwj 329\nI0512 11:36:45.064320 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/cfzp 559\nI0512 11:36:45.264312 1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/jv2 450\nI0512 11:36:45.464340 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/qm7 551\n" STEP: limiting log lines May 12 11:36:45.513: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5527 --tail=1' May 12 11:36:45.618: INFO: stderr: "" May 12 11:36:45.618: INFO: stdout: "I0512 11:36:45.464340 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/qm7 551\n" May 12 11:36:45.618: INFO: got output "I0512 11:36:45.464340 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/qm7 551\n" STEP: limiting log bytes May 12 11:36:45.618: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5527 --limit-bytes=1' May 12 11:36:46.017: INFO: stderr: "" May 12 11:36:46.017: INFO: stdout: "I" May 12 11:36:46.017: INFO: got output "I" STEP: exposing timestamps May 12 11:36:46.018: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5527 --tail=1 --timestamps' May 12 11:36:46.228: INFO: stderr: "" May 12 11:36:46.228: INFO: stdout: "2020-05-12T11:36:46.064472624Z I0512 11:36:46.064332 1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/mfch 428\n" May 12 11:36:46.228: INFO: got output "2020-05-12T11:36:46.064472624Z I0512 11:36:46.064332 1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/mfch 428\n" STEP: restricting to a time range May 12 11:36:48.729: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5527 --since=1s' May 12 11:36:49.033: INFO: stderr: "" May 12 11:36:49.033: INFO: stdout: "I0512 11:36:48.064339 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/kube-system/pods/567n 452\nI0512 11:36:48.264376 1 logs_generator.go:76] 23 POST /api/v1/namespaces/kube-system/pods/kzk 582\nI0512 11:36:48.464402 1 logs_generator.go:76] 24 POST /api/v1/namespaces/ns/pods/9v7g 240\nI0512 11:36:48.664352 1 logs_generator.go:76] 25 POST /api/v1/namespaces/ns/pods/gv52 380\nI0512 11:36:48.864348 1 logs_generator.go:76] 26 GET /api/v1/namespaces/ns/pods/ncv 314\n" May 12 11:36:49.033: INFO:
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5527 --since=24h' May 12 11:36:49.482: INFO: stderr: "" May 12 11:36:49.482: INFO: stdout: "I0512 11:36:43.664222 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/wnp 268\nI0512 11:36:43.864326 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/ljl 335\nI0512 11:36:44.064390 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/k5b 553\nI0512 11:36:44.264451 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/tv8z 417\nI0512 11:36:44.464338 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/qxm 449\nI0512 11:36:44.664358 1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/pcx 322\nI0512 11:36:44.864343 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/fcwj 329\nI0512 11:36:45.064320 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/cfzp 559\nI0512 11:36:45.264312 1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/jv2 450\nI0512 11:36:45.464340 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/qm7 551\nI0512 11:36:45.664334 1 logs_generator.go:76] 10 POST /api/v1/namespaces/default/pods/l6s 457\nI0512 11:36:45.864323 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/kube-system/pods/kffr 564\nI0512 11:36:46.064332 1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/mfch 428\nI0512 11:36:46.264323 1 logs_generator.go:76] 13 POST /api/v1/namespaces/kube-system/pods/hw4j 568\nI0512 11:36:46.464365 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/ns/pods/5tn 380\nI0512 11:36:46.664326 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/xrn 329\nI0512 11:36:46.864417 1 logs_generator.go:76] 16 GET /api/v1/namespaces/default/pods/q9c 280\nI0512 11:36:47.064350 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/gfb 259\nI0512 11:36:47.264331 1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/z8j 459\nI0512 11:36:47.464321 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/hfr9 414\nI0512 11:36:47.664343 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/zlm7 307\nI0512 11:36:47.866764 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/kl4d 402\nI0512 11:36:48.064339 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/kube-system/pods/567n 452\nI0512 11:36:48.264376 1 logs_generator.go:76] 23 POST /api/v1/namespaces/kube-system/pods/kzk 582\nI0512 11:36:48.464402 1 logs_generator.go:76] 24 POST /api/v1/namespaces/ns/pods/9v7g 240\nI0512 11:36:48.664352 1 logs_generator.go:76] 25 POST /api/v1/namespaces/ns/pods/gv52 380\nI0512 11:36:48.864348 1 logs_generator.go:76] 26 GET /api/v1/namespaces/ns/pods/ncv 314\nI0512 11:36:49.064357 1 logs_generator.go:76] 27 PUT /api/v1/namespaces/default/pods/597 330\nI0512 11:36:49.264294 1 logs_generator.go:76] 28 POST /api/v1/namespaces/default/pods/thnw 583\nI0512 11:36:49.464326 1 logs_generator.go:76] 29 GET /api/v1/namespaces/default/pods/kxx 503\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 May 12 11:36:49.483: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-5527' May 12 11:36:55.252: INFO: stderr: "" May 12 11:36:55.252: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:36:55.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5527" for this suite. • [SLOW TEST:23.838 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":288,"completed":176,"skipped":3071,"failed":0} SS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:36:55.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7987 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-7987 I0512 11:36:55.559047 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7987, replica count: 2 I0512 11:36:58.609520 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 11:37:01.609719 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 11:37:04.610008 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 12 11:37:04.610: INFO: Creating new exec pod May 12 11:37:11.711: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7987 execpod2wbsl -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 12 11:37:11.937: INFO: stderr: "I0512 11:37:11.836404 3615 log.go:172] (0xc000b8d3f0) (0xc0006d4e60) Create stream\nI0512 11:37:11.836451 3615 log.go:172] (0xc000b8d3f0) (0xc0006d4e60) Stream added, broadcasting: 1\nI0512 11:37:11.840620 3615 log.go:172] (0xc000b8d3f0) Reply frame received for 1\nI0512 11:37:11.840661 3615 log.go:172] (0xc000b8d3f0) (0xc0006ab4a0) Create stream\nI0512 11:37:11.840671 3615 log.go:172] (0xc000b8d3f0) (0xc0006ab4a0) Stream added, broadcasting: 3\nI0512 11:37:11.841775 3615 log.go:172] (0xc000b8d3f0) Reply frame received for 3\nI0512 11:37:11.841813 3615 
log.go:172] (0xc000b8d3f0) (0xc000650c80) Create stream\nI0512 11:37:11.841828 3615 log.go:172] (0xc000b8d3f0) (0xc000650c80) Stream added, broadcasting: 5\nI0512 11:37:11.842913 3615 log.go:172] (0xc000b8d3f0) Reply frame received for 5\nI0512 11:37:11.931178 3615 log.go:172] (0xc000b8d3f0) Data frame received for 5\nI0512 11:37:11.931206 3615 log.go:172] (0xc000650c80) (5) Data frame handling\nI0512 11:37:11.931218 3615 log.go:172] (0xc000650c80) (5) Data frame sent\nI0512 11:37:11.931226 3615 log.go:172] (0xc000b8d3f0) Data frame received for 5\nI0512 11:37:11.931232 3615 log.go:172] (0xc000650c80) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0512 11:37:11.931256 3615 log.go:172] (0xc000b8d3f0) Data frame received for 3\nI0512 11:37:11.931286 3615 log.go:172] (0xc0006ab4a0) (3) Data frame handling\nI0512 11:37:11.932301 3615 log.go:172] (0xc000b8d3f0) Data frame received for 1\nI0512 11:37:11.932317 3615 log.go:172] (0xc0006d4e60) (1) Data frame handling\nI0512 11:37:11.932330 3615 log.go:172] (0xc0006d4e60) (1) Data frame sent\nI0512 11:37:11.932342 3615 log.go:172] (0xc000b8d3f0) (0xc0006d4e60) Stream removed, broadcasting: 1\nI0512 11:37:11.932353 3615 log.go:172] (0xc000b8d3f0) Go away received\nI0512 11:37:11.932752 3615 log.go:172] (0xc000b8d3f0) (0xc0006d4e60) Stream removed, broadcasting: 1\nI0512 11:37:11.932787 3615 log.go:172] (0xc000b8d3f0) (0xc0006ab4a0) Stream removed, broadcasting: 3\nI0512 11:37:11.932808 3615 log.go:172] (0xc000b8d3f0) (0xc000650c80) Stream removed, broadcasting: 5\n" May 12 11:37:11.937: INFO: stdout: "" May 12 11:37:11.938: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7987 execpod2wbsl -- /bin/sh -x -c nc -zv -t -w 2 10.108.224.244 80' May 12 11:37:12.142: INFO: stderr: "I0512 11:37:12.073771 3636 log.go:172] (0xc0009a7600) (0xc000aec6e0) Create stream\nI0512 11:37:12.073816 3636 log.go:172] (0xc0009a7600) (0xc000aec6e0) Stream added, broadcasting: 1\nI0512 11:37:12.077885 3636 log.go:172] (0xc0009a7600) Reply frame received for 1\nI0512 11:37:12.077947 3636 log.go:172] (0xc0009a7600) (0xc0005021e0) Create stream\nI0512 11:37:12.077969 3636 log.go:172] (0xc0009a7600) (0xc0005021e0) Stream added, broadcasting: 3\nI0512 11:37:12.078749 3636 log.go:172] (0xc0009a7600) Reply frame received for 3\nI0512 11:37:12.078780 3636 log.go:172] (0xc0009a7600) (0xc00046e460) Create stream\nI0512 11:37:12.078792 3636 log.go:172] (0xc0009a7600) (0xc00046e460) Stream added, broadcasting: 5\nI0512 11:37:12.079604 3636 log.go:172] (0xc0009a7600) Reply frame received for 5\nI0512 11:37:12.135865 3636 log.go:172] (0xc0009a7600) Data frame received for 5\nI0512 11:37:12.135896 3636 log.go:172] (0xc00046e460) (5) Data frame handling\nI0512 11:37:12.135911 3636 log.go:172] (0xc00046e460) (5) Data frame sent\nI0512 11:37:12.135919 3636 log.go:172] (0xc0009a7600) Data frame received for 5\nI0512 11:37:12.135945 3636 log.go:172] (0xc00046e460) (5) Data frame handling\n+ nc -zv -t -w 2 10.108.224.244 80\nConnection to 10.108.224.244 80 port [tcp/http] succeeded!\nI0512 11:37:12.135978 3636 log.go:172] (0xc0009a7600) Data frame received for 3\nI0512 11:37:12.135995 3636 log.go:172] (0xc0005021e0) (3) Data frame handling\nI0512 11:37:12.137826 3636 log.go:172] (0xc0009a7600) Data frame received for 1\nI0512 11:37:12.137840 3636 log.go:172] (0xc000aec6e0) (1) Data frame handling\nI0512 11:37:12.137854 3636 
log.go:172] (0xc000aec6e0) (1) Data frame sent\nI0512 11:37:12.137863 3636 log.go:172] (0xc0009a7600) (0xc000aec6e0) Stream removed, broadcasting: 1\nI0512 11:37:12.137876 3636 log.go:172] (0xc0009a7600) Go away received\nI0512 11:37:12.138269 3636 log.go:172] (0xc0009a7600) (0xc000aec6e0) Stream removed, broadcasting: 1\nI0512 11:37:12.138314 3636 log.go:172] (0xc0009a7600) (0xc0005021e0) Stream removed, broadcasting: 3\nI0512 11:37:12.138330 3636 log.go:172] (0xc0009a7600) (0xc00046e460) Stream removed, broadcasting: 5\n" May 12 11:37:12.142: INFO: stdout: "" May 12 11:37:12.142: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:37:12.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7987" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:17.132 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":288,"completed":177,"skipped":3073,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:37:12.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-7b2894f7-8c9d-4cee-9b07-9511d7f88768 STEP: Creating a pod to test consume configMaps May 12 11:37:12.504: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b56a5f74-f5a3-4860-a991-6ed5bc22afc8" in namespace "projected-2385" to be "Succeeded or Failed" May 12 11:37:12.628: INFO: Pod "pod-projected-configmaps-b56a5f74-f5a3-4860-a991-6ed5bc22afc8": Phase="Pending", Reason="", readiness=false. Elapsed: 123.334204ms May 12 11:37:14.771: INFO: Pod "pod-projected-configmaps-b56a5f74-f5a3-4860-a991-6ed5bc22afc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.267072644s May 12 11:37:16.891: INFO: Pod "pod-projected-configmaps-b56a5f74-f5a3-4860-a991-6ed5bc22afc8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.386924415s May 12 11:37:19.091: INFO: Pod "pod-projected-configmaps-b56a5f74-f5a3-4860-a991-6ed5bc22afc8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.586444762s STEP: Saw pod success May 12 11:37:19.091: INFO: Pod "pod-projected-configmaps-b56a5f74-f5a3-4860-a991-6ed5bc22afc8" satisfied condition "Succeeded or Failed" May 12 11:37:19.165: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-b56a5f74-f5a3-4860-a991-6ed5bc22afc8 container projected-configmap-volume-test: STEP: delete the pod May 12 11:37:19.752: INFO: Waiting for pod pod-projected-configmaps-b56a5f74-f5a3-4860-a991-6ed5bc22afc8 to disappear May 12 11:37:19.849: INFO: Pod pod-projected-configmaps-b56a5f74-f5a3-4860-a991-6ed5bc22afc8 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:37:19.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2385" for this suite. • [SLOW TEST:7.494 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":178,"skipped":3079,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:37:19.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-59befe5d-1bb6-478b-a69c-3edd457cbd60 STEP: Creating secret with name secret-projected-all-test-volume-94cbcf72-f666-4bdc-9b59-d0cc48223201 STEP: Creating a pod to test Check all projections for projected volume plugin May 12 11:37:20.134: INFO: Waiting up to 5m0s for pod "projected-volume-1e6c4e96-b5ba-403d-83cb-54b47b662e44" in namespace "projected-7941" to be "Succeeded or Failed" May 12 11:37:20.239: INFO: Pod "projected-volume-1e6c4e96-b5ba-403d-83cb-54b47b662e44": Phase="Pending", Reason="", readiness=false. Elapsed: 105.178233ms May 12 11:37:22.516: INFO: Pod "projected-volume-1e6c4e96-b5ba-403d-83cb-54b47b662e44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.381291909s May 12 11:37:24.824: INFO: Pod "projected-volume-1e6c4e96-b5ba-403d-83cb-54b47b662e44": Phase="Pending", Reason="", readiness=false. Elapsed: 4.689719927s May 12 11:37:27.186: INFO: Pod "projected-volume-1e6c4e96-b5ba-403d-83cb-54b47b662e44": Phase="Pending", Reason="", readiness=false. Elapsed: 7.052100894s May 12 11:37:29.191: INFO: Pod "projected-volume-1e6c4e96-b5ba-403d-83cb-54b47b662e44": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.056666591s STEP: Saw pod success May 12 11:37:29.191: INFO: Pod "projected-volume-1e6c4e96-b5ba-403d-83cb-54b47b662e44" satisfied condition "Succeeded or Failed" May 12 11:37:29.194: INFO: Trying to get logs from node latest-worker2 pod projected-volume-1e6c4e96-b5ba-403d-83cb-54b47b662e44 container projected-all-volume-test: STEP: delete the pod May 12 11:37:29.500: INFO: Waiting for pod projected-volume-1e6c4e96-b5ba-403d-83cb-54b47b662e44 to disappear May 12 11:37:29.515: INFO: Pod projected-volume-1e6c4e96-b5ba-403d-83cb-54b47b662e44 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:37:29.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7941" for this suite. • [SLOW TEST:9.686 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:32 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":288,"completed":179,"skipped":3093,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:37:29.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:37:30.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-282" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":288,"completed":180,"skipped":3122,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:37:30.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:37:45.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9073" for this suite. • [SLOW TEST:15.391 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":288,"completed":181,"skipped":3137,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:37:45.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 12 11:37:53.004: INFO: Successfully updated pod "annotationupdatef1bdaffd-bcf2-4195-a4b5-be74bbf775f3" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:37:55.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1745" for this suite. 
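The projected downward API behaviour exercised by this spec can be reproduced by hand: mount metadata.annotations through a projected volume, change an annotation, and watch the mounted file refresh. A minimal sketch, assuming a hypothetical pod name and annotation key (not the suite's actual fixture):

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotation-demo        # hypothetical name
  annotations:
    builder: alice             # hypothetical annotation
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF
$ kubectl annotate pod annotation-demo builder=bob --overwrite
$ kubectl logs annotation-demo -f   # the mounted file should pick up builder=bob within the kubelet sync period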
• [SLOW TEST:10.747 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":182,"skipped":3143,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:37:56.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 12 11:38:05.353: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:38:06.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4009" for this suite. • [SLOW TEST:10.801 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":183,"skipped":3166,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:38:06.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:38:25.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3978" for this suite. • [SLOW TEST:19.021 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":288,"completed":184,"skipped":3186,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:38:25.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test hostPath mode May 12 11:38:26.775: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7229" to be "Succeeded or Failed" May 12 11:38:26.799: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 23.952362ms May 12 11:38:29.114: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.338688909s May 12 11:38:31.257: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.482437438s May 12 11:38:33.311: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.535910555s May 12 11:38:35.544: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.769365426s May 12 11:38:37.611: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.835551445s STEP: Saw pod success May 12 11:38:37.611: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" May 12 11:38:37.614: INFO: Trying to get logs from node latest-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod May 12 11:38:37.652: INFO: Waiting for pod pod-host-path-test to disappear May 12 11:38:37.667: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:38:37.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-7229" for this suite. • [SLOW TEST:11.702 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":185,"skipped":3230,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:38:37.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 12 11:38:37.881: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 12 11:38:37.902: INFO: Waiting for terminating namespaces to be deleted... 
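The per-node inventory that follows (pods the apiserver places on latest-worker and latest-worker2) can be pulled ad hoc with a pod field selector; a sketch using the node names from this log:

$ kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=latest-worker
$ kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=latest-worker2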
May 12 11:38:37.905: INFO: Logging pods the apiserver thinks are on node latest-worker before test
May 12 11:38:37.908: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container status recorded)
May 12 11:38:37.908: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0
May 12 11:38:37.908: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container status recorded)
May 12 11:38:37.908: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0
May 12 11:38:37.908: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded)
May 12 11:38:37.908: INFO: Container kindnet-cni ready: true, restart count 0
May 12 11:38:37.908: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded)
May 12 11:38:37.908: INFO: Container kube-proxy ready: true, restart count 0
May 12 11:38:37.908: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test
May 12 11:38:37.912: INFO: rally-62affaf5-lga7j3a6-5fffh from c-rally-62affaf5-5h9kr014 started at 2020-05-12 11:37:33 +0000 UTC (1 container status recorded)
May 12 11:38:37.912: INFO: Container rally-62affaf5-lga7j3a6 ready: false, restart count 0
May 12 11:38:37.912: INFO: rally-62affaf5-bwhxjs8u-pdmx7 from c-rally-62affaf5-n989lgey started at 2020-05-12 11:38:08 +0000 UTC (1 container status recorded)
May 12 11:38:37.912: INFO: Container rally-62affaf5-bwhxjs8u ready: false, restart count 0
May 12 11:38:37.912: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container status recorded)
May 12 11:38:37.912: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0
May 12 11:38:37.912: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container status recorded)
May 12 11:38:37.912: INFO: Container terminate-cmd-rpa ready: true, restart count 2
May 12 11:38:37.912: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded)
May 12 11:38:37.912: INFO: Container kindnet-cni ready: true, restart count 0
May 12 11:38:37.912: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded)
May 12 11:38:37.912: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.160e44f0701f1e4a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
STEP: Considering event: Type = [Warning], Name = [restricted-pod.160e44f071ae08e2], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 12 11:38:38.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8968" for this suite.
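The two FailedScheduling events above come from a pod whose nodeSelector matches no node label; any non-matching selector reproduces them. A sketch, with an illustrative pod name, image, and selector key:

$ kubectl run restricted-pod --image=busybox --restart=Never \
    --overrides='{"apiVersion":"v1","spec":{"nodeSelector":{"no-such-label":"true"}}}'
$ kubectl describe pod restricted-pod   # Events should show: 0/3 nodes are available: 3 node(s) didn't match node selector.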
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":288,"completed":186,"skipped":3245,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:38:38.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:38:45.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5200" for this suite. • [SLOW TEST:6.669 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:41 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":288,"completed":187,"skipped":3257,"failed":0} SS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:38:45.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:38:46.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9170" for this suite. 
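The lookup this spec performs amounts to listing Services across every namespace and matching one by name; the CLI equivalent:

$ kubectl get services --all-namespaces
$ kubectl get services --all-namespaces --field-selector metadata.name=kubernetes   # narrow the list to a single name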
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":288,"completed":188,"skipped":3259,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:38:46.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 12 11:38:46.849: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. May 12 11:38:46.962: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:38:47.000: INFO: Number of nodes with available pods: 0 May 12 11:38:47.000: INFO: Node latest-worker is running more than one daemon pod May 12 11:38:48.004: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:38:48.007: INFO: Number of nodes with available pods: 0 May 12 11:38:48.007: INFO: Node latest-worker is running more than one daemon pod May 12 11:38:49.128: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:38:49.916: INFO: Number of nodes with available pods: 0 May 12 11:38:49.916: INFO: Node latest-worker is running more than one daemon pod May 12 11:38:50.288: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:38:50.292: INFO: Number of nodes with available pods: 0 May 12 11:38:50.292: INFO: Node latest-worker is running more than one daemon pod May 12 11:38:51.447: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:38:51.690: INFO: Number of nodes with available pods: 0 May 12 11:38:51.690: INFO: Node latest-worker is running more than one daemon pod May 12 11:38:52.151: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:38:52.221: INFO: Number of nodes with available pods: 0 May 12 11:38:52.221: INFO: Node latest-worker is running more than one daemon pod May 12 11:38:53.060: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:38:53.063: INFO: Number of nodes with available pods: 2 May 12 11:38:53.063: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 12 11:38:53.298: INFO: Wrong image for pod: daemon-set-2cqg7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:38:53.298: INFO: Wrong image for pod: daemon-set-lk9bm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:38:53.302: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:38:54.323: INFO: Wrong image for pod: daemon-set-2cqg7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:38:54.323: INFO: Wrong image for pod: daemon-set-lk9bm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:38:54.327: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:38:55.353: INFO: Wrong image for pod: daemon-set-2cqg7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:38:55.353: INFO: Wrong image for pod: daemon-set-lk9bm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:38:55.357: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:38:56.306: INFO: Wrong image for pod: daemon-set-2cqg7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:38:56.306: INFO: Wrong image for pod: daemon-set-lk9bm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:38:56.456: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:38:57.341: INFO: Wrong image for pod: daemon-set-2cqg7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:38:57.341: INFO: Pod daemon-set-2cqg7 is not available May 12 11:38:57.341: INFO: Wrong image for pod: daemon-set-lk9bm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:38:57.344: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:38:58.306: INFO: Wrong image for pod: daemon-set-2cqg7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:38:58.306: INFO: Pod daemon-set-2cqg7 is not available May 12 11:38:58.306: INFO: Wrong image for pod: daemon-set-lk9bm. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:38:58.310: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:38:59.307: INFO: Wrong image for pod: daemon-set-2cqg7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:38:59.307: INFO: Pod daemon-set-2cqg7 is not available May 12 11:38:59.307: INFO: Wrong image for pod: daemon-set-lk9bm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:38:59.310: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:39:00.365: INFO: Wrong image for pod: daemon-set-2cqg7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:39:00.365: INFO: Pod daemon-set-2cqg7 is not available May 12 11:39:00.365: INFO: Wrong image for pod: daemon-set-lk9bm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:39:00.442: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:39:01.419: INFO: Wrong image for pod: daemon-set-2cqg7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:39:01.419: INFO: Pod daemon-set-2cqg7 is not available May 12 11:39:01.419: INFO: Wrong image for pod: daemon-set-lk9bm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:39:01.422: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:39:02.485: INFO: Wrong image for pod: daemon-set-2cqg7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:39:02.485: INFO: Pod daemon-set-2cqg7 is not available May 12 11:39:02.485: INFO: Wrong image for pod: daemon-set-lk9bm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:39:02.488: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:39:03.816: INFO: Wrong image for pod: daemon-set-2cqg7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:39:03.816: INFO: Pod daemon-set-2cqg7 is not available May 12 11:39:03.816: INFO: Wrong image for pod: daemon-set-lk9bm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:39:03.819: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:39:04.618: INFO: Wrong image for pod: daemon-set-2cqg7. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:39:04.618: INFO: Pod daemon-set-2cqg7 is not available May 12 11:39:04.618: INFO: Wrong image for pod: daemon-set-lk9bm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:39:04.622: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:39:05.383: INFO: Wrong image for pod: daemon-set-2cqg7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:39:05.383: INFO: Pod daemon-set-2cqg7 is not available May 12 11:39:05.383: INFO: Wrong image for pod: daemon-set-lk9bm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:39:05.478: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:39:06.484: INFO: Pod daemon-set-bf6vm is not available May 12 11:39:06.484: INFO: Wrong image for pod: daemon-set-lk9bm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:39:06.677: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:39:08.477: INFO: Pod daemon-set-bf6vm is not available May 12 11:39:08.477: INFO: Wrong image for pod: daemon-set-lk9bm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:39:08.785: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:39:09.306: INFO: Pod daemon-set-bf6vm is not available May 12 11:39:09.306: INFO: Wrong image for pod: daemon-set-lk9bm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:39:09.309: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:39:10.515: INFO: Pod daemon-set-bf6vm is not available May 12 11:39:10.515: INFO: Wrong image for pod: daemon-set-lk9bm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:39:10.894: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:39:11.699: INFO: Pod daemon-set-bf6vm is not available May 12 11:39:11.699: INFO: Wrong image for pod: daemon-set-lk9bm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:39:11.964: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:39:12.533: INFO: Wrong image for pod: daemon-set-lk9bm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
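The "Wrong image" polling above compares each pod's container image against the updated template. The same check can be run by hand with jsonpath; the namespace is taken from this test's teardown below, while the label selector is an assumption about the fixture's pod labels:

$ kubectl -n daemonsets-8923 get pods -l name=daemon-set \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'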
May 12 11:39:12.702: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:39:13.594: INFO: Wrong image for pod: daemon-set-lk9bm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:39:13.603: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:39:14.564: INFO: Wrong image for pod: daemon-set-lk9bm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:39:14.720: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:39:15.450: INFO: Wrong image for pod: daemon-set-lk9bm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:39:15.450: INFO: Pod daemon-set-lk9bm is not available May 12 11:39:15.455: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:39:16.306: INFO: Wrong image for pod: daemon-set-lk9bm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:39:16.306: INFO: Pod daemon-set-lk9bm is not available May 12 11:39:16.309: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:39:17.307: INFO: Wrong image for pod: daemon-set-lk9bm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:39:17.307: INFO: Pod daemon-set-lk9bm is not available May 12 11:39:17.311: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:39:18.305: INFO: Wrong image for pod: daemon-set-lk9bm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:39:18.305: INFO: Pod daemon-set-lk9bm is not available May 12 11:39:18.308: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:39:19.443: INFO: Wrong image for pod: daemon-set-lk9bm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:39:19.443: INFO: Pod daemon-set-lk9bm is not available May 12 11:39:19.446: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:39:21.291: INFO: Wrong image for pod: daemon-set-lk9bm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
May 12 11:39:21.291: INFO: Pod daemon-set-lk9bm is not available May 12 11:39:21.319: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:39:22.306: INFO: Wrong image for pod: daemon-set-lk9bm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:39:22.306: INFO: Pod daemon-set-lk9bm is not available May 12 11:39:22.310: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:39:23.749: INFO: Wrong image for pod: daemon-set-lk9bm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:39:23.749: INFO: Pod daemon-set-lk9bm is not available May 12 11:39:23.935: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:39:24.535: INFO: Wrong image for pod: daemon-set-lk9bm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:39:24.535: INFO: Pod daemon-set-lk9bm is not available May 12 11:39:24.538: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:39:25.711: INFO: Wrong image for pod: daemon-set-lk9bm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 12 11:39:25.711: INFO: Pod daemon-set-lk9bm is not available May 12 11:39:25.767: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:39:26.366: INFO: Pod daemon-set-9sb9x is not available May 12 11:39:26.372: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
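For reference before the availability check below: the update that drove this rollout swapped the template image to agnhost:2.13. From the CLI that would look roughly like the following; the image and namespace are taken from the log, but the container name app is an assumption:

$ kubectl -n daemonsets-8923 set image daemonset/daemon-set \
    app=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13   # container name is an assumption
$ kubectl -n daemonsets-8923 rollout status daemonset/daemon-set    # blocks until every node runs the new image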
May 12 11:39:26.738: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:39:26.995: INFO: Number of nodes with available pods: 1 May 12 11:39:26.995: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:39:28.168: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:39:28.171: INFO: Number of nodes with available pods: 1 May 12 11:39:28.171: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:39:29.025: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:39:29.028: INFO: Number of nodes with available pods: 1 May 12 11:39:29.028: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:39:29.999: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:39:30.002: INFO: Number of nodes with available pods: 1 May 12 11:39:30.002: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:39:31.286: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:39:31.290: INFO: Number of nodes with available pods: 1 May 12 11:39:31.290: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:39:32.031: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:39:32.042: INFO: Number of nodes with available pods: 1 May 12 11:39:32.042: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:39:33.007: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:39:33.009: INFO: Number of nodes with available pods: 2 May 12 11:39:33.009: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8923, will wait for the garbage collector to delete the pods May 12 11:39:33.079: INFO: Deleting DaemonSet.extensions daemon-set took: 5.548752ms May 12 11:39:33.179: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.180108ms May 12 11:39:45.826: INFO: Number of nodes with available pods: 0 May 12 11:39:45.826: INFO: Number of running nodes: 0, number of available pods: 0 May 12 11:39:45.828: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8923/daemonsets","resourceVersion":"3798373"},"items":null} May 12 11:39:45.830: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8923/pods","resourceVersion":"3798373"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:39:45.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8923" for this suite. • [SLOW TEST:59.598 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":288,"completed":189,"skipped":3270,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:39:45.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 12 11:39:46.238: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 12 11:39:46.244: INFO: Number of nodes with available pods: 0 May 12 11:39:46.244: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
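From the CLI, the label flip this step performs looks like the following. The suite generates a unique label key at runtime, so color below is an illustrative stand-in:

$ kubectl label node latest-worker2 color=blue               # daemon pod should be scheduled once the selector matches
$ kubectl label node latest-worker2 color=green --overwrite  # a later step flips the value, unscheduling the pod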
May 12 11:39:46.369: INFO: Number of nodes with available pods: 0 May 12 11:39:46.369: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:39:47.373: INFO: Number of nodes with available pods: 0 May 12 11:39:47.373: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:39:48.613: INFO: Number of nodes with available pods: 0 May 12 11:39:48.613: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:39:49.455: INFO: Number of nodes with available pods: 0 May 12 11:39:49.455: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:39:50.719: INFO: Number of nodes with available pods: 0 May 12 11:39:50.719: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:39:51.503: INFO: Number of nodes with available pods: 0 May 12 11:39:51.503: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:39:52.546: INFO: Number of nodes with available pods: 0 May 12 11:39:52.546: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:39:53.491: INFO: Number of nodes with available pods: 1 May 12 11:39:53.491: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 12 11:39:53.920: INFO: Number of nodes with available pods: 1 May 12 11:39:53.920: INFO: Number of running nodes: 0, number of available pods: 1 May 12 11:39:55.006: INFO: Number of nodes with available pods: 0 May 12 11:39:55.006: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 12 11:39:55.757: INFO: Number of nodes with available pods: 0 May 12 11:39:55.758: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:39:56.935: INFO: Number of nodes with available pods: 0 May 12 11:39:56.935: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:39:58.060: INFO: Number of nodes with available pods: 0 May 12 11:39:58.060: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:39:59.124: INFO: Number of nodes with available pods: 0 May 12 11:39:59.124: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:40:00.157: INFO: Number of nodes with available pods: 0 May 12 11:40:00.157: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:40:00.930: INFO: Number of nodes with available pods: 0 May 12 11:40:00.930: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:40:01.761: INFO: Number of nodes with available pods: 0 May 12 11:40:01.761: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:40:02.987: INFO: Number of nodes with available pods: 0 May 12 11:40:02.987: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:40:04.181: INFO: Number of nodes with available pods: 0 May 12 11:40:04.181: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:40:05.463: INFO: Number of nodes with available pods: 0 May 12 11:40:05.463: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:40:05.922: INFO: Number of nodes with available pods: 0 May 12 11:40:05.923: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:40:06.904: INFO: Number of nodes with available pods: 0 May 12 11:40:06.904: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:40:07.838: INFO: Number of nodes with available pods: 0 May 12 11:40:07.838: INFO: Node 
latest-worker2 is running more than one daemon pod May 12 11:40:08.917: INFO: Number of nodes with available pods: 0 May 12 11:40:08.917: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:40:09.779: INFO: Number of nodes with available pods: 0 May 12 11:40:09.779: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:40:10.761: INFO: Number of nodes with available pods: 0 May 12 11:40:10.761: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:40:11.762: INFO: Number of nodes with available pods: 0 May 12 11:40:11.762: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:40:12.875: INFO: Number of nodes with available pods: 1 May 12 11:40:12.875: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5051, will wait for the garbage collector to delete the pods May 12 11:40:12.938: INFO: Deleting DaemonSet.extensions daemon-set took: 7.371582ms May 12 11:40:13.239: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.279922ms May 12 11:40:25.827: INFO: Number of nodes with available pods: 0 May 12 11:40:25.827: INFO: Number of running nodes: 0, number of available pods: 0 May 12 11:40:25.831: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5051/daemonsets","resourceVersion":"3798615"},"items":null} May 12 11:40:25.834: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5051/pods","resourceVersion":"3798615"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:40:25.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5051" for this suite. 
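The "Update DaemonSet node selector to green, and change its update strategy to RollingUpdate" step above amounts to a patch plus a node relabel. A hedged equivalent, reusing the illustrative objects from the earlier sketch:

# Move the DaemonSet to green nodes and switch its update strategy.
kubectl patch daemonset daemon-set-demo --type merge -p '{
  "spec": {
    "updateStrategy": {"type": "RollingUpdate"},
    "template": {"spec": {"nodeSelector": {"color": "green"}}}
  }
}'

# Relabel the node: the pod on the formerly-blue node is unscheduled,
# then a replacement launches once a node carries color=green.
kubectl label node latest-worker2 color=green --overwrite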
• [SLOW TEST:40.562 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":288,"completed":190,"skipped":3296,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:40:26.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium May 12 11:40:26.753: INFO: Waiting up to 5m0s for pod "pod-3527ff04-66ca-40a2-8011-5c84a713d05a" in namespace "emptydir-8240" to be "Succeeded or Failed" May 12 11:40:26.775: INFO: Pod "pod-3527ff04-66ca-40a2-8011-5c84a713d05a": Phase="Pending", Reason="", readiness=false. Elapsed: 21.52998ms May 12 11:40:28.787: INFO: Pod "pod-3527ff04-66ca-40a2-8011-5c84a713d05a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033728993s May 12 11:40:30.791: INFO: Pod "pod-3527ff04-66ca-40a2-8011-5c84a713d05a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037557055s May 12 11:40:32.795: INFO: Pod "pod-3527ff04-66ca-40a2-8011-5c84a713d05a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.04215584s STEP: Saw pod success May 12 11:40:32.795: INFO: Pod "pod-3527ff04-66ca-40a2-8011-5c84a713d05a" satisfied condition "Succeeded or Failed" May 12 11:40:32.798: INFO: Trying to get logs from node latest-worker2 pod pod-3527ff04-66ca-40a2-8011-5c84a713d05a container test-container: STEP: delete the pod May 12 11:40:32.901: INFO: Waiting for pod pod-3527ff04-66ca-40a2-8011-5c84a713d05a to disappear May 12 11:40:32.917: INFO: Pod pod-3527ff04-66ca-40a2-8011-5c84a713d05a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:40:32.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8240" for this suite. • [SLOW TEST:6.534 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":191,"skipped":3298,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:40:32.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:40:33.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2965" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":288,"completed":192,"skipped":3300,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:40:33.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 12 11:40:33.326: INFO: Waiting up to 5m0s for pod "downwardapi-volume-adf24e38-5620-406b-8076-4b1409139a7b" in namespace "projected-4424" to be "Succeeded or Failed" May 12 11:40:33.344: INFO: Pod "downwardapi-volume-adf24e38-5620-406b-8076-4b1409139a7b": Phase="Pending", Reason="", readiness=false. Elapsed: 17.307602ms May 12 11:40:35.713: INFO: Pod "downwardapi-volume-adf24e38-5620-406b-8076-4b1409139a7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.386312756s May 12 11:40:37.716: INFO: Pod "downwardapi-volume-adf24e38-5620-406b-8076-4b1409139a7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.389926072s May 12 11:40:39.809: INFO: Pod "downwardapi-volume-adf24e38-5620-406b-8076-4b1409139a7b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.482746839s May 12 11:40:41.893: INFO: Pod "downwardapi-volume-adf24e38-5620-406b-8076-4b1409139a7b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.566700524s STEP: Saw pod success May 12 11:40:41.893: INFO: Pod "downwardapi-volume-adf24e38-5620-406b-8076-4b1409139a7b" satisfied condition "Succeeded or Failed" May 12 11:40:41.896: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-adf24e38-5620-406b-8076-4b1409139a7b container client-container: STEP: delete the pod May 12 11:40:42.337: INFO: Waiting for pod downwardapi-volume-adf24e38-5620-406b-8076-4b1409139a7b to disappear May 12 11:40:42.540: INFO: Pod downwardapi-volume-adf24e38-5620-406b-8076-4b1409139a7b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:40:42.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4424" for this suite. • [SLOW TEST:9.796 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":193,"skipped":3304,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:40:43.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 12 11:40:43.969: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:40:44.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2168" for this suite. 
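The status sub-resource exercised here is the CRD's own /status endpoint. Assuming a CRD named foos.example.com exists (a hypothetical name; the suite creates its own), its status can be read through the raw API path; the test's update and patch steps hit the same endpoint with PUT/PATCH:

# Read the status sub-resource of a CRD (cluster-scoped object).
kubectl get --raw /apis/apiextensions.k8s.io/v1/customresourcedefinitions/foos.example.com/status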
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":288,"completed":194,"skipped":3313,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:40:44.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 12 11:40:45.709: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3624abf1-bb86-4a6d-98b4-1dfb80414617" in namespace "downward-api-8815" to be "Succeeded or Failed" May 12 11:40:45.730: INFO: Pod "downwardapi-volume-3624abf1-bb86-4a6d-98b4-1dfb80414617": Phase="Pending", Reason="", readiness=false. Elapsed: 20.835104ms May 12 11:40:47.733: INFO: Pod "downwardapi-volume-3624abf1-bb86-4a6d-98b4-1dfb80414617": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023536114s May 12 11:40:49.898: INFO: Pod "downwardapi-volume-3624abf1-bb86-4a6d-98b4-1dfb80414617": Phase="Pending", Reason="", readiness=false. Elapsed: 4.18879288s May 12 11:40:52.462: INFO: Pod "downwardapi-volume-3624abf1-bb86-4a6d-98b4-1dfb80414617": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.752607567s STEP: Saw pod success May 12 11:40:52.462: INFO: Pod "downwardapi-volume-3624abf1-bb86-4a6d-98b4-1dfb80414617" satisfied condition "Succeeded or Failed" May 12 11:40:52.465: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-3624abf1-bb86-4a6d-98b4-1dfb80414617 container client-container: STEP: delete the pod May 12 11:40:52.727: INFO: Waiting for pod downwardapi-volume-3624abf1-bb86-4a6d-98b4-1dfb80414617 to disappear May 12 11:40:53.066: INFO: Pod downwardapi-volume-3624abf1-bb86-4a6d-98b4-1dfb80414617 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:40:53.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8815" for this suite. 
• [SLOW TEST:8.198 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":195,"skipped":3327,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:40:53.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 12 11:40:53.725: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"3c247e32-130d-48ff-802e-7f1b53093ac8", Controller:(*bool)(0xc003869f02), BlockOwnerDeletion:(*bool)(0xc003869f03)}} May 12 11:40:53.857: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"44192113-4f60-40ea-bed1-057b23d699f6", Controller:(*bool)(0xc003816372), BlockOwnerDeletion:(*bool)(0xc003816373)}} May 12 11:40:53.907: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"74dcdf5c-7b0a-4d94-b8bd-14f9636d539f", Controller:(*bool)(0xc0037ea06a), BlockOwnerDeletion:(*bool)(0xc0037ea06b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:40:59.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3190" for this suite. 
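The three ownerReferences printed above form a deliberate cycle (pod1's owner is pod3, pod2's is pod1, pod3's is pod2) that the garbage collector must not be blocked by. One hedged way to reproduce the shape outside the suite — pod names and the pause image are placeholders — is to create the pods first and patch the references in afterwards, since UIDs only exist after creation:

# Create three inert pods.
for p in pod1 pod2 pod3; do
  kubectl run "$p" --image=k8s.gcr.io/pause:3.2 --restart=Never
done

# Point pod1's ownerReferences at pod3; repeat analogously for
# pod2 -> pod1 and pod3 -> pod2 to close the cycle.
UID3=$(kubectl get pod pod3 -o jsonpath='{.metadata.uid}')
kubectl patch pod pod1 --type merge -p "{
  \"metadata\": {\"ownerReferences\": [{
    \"apiVersion\": \"v1\", \"kind\": \"Pod\", \"name\": \"pod3\",
    \"uid\": \"$UID3\", \"controller\": true, \"blockOwnerDeletion\": true
  }]}
}"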
• [SLOW TEST:5.936 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":288,"completed":196,"skipped":3333,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:40:59.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 12 11:42:59.272: INFO: Deleting pod "var-expansion-de888016-5460-4c3e-bd50-773301632e34" in namespace "var-expansion-69" May 12 11:42:59.277: INFO: Wait up to 5m0s for pod "var-expansion-de888016-5460-4c3e-bd50-773301632e34" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:43:03.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-69" for this suite. • [SLOW TEST:124.229 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":288,"completed":197,"skipped":3334,"failed":0} SSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:43:03.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
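The pod created in the next step carries a preStop exec hook. In the suite the hook calls back to the HTTPGet handler pod set up above; a self-contained sketch of the same wiring, with a busybox image and a marker file as illustrative substitutes:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: main
    image: busybox:1.31
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      preStop:
        # Runs inside the container before SIGTERM is delivered.
        exec:
          command: ["sh", "-c", "echo prestop ran > /tmp/prestop"]
EOF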
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 12 11:43:11.561: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 11:43:11.581: INFO: Pod pod-with-prestop-exec-hook still exists May 12 11:43:13.581: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 11:43:13.691: INFO: Pod pod-with-prestop-exec-hook still exists May 12 11:43:15.581: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 11:43:15.606: INFO: Pod pod-with-prestop-exec-hook still exists May 12 11:43:17.581: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 11:43:17.585: INFO: Pod pod-with-prestop-exec-hook still exists May 12 11:43:19.581: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 11:43:19.586: INFO: Pod pod-with-prestop-exec-hook still exists May 12 11:43:21.581: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 11:43:21.588: INFO: Pod pod-with-prestop-exec-hook still exists May 12 11:43:23.581: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 11:43:23.586: INFO: Pod pod-with-prestop-exec-hook still exists May 12 11:43:25.581: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 11:43:25.646: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:43:25.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1037" for this suite. 
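The "Waiting for pod pod-with-prestop-exec-hook to disappear" polling above is what the framework does after issuing the delete; from the command line the same wait can be expressed with kubectl wait:

kubectl delete pod pod-with-prestop-exec-hook --wait=false
# Block until the object is actually gone (the preStop hook has run by then).
kubectl wait --for=delete pod/pod-with-prestop-exec-hook --timeout=60s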
• [SLOW TEST:22.562 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":288,"completed":198,"skipped":3338,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:43:25.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium May 12 11:43:26.093: INFO: Waiting up to 5m0s for pod "pod-570ef8dd-c146-492d-a7bb-7cc44e27ec29" in namespace "emptydir-6811" to be "Succeeded or Failed" May 12 11:43:26.193: INFO: Pod "pod-570ef8dd-c146-492d-a7bb-7cc44e27ec29": Phase="Pending", Reason="", readiness=false. Elapsed: 99.877231ms May 12 11:43:28.217: INFO: Pod "pod-570ef8dd-c146-492d-a7bb-7cc44e27ec29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123757568s May 12 11:43:30.240: INFO: Pod "pod-570ef8dd-c146-492d-a7bb-7cc44e27ec29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.14627824s May 12 11:43:32.610: INFO: Pod "pod-570ef8dd-c146-492d-a7bb-7cc44e27ec29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.516291368s STEP: Saw pod success May 12 11:43:32.610: INFO: Pod "pod-570ef8dd-c146-492d-a7bb-7cc44e27ec29" satisfied condition "Succeeded or Failed" May 12 11:43:32.613: INFO: Trying to get logs from node latest-worker pod pod-570ef8dd-c146-492d-a7bb-7cc44e27ec29 container test-container: STEP: delete the pod May 12 11:43:32.918: INFO: Waiting for pod pod-570ef8dd-c146-492d-a7bb-7cc44e27ec29 to disappear May 12 11:43:32.976: INFO: Pod pod-570ef8dd-c146-492d-a7bb-7cc44e27ec29 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:43:32.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6811" for this suite. 
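For the (non-root,0644,default) case just verified: the pod runs as a non-root UID, writes into an emptyDir on the node's default medium, and the file lands with mode 0644 under the usual 022 umask. A hedged stand-alone equivalent, where UID 1000, the names, and busybox are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  containers:
  - name: test-container
    image: busybox:1.31
    # emptyDir is world-writable by default, so uid 1000 can create
    # the file; stat should report 644 given the default umask.
    command: ["sh", "-c", "echo hello > /mnt/v/f && stat -c '%a' /mnt/v/f"]
    volumeMounts:
    - name: v
      mountPath: /mnt/v
  volumes:
  - name: v
    emptyDir: {}
EOF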
• [SLOW TEST:7.365 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":199,"skipped":3362,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:43:33.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 11:43:35.129: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 11:43:37.481: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724880615, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724880615, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724880615, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724880614, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 11:43:39.854: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724880615, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724880615, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724880615, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724880614, 
loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 11:43:41.484: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724880615, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724880615, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724880615, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724880614, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 11:43:44.565: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 12 11:43:44.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8559-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:43:45.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6328" for this suite. STEP: Destroying namespace "webhook-6328-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.362 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":288,"completed":200,"skipped":3439,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:43:46.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:43:58.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4115" for this suite. • [SLOW TEST:12.397 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":288,"completed":201,"skipped":3466,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:43:59.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7968.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-7968.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7968.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-7968.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7968.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7968.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-7968.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7968.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-7968.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7968.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 12 11:44:09.341: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:09.344: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:09.347: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:09.350: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:09.360: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:09.363: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:09.366: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7968.svc.cluster.local from pod 
dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:09.368: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:09.375: INFO: Lookups using dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7968.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7968.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local jessie_udp@dns-test-service-2.dns-7968.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7968.svc.cluster.local] May 12 11:44:14.688: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:14.841: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:14.844: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:14.861: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:14.870: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:14.873: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:14.876: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:14.878: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:14.883: INFO: Lookups using dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-7968.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7968.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local jessie_udp@dns-test-service-2.dns-7968.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7968.svc.cluster.local] May 12 11:44:19.380: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:19.444: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:19.448: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:19.568: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:19.595: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:19.598: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:19.599: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:19.602: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:19.606: INFO: Lookups using dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7968.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7968.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local jessie_udp@dns-test-service-2.dns-7968.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7968.svc.cluster.local] May 12 11:44:24.482: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:24.486: INFO: Unable to read 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:24.488: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:24.491: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:24.544: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:24.549: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:25.494: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:25.499: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:25.919: INFO: Lookups using dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7968.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7968.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local jessie_udp@dns-test-service-2.dns-7968.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7968.svc.cluster.local] May 12 11:44:29.518: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:29.595: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:29.598: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:29.600: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested 
resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:29.715: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:29.718: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:29.720: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:29.722: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:31.183: INFO: Lookups using dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7968.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7968.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local jessie_udp@dns-test-service-2.dns-7968.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7968.svc.cluster.local] May 12 11:44:34.392: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:34.396: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:34.400: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:34.403: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:34.412: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:34.415: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:34.419: INFO: Unable to read 
jessie_udp@dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:34.422: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7968.svc.cluster.local from pod dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58: the server could not find the requested resource (get pods dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58) May 12 11:44:34.428: INFO: Lookups using dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7968.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7968.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7968.svc.cluster.local jessie_udp@dns-test-service-2.dns-7968.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7968.svc.cluster.local] May 12 11:44:39.436: INFO: DNS probes using dns-7968/dns-test-696e2512-a86b-468a-8dba-bb34ad33eb58 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:44:40.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7968" for this suite. • [SLOW TEST:41.528 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":288,"completed":202,"skipped":3483,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:44:40.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-72cb50f7-0943-41b7-9259-ff903f40d804 STEP: Creating a pod to test consume secrets May 12 11:44:41.131: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-42183042-f97f-4dbf-a811-54405aa5770b" in namespace "projected-8087" to be "Succeeded or Failed" May 12 11:44:41.303: INFO: Pod "pod-projected-secrets-42183042-f97f-4dbf-a811-54405aa5770b": Phase="Pending", Reason="", readiness=false. Elapsed: 172.055042ms May 12 11:44:43.935: INFO: Pod "pod-projected-secrets-42183042-f97f-4dbf-a811-54405aa5770b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.803641959s May 12 11:44:45.957: INFO: Pod "pod-projected-secrets-42183042-f97f-4dbf-a811-54405aa5770b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.825402045s May 12 11:44:48.638: INFO: Pod "pod-projected-secrets-42183042-f97f-4dbf-a811-54405aa5770b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.507101852s May 12 11:44:50.789: INFO: Pod "pod-projected-secrets-42183042-f97f-4dbf-a811-54405aa5770b": Phase="Running", Reason="", readiness=true. Elapsed: 9.657908969s May 12 11:44:53.315: INFO: Pod "pod-projected-secrets-42183042-f97f-4dbf-a811-54405aa5770b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.183568857s STEP: Saw pod success May 12 11:44:53.315: INFO: Pod "pod-projected-secrets-42183042-f97f-4dbf-a811-54405aa5770b" satisfied condition "Succeeded or Failed" May 12 11:44:53.566: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-42183042-f97f-4dbf-a811-54405aa5770b container projected-secret-volume-test: STEP: delete the pod May 12 11:44:54.202: INFO: Waiting for pod pod-projected-secrets-42183042-f97f-4dbf-a811-54405aa5770b to disappear May 12 11:44:54.210: INFO: Pod pod-projected-secrets-42183042-f97f-4dbf-a811-54405aa5770b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:44:54.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8087" for this suite. • [SLOW TEST:13.691 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":203,"skipped":3518,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:44:54.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 12 11:44:54.459: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 12 11:44:57.491: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8835 create -f -' May 12 11:45:05.462: INFO: stderr: "" May 12 11:45:05.462: INFO: stdout: "e2e-test-crd-publish-openapi-6118-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 12 11:45:05.462: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8835 delete e2e-test-crd-publish-openapi-6118-crds test-foo' May 12 11:45:05.604: INFO: stderr: "" May 12 11:45:05.604: INFO: stdout: "e2e-test-crd-publish-openapi-6118-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 12 11:45:05.604: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8835 apply -f -' May 12 11:45:05.938: INFO: stderr: "" May 12 11:45:05.938: INFO: stdout: "e2e-test-crd-publish-openapi-6118-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 12 11:45:05.938: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8835 delete e2e-test-crd-publish-openapi-6118-crds test-foo' May 12 11:45:06.388: INFO: stderr: "" May 12 11:45:06.388: INFO: stdout: "e2e-test-crd-publish-openapi-6118-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 12 11:45:06.388: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8835 create -f -' May 12 11:45:06.612: INFO: rc: 1 May 12 11:45:06.612: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8835 apply -f -' May 12 11:45:06.849: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 12 11:45:06.849: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8835 create -f -' May 12 11:45:07.045: INFO: rc: 1 May 12 11:45:07.045: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8835 apply -f -' May 12 11:45:07.258: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 12 11:45:07.258: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6118-crds' May 12 11:45:07.573: INFO: stderr: "" May 12 11:45:07.573: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6118-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 12 11:45:07.573: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6118-crds.metadata' May 12 11:45:08.113: INFO: stderr: "" May 12 11:45:08.114: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6118-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. 
Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 12 11:45:08.114: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6118-crds.spec' May 12 11:45:08.956: INFO: stderr: "" May 12 11:45:08.956: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6118-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 12 11:45:08.957: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6118-crds.spec.bars' May 12 11:45:09.716: INFO: stderr: "" May 12 11:45:09.716: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6118-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 12 11:45:09.716: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6118-crds.spec.bars2' May 12 11:45:10.218: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:45:13.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8835" for this suite. 
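The client-side validation and kubectl explain steps above can be replayed by hand. A minimal sketch, assuming the test CRD from this run is still installed; the group, version, and kind come straight from the kubectl output above, and the bar name is illustrative:

# Create a valid CR: per the published schema, spec.bars[].name is the only required field.
cat <<EOF | kubectl --namespace=crd-publish-openapi-8835 create -f -
apiVersion: crd-publish-openapi-test-foo.example.com/v1
kind: E2e-test-crd-publish-openapi-6118-crd
metadata:
  name: test-foo
spec:
  bars:
  - name: example-bar
EOF
# Omitting the required field, or adding an unknown one, fails client-side
# validation before the request reaches the server (the rc: 1 results above).
kubectl explain e2e-test-crd-publish-openapi-6118-crds.spec.bars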
• [SLOW TEST:19.093 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":288,"completed":204,"skipped":3521,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:45:13.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC May 12 11:45:14.256: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3545' May 12 11:45:15.754: INFO: stderr: "" May 12 11:45:15.754: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 12 11:45:16.762: INFO: Selector matched 1 pods for map[app:agnhost] May 12 11:45:16.762: INFO: Found 0 / 1 May 12 11:45:17.860: INFO: Selector matched 1 pods for map[app:agnhost] May 12 11:45:17.860: INFO: Found 0 / 1 May 12 11:45:19.043: INFO: Selector matched 1 pods for map[app:agnhost] May 12 11:45:19.043: INFO: Found 0 / 1 May 12 11:45:19.906: INFO: Selector matched 1 pods for map[app:agnhost] May 12 11:45:19.906: INFO: Found 0 / 1 May 12 11:45:20.758: INFO: Selector matched 1 pods for map[app:agnhost] May 12 11:45:20.758: INFO: Found 0 / 1 May 12 11:45:21.763: INFO: Selector matched 1 pods for map[app:agnhost] May 12 11:45:21.763: INFO: Found 1 / 1 May 12 11:45:21.763: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 12 11:45:21.767: INFO: Selector matched 1 pods for map[app:agnhost] May 12 11:45:21.767: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 12 11:45:21.767: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config patch pod agnhost-master-lbbz9 --namespace=kubectl-3545 -p {"metadata":{"annotations":{"x":"y"}}}' May 12 11:45:21.873: INFO: stderr: "" May 12 11:45:21.873: INFO: stdout: "pod/agnhost-master-lbbz9 patched\n" STEP: checking annotations May 12 11:45:21.878: INFO: Selector matched 1 pods for map[app:agnhost] May 12 11:45:21.878: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:45:21.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3545" for this suite. • [SLOW TEST:8.571 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1468 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":288,"completed":205,"skipped":3541,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:45:21.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:45:22.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-371" for this suite. 
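For context on the 406 check above: clients opt into server-side Table rendering through the Accept header, and a backend that cannot produce Table output answers 406 Not Acceptable. A rough sketch of the request shape, assuming a bearer token in $TOKEN; the server address is the one used throughout this run:

# Ask the apiserver to render pods as a meta.k8s.io/v1 Table (what kubectl
# does for 'get'); a backend without Table support returns 406 instead.
curl -sk -H "Authorization: Bearer $TOKEN" \
  -H 'Accept: application/json;as=Table;v=v1;g=meta.k8s.io' \
  https://172.30.12.66:32773/api/v1/namespaces/default/pods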
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":288,"completed":206,"skipped":3546,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:45:22.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0512 11:45:24.334017 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 12 11:45:24.334: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:45:24.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3822" for this suite. 
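What the garbage-collector check above exercises is deletion propagation: deleting the Deployment without orphaning lets the GC remove the dependent ReplicaSet and Pods through their ownerReferences. A sketch of the equivalent API call, with a hypothetical deployment name; DeleteOptions carries the propagation policy:

kubectl proxy --port=8001 &
# Background propagation: the server deletes the Deployment immediately and
# the garbage collector then deletes the owned ReplicaSet and Pods.
curl -X DELETE \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Background"}' \
  http://localhost:8001/apis/apps/v1/namespaces/default/deployments/example-deployment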
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":288,"completed":207,"skipped":3553,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:45:24.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-35.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-35.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-35.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-35.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-35.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-35.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-35.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-35.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-35.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-35.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-35.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-35.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-35.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 174.224.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.224.174_udp@PTR;check="$$(dig +tcp +noall +answer +search 174.224.110.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.110.224.174_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-35.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-35.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-35.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-35.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-35.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-35.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-35.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-35.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-35.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-35.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-35.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-35.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-35.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 174.224.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.224.174_udp@PTR;check="$$(dig +tcp +noall +answer +search 174.224.110.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.110.224.174_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 12 11:45:34.964: INFO: Unable to read wheezy_udp@dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:34.967: INFO: Unable to read wheezy_tcp@dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:34.969: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:34.971: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:35.000: INFO: Unable to read jessie_udp@dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:35.003: INFO: Unable to read jessie_tcp@dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:35.005: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:35.007: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:35.021: INFO: Lookups using dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb failed for: [wheezy_udp@dns-test-service.dns-35.svc.cluster.local wheezy_tcp@dns-test-service.dns-35.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-35.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-35.svc.cluster.local jessie_udp@dns-test-service.dns-35.svc.cluster.local jessie_tcp@dns-test-service.dns-35.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-35.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-35.svc.cluster.local] May 12 11:45:40.733: INFO: Unable to read wheezy_udp@dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:41.198: INFO: Unable to read wheezy_tcp@dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:41.538: INFO: Unable to read 
wheezy_udp@_http._tcp.dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:41.591: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:41.993: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: Get https://172.30.12.66:32773/api/v1/namespaces/dns-35/pods/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb/proxy/results/wheezy_tcp@_http._tcp.test-service-2.dns-35.svc.cluster.local: stream error: stream ID 11923; INTERNAL_ERROR May 12 11:45:42.620: INFO: Unable to read jessie_udp@dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:42.800: INFO: Unable to read jessie_tcp@dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:42.817: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:42.867: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:43.037: INFO: Lookups using dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb failed for: [wheezy_udp@dns-test-service.dns-35.svc.cluster.local wheezy_tcp@dns-test-service.dns-35.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-35.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-35.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-35.svc.cluster.local jessie_udp@dns-test-service.dns-35.svc.cluster.local jessie_tcp@dns-test-service.dns-35.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-35.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-35.svc.cluster.local] May 12 11:45:45.027: INFO: Unable to read wheezy_udp@dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:45.030: INFO: Unable to read wheezy_tcp@dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:45.032: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:45.035: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-35.svc.cluster.local from pod 
dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:45.071: INFO: Unable to read jessie_udp@dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:45.074: INFO: Unable to read jessie_tcp@dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:45.076: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:45.078: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:45.093: INFO: Lookups using dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb failed for: [wheezy_udp@dns-test-service.dns-35.svc.cluster.local wheezy_tcp@dns-test-service.dns-35.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-35.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-35.svc.cluster.local jessie_udp@dns-test-service.dns-35.svc.cluster.local jessie_tcp@dns-test-service.dns-35.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-35.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-35.svc.cluster.local] May 12 11:45:50.026: INFO: Unable to read wheezy_udp@dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:50.029: INFO: Unable to read wheezy_tcp@dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:50.032: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:50.035: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:50.054: INFO: Unable to read jessie_udp@dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:50.058: INFO: Unable to read jessie_tcp@dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:50.061: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-35.svc.cluster.local from pod 
dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:50.064: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:50.080: INFO: Lookups using dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb failed for: [wheezy_udp@dns-test-service.dns-35.svc.cluster.local wheezy_tcp@dns-test-service.dns-35.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-35.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-35.svc.cluster.local jessie_udp@dns-test-service.dns-35.svc.cluster.local jessie_tcp@dns-test-service.dns-35.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-35.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-35.svc.cluster.local] May 12 11:45:55.024: INFO: Unable to read wheezy_udp@dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:55.027: INFO: Unable to read wheezy_tcp@dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:55.029: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:55.032: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:55.048: INFO: Unable to read jessie_udp@dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:55.051: INFO: Unable to read jessie_tcp@dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:55.053: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:55.056: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:45:55.070: INFO: Lookups using dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb failed for: [wheezy_udp@dns-test-service.dns-35.svc.cluster.local wheezy_tcp@dns-test-service.dns-35.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-35.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-35.svc.cluster.local 
jessie_udp@dns-test-service.dns-35.svc.cluster.local jessie_tcp@dns-test-service.dns-35.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-35.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-35.svc.cluster.local] May 12 11:46:00.126: INFO: Unable to read wheezy_udp@dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:46:00.130: INFO: Unable to read wheezy_tcp@dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:46:00.370: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:46:00.408: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:46:00.434: INFO: Unable to read jessie_udp@dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:46:00.436: INFO: Unable to read jessie_tcp@dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:46:00.441: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:46:00.443: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-35.svc.cluster.local from pod dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb: the server could not find the requested resource (get pods dns-test-11415184-a3ae-402b-9541-3c9b8803aefb) May 12 11:46:00.457: INFO: Lookups using dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb failed for: [wheezy_udp@dns-test-service.dns-35.svc.cluster.local wheezy_tcp@dns-test-service.dns-35.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-35.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-35.svc.cluster.local jessie_udp@dns-test-service.dns-35.svc.cluster.local jessie_tcp@dns-test-service.dns-35.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-35.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-35.svc.cluster.local] May 12 11:46:06.743: INFO: DNS probes using dns-35/dns-test-11415184-a3ae-402b-9541-3c9b8803aefb succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:46:08.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-35" for this suite. 
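Condensing the probe scripts above: each record type is checked with a one-line dig, and an OK marker file is written only when an answer comes back, which is what the prober later reads through the pod proxy. A trimmed sketch of one probe, with names exactly as in this run; the doubled $$ in the logged commands is escaping from the outer command wrapper, so a standalone script uses a single $:

for i in $(seq 1 600); do
  # UDP A-record lookup for the headless service; write a marker on success.
  check="$(dig +notcp +noall +answer +search dns-test-service.dns-35.svc.cluster.local A)" &&
    test -n "$check" &&
    echo OK > '/results/wheezy_udp@dns-test-service.dns-35.svc.cluster.local'
  sleep 1
done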
• [SLOW TEST:44.112 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":288,"completed":208,"skipped":3570,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:46:08.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 11:46:11.231: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 11:46:13.400: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724880771, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724880771, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724880771, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724880770, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 11:46:15.825: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724880771, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724880771, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724880771, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724880770, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 11:46:17.771: INFO: deployment 
status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724880771, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724880771, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724880771, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724880770, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 11:46:20.573: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 12 11:46:28.621: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config attach --namespace=webhook-1632 to-be-attached-pod -i -c=container1' May 12 11:46:28.760: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:46:28.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1632" for this suite. STEP: Destroying namespace "webhook-1632-markers" for this suite. 
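The denial above comes from a webhook registered on the CONNECT verb against the pods/attach subresource. A rough sketch of the registration shape, with assumed configuration name and handler path and a placeholder CA bundle; the service coordinates mirror the webhook-1632 deployment above:

cat <<EOF | kubectl create -f -
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-attaching-pod.example.com   # assumed name
webhooks:
- name: deny-attaching-pod.example.com
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CONNECT"]
    resources: ["pods/attach"]
  clientConfig:
    service:
      namespace: webhook-1632
      name: e2e-test-webhook
      path: /pods/attach   # assumed handler path
    caBundle: Cg==         # placeholder, not a real CA
  sideEffects: None
  admissionReviewVersions: ["v1"]
  failurePolicy: Fail
EOF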
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:21.284 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":288,"completed":209,"skipped":3577,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:46:29.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:46:31.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5709" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":288,"completed":210,"skipped":3597,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:46:31.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:80 May 12 11:46:31.835: INFO: Waiting up to 1m0s for all nodes to be ready May 12 11:47:31.858: INFO: Waiting for terminating namespaces to be deleted... 
[BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:47:31.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:467 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. May 12 11:47:38.095: INFO: found a healthy node: latest-worker [It] runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 12 11:48:07.531: INFO: pods created so far: [1 1 1] May 12 11:48:07.531: INFO: length of pods created so far: 3 May 12 11:48:25.759: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:48:32.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-8286" for this suite. [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:439 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:48:33.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-8409" for this suite. 
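The preemption path above is driven by pod priority: workloads at different PriorityClass values are created on the chosen node and the scheduler evicts lower-priority pods to place higher-priority ones. A minimal sketch of the objects involved, with assumed name and value:

cat <<EOF | kubectl apply -f -
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority   # assumed name
value: 1000000          # assumed value; higher wins during preemption
globalDefault: false
description: Pods at this priority may preempt lower-priority pods.
EOF
# A workload opts in by name:
#   spec.template.spec.priorityClassName: high-priority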
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:74 • [SLOW TEST:122.930 seconds] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:428 runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":288,"completed":211,"skipped":3625,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:48:34.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-9709 STEP: creating replication controller nodeport-test in namespace services-9709 I0512 11:48:34.299405 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-9709, replica count: 2 I0512 11:48:37.349738 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 11:48:40.349935 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 11:48:43.350172 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 12 11:48:43.350: INFO: Creating new exec pod May 12 11:48:50.455: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9709 execpodnzcx9 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 12 11:48:50.691: INFO: stderr: "I0512 11:48:50.600498 3996 log.go:172] (0xc000b1bc30) (0xc000c4e5a0) Create stream\nI0512 11:48:50.600561 3996 log.go:172] (0xc000b1bc30) (0xc000c4e5a0) Stream added, broadcasting: 1\nI0512 11:48:50.605573 3996 log.go:172] (0xc000b1bc30) Reply frame received for 1\nI0512 11:48:50.605614 3996 log.go:172] (0xc000b1bc30) (0xc0006780a0) Create stream\nI0512 11:48:50.605624 3996 log.go:172] (0xc000b1bc30) (0xc0006780a0) Stream added, broadcasting: 3\nI0512 11:48:50.606425 3996 log.go:172] (0xc000b1bc30) Reply frame received for 3\nI0512 11:48:50.606471 3996 log.go:172] (0xc000b1bc30) (0xc00065cbe0) Create stream\nI0512 11:48:50.606481 3996 log.go:172] 
(0xc000b1bc30) (0xc00065cbe0) Stream added, broadcasting: 5\nI0512 11:48:50.607191 3996 log.go:172] (0xc000b1bc30) Reply frame received for 5\nI0512 11:48:50.684051 3996 log.go:172] (0xc000b1bc30) Data frame received for 3\nI0512 11:48:50.684092 3996 log.go:172] (0xc0006780a0) (3) Data frame handling\nI0512 11:48:50.684114 3996 log.go:172] (0xc000b1bc30) Data frame received for 5\nI0512 11:48:50.684123 3996 log.go:172] (0xc00065cbe0) (5) Data frame handling\nI0512 11:48:50.684139 3996 log.go:172] (0xc00065cbe0) (5) Data frame sent\nI0512 11:48:50.684155 3996 log.go:172] (0xc000b1bc30) Data frame received for 5\nI0512 11:48:50.684166 3996 log.go:172] (0xc00065cbe0) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0512 11:48:50.685961 3996 log.go:172] (0xc000b1bc30) Data frame received for 1\nI0512 11:48:50.685986 3996 log.go:172] (0xc000c4e5a0) (1) Data frame handling\nI0512 11:48:50.686001 3996 log.go:172] (0xc000c4e5a0) (1) Data frame sent\nI0512 11:48:50.686016 3996 log.go:172] (0xc000b1bc30) (0xc000c4e5a0) Stream removed, broadcasting: 1\nI0512 11:48:50.686037 3996 log.go:172] (0xc000b1bc30) Go away received\nI0512 11:48:50.686382 3996 log.go:172] (0xc000b1bc30) (0xc000c4e5a0) Stream removed, broadcasting: 1\nI0512 11:48:50.686397 3996 log.go:172] (0xc000b1bc30) (0xc0006780a0) Stream removed, broadcasting: 3\nI0512 11:48:50.686404 3996 log.go:172] (0xc000b1bc30) (0xc00065cbe0) Stream removed, broadcasting: 5\n" May 12 11:48:50.691: INFO: stdout: "" May 12 11:48:50.692: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9709 execpodnzcx9 -- /bin/sh -x -c nc -zv -t -w 2 10.107.97.70 80' May 12 11:48:50.943: INFO: stderr: "I0512 11:48:50.866295 4017 log.go:172] (0xc0009bcfd0) (0xc000a963c0) Create stream\nI0512 11:48:50.866355 4017 log.go:172] (0xc0009bcfd0) (0xc000a963c0) Stream added, broadcasting: 1\nI0512 11:48:50.870874 4017 log.go:172] (0xc0009bcfd0) Reply frame received for 1\nI0512 11:48:50.870919 4017 log.go:172] (0xc0009bcfd0) (0xc00082e640) Create stream\nI0512 11:48:50.870937 4017 log.go:172] (0xc0009bcfd0) (0xc00082e640) Stream added, broadcasting: 3\nI0512 11:48:50.871786 4017 log.go:172] (0xc0009bcfd0) Reply frame received for 3\nI0512 11:48:50.871827 4017 log.go:172] (0xc0009bcfd0) (0xc000820320) Create stream\nI0512 11:48:50.871842 4017 log.go:172] (0xc0009bcfd0) (0xc000820320) Stream added, broadcasting: 5\nI0512 11:48:50.872541 4017 log.go:172] (0xc0009bcfd0) Reply frame received for 5\nI0512 11:48:50.936349 4017 log.go:172] (0xc0009bcfd0) Data frame received for 5\nI0512 11:48:50.936377 4017 log.go:172] (0xc000820320) (5) Data frame handling\nI0512 11:48:50.936389 4017 log.go:172] (0xc000820320) (5) Data frame sent\nI0512 11:48:50.936401 4017 log.go:172] (0xc0009bcfd0) Data frame received for 5\nI0512 11:48:50.936408 4017 log.go:172] (0xc000820320) (5) Data frame handling\n+ nc -zv -t -w 2 10.107.97.70 80\nConnection to 10.107.97.70 80 port [tcp/http] succeeded!\nI0512 11:48:50.936426 4017 log.go:172] (0xc0009bcfd0) Data frame received for 3\nI0512 11:48:50.936431 4017 log.go:172] (0xc00082e640) (3) Data frame handling\nI0512 11:48:50.938171 4017 log.go:172] (0xc0009bcfd0) Data frame received for 1\nI0512 11:48:50.938186 4017 log.go:172] (0xc000a963c0) (1) Data frame handling\nI0512 11:48:50.938197 4017 log.go:172] (0xc000a963c0) (1) Data frame sent\nI0512 11:48:50.938207 4017 log.go:172] (0xc0009bcfd0) (0xc000a963c0) 
Stream removed, broadcasting: 1\nI0512 11:48:50.938332 4017 log.go:172] (0xc0009bcfd0) Go away received\nI0512 11:48:50.938458 4017 log.go:172] (0xc0009bcfd0) (0xc000a963c0) Stream removed, broadcasting: 1\nI0512 11:48:50.938476 4017 log.go:172] (0xc0009bcfd0) (0xc00082e640) Stream removed, broadcasting: 3\nI0512 11:48:50.938485 4017 log.go:172] (0xc0009bcfd0) (0xc000820320) Stream removed, broadcasting: 5\n" May 12 11:48:50.943: INFO: stdout: "" May 12 11:48:50.943: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9709 execpodnzcx9 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 32616' May 12 11:48:51.207: INFO: stderr: "I0512 11:48:51.128924 4039 log.go:172] (0xc0009a2e70) (0xc0003201e0) Create stream\nI0512 11:48:51.128987 4039 log.go:172] (0xc0009a2e70) (0xc0003201e0) Stream added, broadcasting: 1\nI0512 11:48:51.131820 4039 log.go:172] (0xc0009a2e70) Reply frame received for 1\nI0512 11:48:51.131871 4039 log.go:172] (0xc0009a2e70) (0xc0001f6000) Create stream\nI0512 11:48:51.131891 4039 log.go:172] (0xc0009a2e70) (0xc0001f6000) Stream added, broadcasting: 3\nI0512 11:48:51.132654 4039 log.go:172] (0xc0009a2e70) Reply frame received for 3\nI0512 11:48:51.132692 4039 log.go:172] (0xc0009a2e70) (0xc0003ac460) Create stream\nI0512 11:48:51.132711 4039 log.go:172] (0xc0009a2e70) (0xc0003ac460) Stream added, broadcasting: 5\nI0512 11:48:51.133925 4039 log.go:172] (0xc0009a2e70) Reply frame received for 5\nI0512 11:48:51.202282 4039 log.go:172] (0xc0009a2e70) Data frame received for 3\nI0512 11:48:51.202322 4039 log.go:172] (0xc0001f6000) (3) Data frame handling\nI0512 11:48:51.202368 4039 log.go:172] (0xc0009a2e70) Data frame received for 5\nI0512 11:48:51.202390 4039 log.go:172] (0xc0003ac460) (5) Data frame handling\nI0512 11:48:51.202411 4039 log.go:172] (0xc0003ac460) (5) Data frame sent\nI0512 11:48:51.202432 4039 log.go:172] (0xc0009a2e70) Data frame received for 5\nI0512 11:48:51.202441 4039 log.go:172] (0xc0003ac460) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 32616\nConnection to 172.17.0.13 32616 port [tcp/32616] succeeded!\nI0512 11:48:51.203619 4039 log.go:172] (0xc0009a2e70) Data frame received for 1\nI0512 11:48:51.203638 4039 log.go:172] (0xc0003201e0) (1) Data frame handling\nI0512 11:48:51.203647 4039 log.go:172] (0xc0003201e0) (1) Data frame sent\nI0512 11:48:51.203656 4039 log.go:172] (0xc0009a2e70) (0xc0003201e0) Stream removed, broadcasting: 1\nI0512 11:48:51.203677 4039 log.go:172] (0xc0009a2e70) Go away received\nI0512 11:48:51.203922 4039 log.go:172] (0xc0009a2e70) (0xc0003201e0) Stream removed, broadcasting: 1\nI0512 11:48:51.203933 4039 log.go:172] (0xc0009a2e70) (0xc0001f6000) Stream removed, broadcasting: 3\nI0512 11:48:51.203939 4039 log.go:172] (0xc0009a2e70) (0xc0003ac460) Stream removed, broadcasting: 5\n" May 12 11:48:51.207: INFO: stdout: "" May 12 11:48:51.207: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9709 execpodnzcx9 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 32616' May 12 11:48:51.408: INFO: stderr: "I0512 11:48:51.334557 4056 log.go:172] (0xc0009b9340) (0xc000716000) Create stream\nI0512 11:48:51.334612 4056 log.go:172] (0xc0009b9340) (0xc000716000) Stream added, broadcasting: 1\nI0512 11:48:51.337832 4056 log.go:172] (0xc0009b9340) Reply frame received for 1\nI0512 11:48:51.337875 4056 log.go:172] (0xc0009b9340) (0xc00038b860) Create stream\nI0512 11:48:51.337896 4056 
log.go:172] (0xc0009b9340) (0xc00038b860) Stream added, broadcasting: 3\nI0512 11:48:51.338890 4056 log.go:172] (0xc0009b9340) Reply frame received for 3\nI0512 11:48:51.338919 4056 log.go:172] (0xc0009b9340) (0xc00038bae0) Create stream\nI0512 11:48:51.338932 4056 log.go:172] (0xc0009b9340) (0xc00038bae0) Stream added, broadcasting: 5\nI0512 11:48:51.340141 4056 log.go:172] (0xc0009b9340) Reply frame received for 5\nI0512 11:48:51.402160 4056 log.go:172] (0xc0009b9340) Data frame received for 5\nI0512 11:48:51.402216 4056 log.go:172] (0xc00038bae0) (5) Data frame handling\nI0512 11:48:51.402230 4056 log.go:172] (0xc00038bae0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 32616\nConnection to 172.17.0.12 32616 port [tcp/32616] succeeded!\nI0512 11:48:51.402333 4056 log.go:172] (0xc0009b9340) Data frame received for 3\nI0512 11:48:51.402361 4056 log.go:172] (0xc00038b860) (3) Data frame handling\nI0512 11:48:51.402400 4056 log.go:172] (0xc0009b9340) Data frame received for 5\nI0512 11:48:51.402418 4056 log.go:172] (0xc00038bae0) (5) Data frame handling\nI0512 11:48:51.403478 4056 log.go:172] (0xc0009b9340) Data frame received for 1\nI0512 11:48:51.403516 4056 log.go:172] (0xc000716000) (1) Data frame handling\nI0512 11:48:51.403543 4056 log.go:172] (0xc000716000) (1) Data frame sent\nI0512 11:48:51.403562 4056 log.go:172] (0xc0009b9340) (0xc000716000) Stream removed, broadcasting: 1\nI0512 11:48:51.403585 4056 log.go:172] (0xc0009b9340) Go away received\nI0512 11:48:51.404062 4056 log.go:172] (0xc0009b9340) (0xc000716000) Stream removed, broadcasting: 1\nI0512 11:48:51.404080 4056 log.go:172] (0xc0009b9340) (0xc00038b860) Stream removed, broadcasting: 3\nI0512 11:48:51.404090 4056 log.go:172] (0xc0009b9340) (0xc00038bae0) Stream removed, broadcasting: 5\n" May 12 11:48:51.408: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:48:51.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9709" for this suite. 
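The connectivity checks above probe the service by DNS name, by ClusterIP, and by each node's IP plus the allocated NodePort. The same three probes can be run by hand; a minimal sketch, assuming a deployment named webserver already serves port 80 and an exec pod with nc available exists (all names are illustrative):

# expose on a NodePort, then read back the allocated port and addresses
kubectl expose deployment webserver --name=nodeport-demo --type=NodePort --port=80
NODE_PORT=$(kubectl get svc nodeport-demo -o jsonpath='{.spec.ports[0].nodePort}')
CLUSTER_IP=$(kubectl get svc nodeport-demo -o jsonpath='{.spec.clusterIP}')
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
# from inside the cluster: service DNS name, ClusterIP, then node IP + NodePort
kubectl exec execpod -- sh -c "nc -zv -t -w 2 nodeport-demo 80"
kubectl exec execpod -- sh -c "nc -zv -t -w 2 $CLUSTER_IP 80"
kubectl exec execpod -- sh -c "nc -zv -t -w 2 $NODE_IP $NODE_PORT"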
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:17.336 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":288,"completed":212,"skipped":3644,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:48:51.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy May 12 11:48:51.656: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix675783205/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:48:51.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6748" for this suite. 
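The --unix-socket mode tested above serves the proxied API over a local socket instead of a TCP port. It can be exercised directly with curl; a minimal sketch, assuming kubectl and curl 7.40+ (for --unix-socket) on the same host, with an illustrative socket path:

# start the proxy on a unix socket in the background
kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
PROXY_PID=$!
# curl speaks HTTP over the socket; the hostname in the URL is ignored
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
kill $PROXY_PID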
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":288,"completed":213,"skipped":3698,"failed":0} ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:48:51.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod May 12 11:50:53.038: INFO: Successfully updated pod "var-expansion-1b2aff10-d550-4302-bb75-4d8130cf8c8e" STEP: waiting for pod running STEP: deleting the pod gracefully May 12 11:50:57.732: INFO: Deleting pod "var-expansion-1b2aff10-d550-4302-bb75-4d8130cf8c8e" in namespace "var-expansion-8433" May 12 11:50:57.976: INFO: Wait up to 5m0s for pod "var-expansion-1b2aff10-d550-4302-bb75-4d8130cf8c8e" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:51:33.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8433" for this suite. 
• [SLOW TEST:162.182 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":288,"completed":214,"skipped":3698,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:51:33.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-4a62a77c-4638-4527-9a24-10e7e79817bc STEP: Creating a pod to test consume secrets May 12 11:51:34.938: INFO: Waiting up to 5m0s for pod "pod-secrets-bbf4d5ae-a78d-4375-9b3d-aac56f1be3a6" in namespace "secrets-368" to be "Succeeded or Failed" May 12 11:51:35.009: INFO: Pod "pod-secrets-bbf4d5ae-a78d-4375-9b3d-aac56f1be3a6": Phase="Pending", Reason="", readiness=false. Elapsed: 70.257547ms May 12 11:51:37.238: INFO: Pod "pod-secrets-bbf4d5ae-a78d-4375-9b3d-aac56f1be3a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.299306291s May 12 11:51:39.240: INFO: Pod "pod-secrets-bbf4d5ae-a78d-4375-9b3d-aac56f1be3a6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.301888593s May 12 11:51:41.743: INFO: Pod "pod-secrets-bbf4d5ae-a78d-4375-9b3d-aac56f1be3a6": Phase="Running", Reason="", readiness=true. Elapsed: 6.804413427s May 12 11:51:44.264: INFO: Pod "pod-secrets-bbf4d5ae-a78d-4375-9b3d-aac56f1be3a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.325315486s STEP: Saw pod success May 12 11:51:44.264: INFO: Pod "pod-secrets-bbf4d5ae-a78d-4375-9b3d-aac56f1be3a6" satisfied condition "Succeeded or Failed" May 12 11:51:44.327: INFO: Trying to get logs from node latest-worker pod pod-secrets-bbf4d5ae-a78d-4375-9b3d-aac56f1be3a6 container secret-volume-test: STEP: delete the pod May 12 11:51:44.773: INFO: Waiting for pod pod-secrets-bbf4d5ae-a78d-4375-9b3d-aac56f1be3a6 to disappear May 12 11:51:45.083: INFO: Pod pod-secrets-bbf4d5ae-a78d-4375-9b3d-aac56f1be3a6 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:51:45.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-368" for this suite. 
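The "mappings" in the Secrets volume test above refer to the items list of a secret volume source, which projects individual keys to chosen file paths instead of one file per key. A minimal sketch, assuming kubectl apply against the same cluster; all names and the image are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-mapping-demo
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    # the key data-1 is remapped to new-path-data-1 inside the mount
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-mapping-demo
      items:
      - key: data-1
        path: new-path-data-1
EOF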
• [SLOW TEST:11.348 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":215,"skipped":3724,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:51:45.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 12 11:51:45.522: INFO: Creating deployment "webserver-deployment" May 12 11:51:45.951: INFO: Waiting for observed generation 1 May 12 11:51:48.844: INFO: Waiting for all required pods to come up May 12 11:51:49.454: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running May 12 11:52:05.965: INFO: Waiting for deployment "webserver-deployment" to complete May 12 11:52:05.971: INFO: Updating deployment "webserver-deployment" with a non-existent image May 12 11:52:05.977: INFO: Updating deployment webserver-deployment May 12 11:52:05.977: INFO: Waiting for observed generation 2 May 12 11:52:08.724: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 12 11:52:11.258: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 12 11:52:11.260: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 12 11:52:11.927: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 12 11:52:11.927: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 12 11:52:12.306: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 12 11:52:13.502: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas May 12 11:52:13.502: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 May 12 11:52:14.216: INFO: Updating deployment webserver-deployment May 12 11:52:14.216: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas May 12 11:52:14.407: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 12 11:52:17.472: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 12 11:52:19.164: INFO: Deployment "webserver-deployment": 
&Deployment{ObjectMeta:{webserver-deployment deployment-7260 /apis/apps/v1/namespaces/deployment-7260/deployments/webserver-deployment 81301d79-7e29-465f-a2b9-880cff05afa3 3802657 3 2020-05-12 11:51:45 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-12 11:52:13 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-12 11:52:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005f82428 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-12 11:52:14 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-05-12 11:52:15 +0000 UTC,LastTransitionTime:2020-05-12 11:51:46 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 12 11:52:20.738: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4 
deployment-7260 /apis/apps/v1/namespaces/deployment-7260/replicasets/webserver-deployment-6676bcd6d4 c7845b94-05e2-46c7-a5a2-1355285d8812 3802643 3 2020-05-12 11:52:05 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 81301d79-7e29-465f-a2b9-880cff05afa3 0xc005f828b7 0xc005f828b8}] [] [{kube-controller-manager Update apps/v1 2020-05-12 11:52:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81301d79-7e29-465f-a2b9-880cff05afa3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005f82938 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 12 11:52:20.738: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 12 11:52:20.738: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797 deployment-7260 /apis/apps/v1/namespaces/deployment-7260/replicasets/webserver-deployment-84855cf797 9aca4981-5048-4de7-ad69-e71e2db1f186 3802646 3 2020-05-12 11:51:45 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 81301d79-7e29-465f-a2b9-880cff05afa3 0xc005f82997 0xc005f82998}] [] [{kube-controller-manager Update apps/v1 2020-05-12 11:52:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81301d79-7e29-465f-a2b9-880cff05afa3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005f82a08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 12 11:52:21.676: INFO: Pod "webserver-deployment-6676bcd6d4-2snzj" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-2snzj webserver-deployment-6676bcd6d4- deployment-7260 /api/v1/namespaces/deployment-7260/pods/webserver-deployment-6676bcd6d4-2snzj ac1dd5e1-ddd4-4126-8fa2-347d71be2685 3802639 0 2020-05-12 11:52:14 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 c7845b94-05e2-46c7-a5a2-1355285d8812 0xc005f82f37 0xc005f82f38}] [] [{kube-controller-manager Update v1 2020-05-12 11:52:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7845b94-05e2-46c7-a5a2-1355285d8812\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pkcr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pkcr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pkcr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 11:52:21.677: INFO: Pod "webserver-deployment-6676bcd6d4-4ngfc" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-4ngfc webserver-deployment-6676bcd6d4- deployment-7260 /api/v1/namespaces/deployment-7260/pods/webserver-deployment-6676bcd6d4-4ngfc 10c6e725-5f6e-47cc-b397-954110d44857 3802626 0 2020-05-12 11:52:14 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 c7845b94-05e2-46c7-a5a2-1355285d8812 0xc005f83077 0xc005f83078}] [] [{kube-controller-manager Update v1 2020-05-12 11:52:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7845b94-05e2-46c7-a5a2-1355285d8812\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pkcr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pkcr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pkcr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContaine
rs:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 11:52:21.677: INFO: Pod "webserver-deployment-6676bcd6d4-7f4mf" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-7f4mf webserver-deployment-6676bcd6d4- deployment-7260 /api/v1/namespaces/deployment-7260/pods/webserver-deployment-6676bcd6d4-7f4mf 4bb003ca-6975-4df6-baae-9e66bf003b39 3802572 0 2020-05-12 11:52:06 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 c7845b94-05e2-46c7-a5a2-1355285d8812 0xc005f831b7 0xc005f831b8}] [] [{kube-controller-manager Update v1 2020-05-12 11:52:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7845b94-05e2-46c7-a5a2-1355285d8812\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-12 11:52:14 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.83\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pkcr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pkcr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pkcr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-12 11:52:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.83,StartTime:2020-05-12 11:52:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.83,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 11:52:21.678: INFO: Pod "webserver-deployment-6676bcd6d4-9sl8j" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-9sl8j webserver-deployment-6676bcd6d4- deployment-7260 /api/v1/namespaces/deployment-7260/pods/webserver-deployment-6676bcd6d4-9sl8j 180aeae0-8ab0-46b0-8e84-f3c2762c7889 3802571 0 2020-05-12 11:52:08 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 c7845b94-05e2-46c7-a5a2-1355285d8812 0xc005f83397 0xc005f83398}] [] [{kube-controller-manager Update v1 2020-05-12 11:52:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7845b94-05e2-46c7-a5a2-1355285d8812\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-12 11:52:14 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.181\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pkcr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pkcr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pkcr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-12 11:52:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.181,StartTime:2020-05-12 11:52:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.181,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 11:52:21.678: INFO: Pod "webserver-deployment-6676bcd6d4-ks55j" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-ks55j webserver-deployment-6676bcd6d4- deployment-7260 /api/v1/namespaces/deployment-7260/pods/webserver-deployment-6676bcd6d4-ks55j 23dbf778-852a-454d-b27b-bc20b08d1de6 3802622 0 2020-05-12 11:52:14 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 c7845b94-05e2-46c7-a5a2-1355285d8812 0xc005f83577 0xc005f83578}] [] [{kube-controller-manager Update v1 2020-05-12 11:52:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7845b94-05e2-46c7-a5a2-1355285d8812\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pkcr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pkcr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pkcr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 11:52:21.678: INFO: Pod "webserver-deployment-6676bcd6d4-ldrpc" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-ldrpc webserver-deployment-6676bcd6d4- deployment-7260 /api/v1/namespaces/deployment-7260/pods/webserver-deployment-6676bcd6d4-ldrpc aa59ab2f-e30f-4287-a484-a2c1b04dc029 3802556 0 2020-05-12 11:52:06 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 c7845b94-05e2-46c7-a5a2-1355285d8812 0xc005f836b7 0xc005f836b8}] [] [{kube-controller-manager Update v1 2020-05-12 11:52:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7845b94-05e2-46c7-a5a2-1355285d8812\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-12 11:52:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.180\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pkcr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pkcr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pkcr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices
:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.180,StartTime:2020-05-12 11:52:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.180,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 11:52:21.678: INFO: Pod "webserver-deployment-6676bcd6d4-m8zwd" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-m8zwd webserver-deployment-6676bcd6d4- deployment-7260 /api/v1/namespaces/deployment-7260/pods/webserver-deployment-6676bcd6d4-m8zwd a69e5a1f-7fdc-4ecc-8061-b2662f69cdbd 3802683 0 2020-05-12 11:52:14 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 c7845b94-05e2-46c7-a5a2-1355285d8812 0xc005f83897 0xc005f83898}] [] [{kube-controller-manager Update v1 2020-05-12 11:52:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7845b94-05e2-46c7-a5a2-1355285d8812\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-12 11:52:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pkcr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pkcr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pkcr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:
[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-12 11:52:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 11:52:21.679: INFO: Pod "webserver-deployment-6676bcd6d4-mm66v" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-mm66v webserver-deployment-6676bcd6d4- deployment-7260 /api/v1/namespaces/deployment-7260/pods/webserver-deployment-6676bcd6d4-mm66v 4718b3bc-aa92-4623-a583-e560b0517324 3802555 0 2020-05-12 11:52:06 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 c7845b94-05e2-46c7-a5a2-1355285d8812 0xc005f83a47 0xc005f83a48}] [] [{kube-controller-manager Update v1 2020-05-12 11:52:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7845b94-05e2-46c7-a5a2-1355285d8812\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-12 11:52:12 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.82\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pkcr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pkcr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pkcr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-12 11:52:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.82,StartTime:2020-05-12 11:52:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.82,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 11:52:21.679: INFO: Pod "webserver-deployment-6676bcd6d4-qww27" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-qww27 webserver-deployment-6676bcd6d4- deployment-7260 /api/v1/namespaces/deployment-7260/pods/webserver-deployment-6676bcd6d4-qww27 178ea2a8-469a-4864-836c-51abffb36332 3802662 0 2020-05-12 11:52:14 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 c7845b94-05e2-46c7-a5a2-1355285d8812 0xc005f83c27 0xc005f83c28}] [] [{kube-controller-manager Update v1 2020-05-12 11:52:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7845b94-05e2-46c7-a5a2-1355285d8812\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-12 11:52:16 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pkcr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pkcr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pkcr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:15 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-12 11:52:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 11:52:21.679: INFO: Pod "webserver-deployment-6676bcd6d4-rw89x" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-rw89x webserver-deployment-6676bcd6d4- deployment-7260 /api/v1/namespaces/deployment-7260/pods/webserver-deployment-6676bcd6d4-rw89x 2dcb82f9-dac3-46b5-a84c-93f2ec9396a6 3802651 0 2020-05-12 11:52:14 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 c7845b94-05e2-46c7-a5a2-1355285d8812 0xc005f83dd7 0xc005f83dd8}] [] [{kube-controller-manager Update v1 2020-05-12 11:52:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7845b94-05e2-46c7-a5a2-1355285d8812\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-12 11:52:15 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pkcr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pkcr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pkcr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-12 11:52:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 11:52:21.679: INFO: Pod "webserver-deployment-6676bcd6d4-tq9gk" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-tq9gk webserver-deployment-6676bcd6d4- deployment-7260 /api/v1/namespaces/deployment-7260/pods/webserver-deployment-6676bcd6d4-tq9gk 123acb05-928f-4a46-b8ef-e36ea81e0906 3802623 0 2020-05-12 11:52:14 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 c7845b94-05e2-46c7-a5a2-1355285d8812 0xc005f83f87 0xc005f83f88}] [] [{kube-controller-manager Update v1 2020-05-12 11:52:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7845b94-05e2-46c7-a5a2-1355285d8812\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pkcr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pkcr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pkcr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 11:52:21.679: INFO: Pod "webserver-deployment-6676bcd6d4-z9frj" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-z9frj webserver-deployment-6676bcd6d4- deployment-7260 /api/v1/namespaces/deployment-7260/pods/webserver-deployment-6676bcd6d4-z9frj 52935320-fe23-4074-8bb2-9c4f6312775d 3802625 0 2020-05-12 11:52:14 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 c7845b94-05e2-46c7-a5a2-1355285d8812 0xc0055380c7 0xc0055380c8}] [] [{kube-controller-manager Update v1 2020-05-12 11:52:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7845b94-05e2-46c7-a5a2-1355285d8812\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pkcr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pkcr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pkcr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 11:52:21.680: INFO: Pod "webserver-deployment-6676bcd6d4-zbtzm" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-zbtzm webserver-deployment-6676bcd6d4- deployment-7260 /api/v1/namespaces/deployment-7260/pods/webserver-deployment-6676bcd6d4-zbtzm 4ffb1e67-28e2-43ac-a336-336211686b19 3802677 0 2020-05-12 11:52:08 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 c7845b94-05e2-46c7-a5a2-1355285d8812 0xc005538207 0xc005538208}] [] [{kube-controller-manager Update v1 2020-05-12 11:52:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7845b94-05e2-46c7-a5a2-1355285d8812\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-12 11:52:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.182\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pkcr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pkcr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pkcr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices
:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.182,StartTime:2020-05-12 11:52:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.182,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 11:52:21.680: INFO: Pod "webserver-deployment-84855cf797-62js8" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-62js8 webserver-deployment-84855cf797- deployment-7260 /api/v1/namespaces/deployment-7260/pods/webserver-deployment-84855cf797-62js8 96da3640-6ae3-4cc9-924a-45684c935d1a 3802618 0 2020-05-12 11:52:14 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9aca4981-5048-4de7-ad69-e71e2db1f186 0xc0055383e7 0xc0055383e8}] [] [{kube-controller-manager Update v1 2020-05-12 11:52:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9aca4981-5048-4de7-ad69-e71e2db1f186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pkcr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pkcr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pkcr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 11:52:21.680: INFO: Pod "webserver-deployment-84855cf797-648rq" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-648rq webserver-deployment-84855cf797- deployment-7260 /api/v1/namespaces/deployment-7260/pods/webserver-deployment-84855cf797-648rq 4ce0edb3-9b94-4c7f-9a4b-195c928794db 3802424 0 2020-05-12 11:51:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9aca4981-5048-4de7-ad69-e71e2db1f186 0xc005538517 0xc005538518}] [] [{kube-controller-manager Update v1 2020-05-12 11:51:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9aca4981-5048-4de7-ad69-e71e2db1f186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-12 11:52:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.175\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pkcr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pkcr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pkcr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},Startup
Probe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:51:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:51:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:51:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:51:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.175,StartTime:2020-05-12 11:51:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-12 11:51:58 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f4d86f483ae020f04314a55b605e458d2633a6ca0e9e621c810e37a51a038dad,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.175,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 11:52:21.680: INFO: Pod "webserver-deployment-84855cf797-6j96c" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-6j96c webserver-deployment-84855cf797- deployment-7260 /api/v1/namespaces/deployment-7260/pods/webserver-deployment-84855cf797-6j96c 209ab958-6a5e-4244-bc8e-af8794f70d81 3802668 0 2020-05-12 11:52:14 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9aca4981-5048-4de7-ad69-e71e2db1f186 0xc0055386c7 0xc0055386c8}] [] [{kube-controller-manager Update v1 2020-05-12 11:52:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9aca4981-5048-4de7-ad69-e71e2db1f186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-12 11:52:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pkcr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pkcr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pkcr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSecon
ds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-12 11:52:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 11:52:21.681: INFO: Pod "webserver-deployment-84855cf797-6zhhd" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-6zhhd webserver-deployment-84855cf797- deployment-7260 /api/v1/namespaces/deployment-7260/pods/webserver-deployment-84855cf797-6zhhd fe671f3f-203a-4de5-ad0a-a9db58d37273 3802663 0 2020-05-12 11:52:14 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9aca4981-5048-4de7-ad69-e71e2db1f186 0xc005538857 0xc005538858}] [] [{kube-controller-manager Update v1 2020-05-12 11:52:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9aca4981-5048-4de7-ad69-e71e2db1f186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-12 11:52:16 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pkcr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pkcr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pkcr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-12 11:52:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 11:52:21.681: INFO: Pod "webserver-deployment-84855cf797-7dt7g" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-7dt7g webserver-deployment-84855cf797- deployment-7260 /api/v1/namespaces/deployment-7260/pods/webserver-deployment-84855cf797-7dt7g bde4a036-35d2-4f1f-967d-1919bdf11162 3802672 0 2020-05-12 11:52:14 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9aca4981-5048-4de7-ad69-e71e2db1f186 0xc0055389e7 0xc0055389e8}] [] [{kube-controller-manager Update v1 2020-05-12 11:52:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9aca4981-5048-4de7-ad69-e71e2db1f186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-12 11:52:17 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pkcr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pkcr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pkcr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:15 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-12 11:52:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 11:52:21.681: INFO: Pod "webserver-deployment-84855cf797-8d2jr" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-8d2jr webserver-deployment-84855cf797- deployment-7260 /api/v1/namespaces/deployment-7260/pods/webserver-deployment-84855cf797-8d2jr d363e9c5-a7d8-490b-90dc-349e8e22c1ac 3802658 0 2020-05-12 11:52:14 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9aca4981-5048-4de7-ad69-e71e2db1f186 0xc005538b77 0xc005538b78}] [] [{kube-controller-manager Update v1 2020-05-12 11:52:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9aca4981-5048-4de7-ad69-e71e2db1f186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-12 11:52:15 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pkcr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pkcr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pkcr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-12 11:52:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 11:52:21.682: INFO: Pod "webserver-deployment-84855cf797-8zbfw" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-8zbfw webserver-deployment-84855cf797- deployment-7260 /api/v1/namespaces/deployment-7260/pods/webserver-deployment-84855cf797-8zbfw 96e4ddb0-5ee1-483c-9c24-e89a5a327cda 3802450 0 2020-05-12 11:51:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9aca4981-5048-4de7-ad69-e71e2db1f186 0xc005538d07 0xc005538d08}] [] [{kube-controller-manager Update v1 2020-05-12 11:51:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9aca4981-5048-4de7-ad69-e71e2db1f186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-12 11:52:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.79\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pkcr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pkcr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pkcr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:51:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 
11:52:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:51:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.79,StartTime:2020-05-12 11:51:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-12 11:52:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://049604924ea4889f340e182d6099a0bf91bbc24d1aa3de5e598b6e438d5ce219,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.79,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 11:52:21.682: INFO: Pod "webserver-deployment-84855cf797-bsrvn" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-bsrvn webserver-deployment-84855cf797- deployment-7260 /api/v1/namespaces/deployment-7260/pods/webserver-deployment-84855cf797-bsrvn 0d8d3851-a526-4b28-a9cd-cba5c3596363 3802459 0 2020-05-12 11:51:47 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9aca4981-5048-4de7-ad69-e71e2db1f186 0xc005538eb7 0xc005538eb8}] [] [{kube-controller-manager Update v1 2020-05-12 11:51:47 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9aca4981-5048-4de7-ad69-e71e2db1f186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-12 11:52:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.81\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pkcr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pkcr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pkcr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:51:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 
11:52:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:51:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.81,StartTime:2020-05-12 11:51:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-12 11:52:01 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e2c48bb8b5ef76992f24d10e9e186218d3a908aa2d2f9f7ab01ffd2669effc3a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.81,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 11:52:21.682: INFO: Pod "webserver-deployment-84855cf797-cwssn" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-cwssn webserver-deployment-84855cf797- deployment-7260 /api/v1/namespaces/deployment-7260/pods/webserver-deployment-84855cf797-cwssn ae36f86c-9def-49a9-811f-8f80c3fd5e1d 3802617 0 2020-05-12 11:52:14 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9aca4981-5048-4de7-ad69-e71e2db1f186 0xc005539067 0xc005539068}] [] [{kube-controller-manager Update v1 2020-05-12 11:52:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9aca4981-5048-4de7-ad69-e71e2db1f186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pkcr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pkcr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pkcr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 11:52:21.683: INFO: Pod "webserver-deployment-84855cf797-dmmf9" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-dmmf9 webserver-deployment-84855cf797- deployment-7260 /api/v1/namespaces/deployment-7260/pods/webserver-deployment-84855cf797-dmmf9 15d16377-1d6e-4bee-80bc-c8cbe983303c 3802438 0 2020-05-12 11:51:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9aca4981-5048-4de7-ad69-e71e2db1f186 0xc005539197 0xc005539198}] [] [{kube-controller-manager Update v1 2020-05-12 11:51:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9aca4981-5048-4de7-ad69-e71e2db1f186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-12 11:52:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.78\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pkcr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pkcr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pkcr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:51:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 
11:52:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:51:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.78,StartTime:2020-05-12 11:51:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-12 11:52:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1c0466a7becbbbbcd6fccf42d1052a9b19ecec2fc14e367f524f462e1dd1f779,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.78,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 11:52:21.683: INFO: Pod "webserver-deployment-84855cf797-drgkl" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-drgkl webserver-deployment-84855cf797- deployment-7260 /api/v1/namespaces/deployment-7260/pods/webserver-deployment-84855cf797-drgkl 54273caa-6163-473a-8ae3-72b999571d37 3802464 0 2020-05-12 11:51:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9aca4981-5048-4de7-ad69-e71e2db1f186 0xc005539347 0xc005539348}] [] [{kube-controller-manager Update v1 2020-05-12 11:51:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9aca4981-5048-4de7-ad69-e71e2db1f186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-12 11:52:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.177\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pkcr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pkcr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pkcr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:51:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 
11:52:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:51:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.177,StartTime:2020-05-12 11:51:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-12 11:52:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6a191d60419bdf21f77db1691b7b9d218c4f9ec11ffb2c1eba3a630b51cc5ffd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.177,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 11:52:21.683: INFO: Pod "webserver-deployment-84855cf797-f8xcr" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-f8xcr webserver-deployment-84855cf797- deployment-7260 /api/v1/namespaces/deployment-7260/pods/webserver-deployment-84855cf797-f8xcr 77b2dcbb-0653-46f6-8002-b52d3939b534 3802615 0 2020-05-12 11:52:14 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9aca4981-5048-4de7-ad69-e71e2db1f186 0xc0055394f7 0xc0055394f8}] [] [{kube-controller-manager Update v1 2020-05-12 11:52:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9aca4981-5048-4de7-ad69-e71e2db1f186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pkcr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pkcr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pkcr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 11:52:21.683: INFO: Pod "webserver-deployment-84855cf797-gfkvx" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-gfkvx webserver-deployment-84855cf797- deployment-7260 /api/v1/namespaces/deployment-7260/pods/webserver-deployment-84855cf797-gfkvx 4ddbc940-3ef3-430a-b52d-9a9a1c6149ac 3802655 0 2020-05-12 11:52:14 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9aca4981-5048-4de7-ad69-e71e2db1f186 0xc005539627 0xc005539628}] [] [{kube-controller-manager Update v1 2020-05-12 11:52:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9aca4981-5048-4de7-ad69-e71e2db1f186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-12 11:52:15 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pkcr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pkcr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pkcr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-12 11:52:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 11:52:21.683: INFO: Pod "webserver-deployment-84855cf797-ll2r5" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-ll2r5 webserver-deployment-84855cf797- deployment-7260 /api/v1/namespaces/deployment-7260/pods/webserver-deployment-84855cf797-ll2r5 579c271f-32ff-429d-9271-6f8e6f8fea01 3802415 0 2020-05-12 11:51:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9aca4981-5048-4de7-ad69-e71e2db1f186 0xc0055397b7 0xc0055397b8}] [] [{kube-controller-manager Update v1 2020-05-12 11:51:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9aca4981-5048-4de7-ad69-e71e2db1f186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-12 11:51:58 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.174\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pkcr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pkcr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pkcr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:51:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 
11:51:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:51:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:51:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.174,StartTime:2020-05-12 11:51:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-12 11:51:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4b33a98e20421aa4c034c385c1d255ddae9338f9742a75a27ae26c3c9427f3f3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.174,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 11:52:21.684: INFO: Pod "webserver-deployment-84855cf797-lmtll" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-lmtll webserver-deployment-84855cf797- deployment-7260 /api/v1/namespaces/deployment-7260/pods/webserver-deployment-84855cf797-lmtll e7b48a07-f342-4b71-b3fd-6e2e64f02cf2 3802455 0 2020-05-12 11:51:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9aca4981-5048-4de7-ad69-e71e2db1f186 0xc005539967 0xc005539968}] [] [{kube-controller-manager Update v1 2020-05-12 11:51:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9aca4981-5048-4de7-ad69-e71e2db1f186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-12 11:52:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.80\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pkcr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pkcr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pkcr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:51:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 
11:52:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:51:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.80,StartTime:2020-05-12 11:51:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-12 11:52:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0829354e3bd4d1bbcf438f163eb55a15d8a2342c6e9c82be01b0c6eaed460dc3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.80,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 11:52:21.684: INFO: Pod "webserver-deployment-84855cf797-p67xt" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-p67xt webserver-deployment-84855cf797- deployment-7260 /api/v1/namespaces/deployment-7260/pods/webserver-deployment-84855cf797-p67xt 00a61c8a-ad8b-4061-b76a-21aaaaaae5fa 3802638 0 2020-05-12 11:52:14 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9aca4981-5048-4de7-ad69-e71e2db1f186 0xc005539cb7 0xc005539cb8}] [] [{kube-controller-manager Update v1 2020-05-12 11:52:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9aca4981-5048-4de7-ad69-e71e2db1f186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-12 11:52:14 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pkcr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pkcr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pkcr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-12 11:52:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 11:52:21.684: INFO: Pod "webserver-deployment-84855cf797-p75z9" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-p75z9 webserver-deployment-84855cf797- deployment-7260 /api/v1/namespaces/deployment-7260/pods/webserver-deployment-84855cf797-p75z9 09f7ef1a-444d-4286-b053-20ed0fb5b30e 3802647 0 2020-05-12 11:52:14 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9aca4981-5048-4de7-ad69-e71e2db1f186 0xc005539e47 0xc005539e48}] [] [{kube-controller-manager Update v1 2020-05-12 11:52:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9aca4981-5048-4de7-ad69-e71e2db1f186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-12 11:52:15 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pkcr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pkcr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pkcr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-12 11:52:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 11:52:21.684: INFO: Pod "webserver-deployment-84855cf797-st8qx" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-st8qx webserver-deployment-84855cf797- deployment-7260 /api/v1/namespaces/deployment-7260/pods/webserver-deployment-84855cf797-st8qx b2aacf96-fec9-498b-a368-0338e6cfbaf0 3802432 0 2020-05-12 11:51:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9aca4981-5048-4de7-ad69-e71e2db1f186 0xc002a9c027 0xc002a9c028}] [] [{kube-controller-manager Update v1 2020-05-12 11:51:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9aca4981-5048-4de7-ad69-e71e2db1f186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-12 11:52:01 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.176\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pkcr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pkcr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pkcr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:51:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 
11:52:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:51:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.176,StartTime:2020-05-12 11:51:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-12 11:51:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5e3db53bdc9744a298fc2228b02302cd3d3b3845ef6307a12e4d825b26c84ecd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.176,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 11:52:21.684: INFO: Pod "webserver-deployment-84855cf797-sxpc2" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-sxpc2 webserver-deployment-84855cf797- deployment-7260 /api/v1/namespaces/deployment-7260/pods/webserver-deployment-84855cf797-sxpc2 66eaa549-13ed-4b7d-b552-38447cf84b71 3802666 0 2020-05-12 11:52:14 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9aca4981-5048-4de7-ad69-e71e2db1f186 0xc002a9c1d7 0xc002a9c1d8}] [] [{kube-controller-manager Update v1 2020-05-12 11:52:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9aca4981-5048-4de7-ad69-e71e2db1f186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-12 11:52:16 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pkcr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pkcr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pkcr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:15 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-12 11:52:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 11:52:21.685: INFO: Pod "webserver-deployment-84855cf797-vl5nm" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-vl5nm webserver-deployment-84855cf797- deployment-7260 /api/v1/namespaces/deployment-7260/pods/webserver-deployment-84855cf797-vl5nm 9c589ba4-b787-4712-b6d5-fe4b4fd0e3c6 3802644 0 2020-05-12 11:52:14 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 9aca4981-5048-4de7-ad69-e71e2db1f186 0xc002a9c377 0xc002a9c378}] [] [{kube-controller-manager Update v1 2020-05-12 11:52:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9aca4981-5048-4de7-ad69-e71e2db1f186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-12 11:52:14 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pkcr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pkcr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pkcr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 11:52:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-12 11:52:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:52:21.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7260" for this suite. • [SLOW TEST:37.556 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":288,"completed":216,"skipped":3732,"failed":0} SS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:52:22.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 12 11:52:25.110: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:52:25.839: INFO: Number of nodes with available pods: 0 May 12 11:52:25.839: INFO: Node latest-worker is running more than one daemon pod May 12 11:52:27.305: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:52:28.338: INFO: Number of nodes with available pods: 0 May 12 11:52:28.338: INFO: Node latest-worker is running more than one daemon pod May 12 11:52:30.135: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:52:30.513: INFO: Number of nodes with available pods: 0 May 12 11:52:30.513: INFO: Node latest-worker is running more than one daemon pod May 12 11:52:31.527: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:52:32.553: INFO: Number of nodes with available pods: 0 May 12 11:52:32.553: INFO: Node latest-worker is running more than one daemon pod May 12 11:52:33.379: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:52:34.007: INFO: Number of nodes with available pods: 0 May 12 11:52:34.007: INFO: Node latest-worker is running more than one daemon pod May 12 11:52:35.320: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:52:35.403: INFO: Number of nodes with available pods: 0 May 12 11:52:35.403: INFO: Node latest-worker is running more than one daemon pod May 12 11:52:35.934: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:52:35.940: INFO: Number of nodes with available pods: 0 May 12 11:52:35.940: INFO: Node latest-worker is running more than one daemon pod May 12 11:52:37.519: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:52:38.512: INFO: Number of nodes with available pods: 0 May 12 11:52:38.512: INFO: Node latest-worker is running more than one daemon pod May 12 11:52:39.559: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:52:40.297: INFO: Number of nodes with available pods: 0 May 12 11:52:40.298: INFO: Node latest-worker is running more than one daemon pod May 12 11:52:41.567: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:52:42.433: INFO: Number of nodes with available pods: 0 May 12 11:52:42.433: INFO: Node latest-worker is running more than one daemon pod May 12 11:52:43.734: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:52:44.086: INFO: Number of nodes with available pods: 1 May 12 11:52:44.086: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:52:45.233: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:52:45.840: INFO: Number of nodes with available pods: 1 May 12 11:52:45.840: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:52:46.950: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:52:47.045: INFO: Number of nodes with available pods: 1 May 12 11:52:47.045: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:52:47.936: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:52:47.941: INFO: Number of nodes with available pods: 1 May 12 11:52:47.941: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:52:48.944: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:52:49.067: INFO: Number of nodes with available pods: 2 May 12 11:52:49.067: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 12 11:52:49.780: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:52:49.905: INFO: Number of nodes with available pods: 1 May 12 11:52:49.905: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:52:50.912: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:52:50.971: INFO: Number of nodes with available pods: 1 May 12 11:52:50.971: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:52:51.930: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:52:51.954: INFO: Number of nodes with available pods: 1 May 12 11:52:51.954: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:52:53.014: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:52:53.099: INFO: Number of nodes with available pods: 1 May 12 11:52:53.099: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:52:54.169: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:52:54.711: INFO: Number of nodes with available pods: 1 May 12 11:52:54.711: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:52:54.932: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:52:55.414: INFO: Number of nodes with available pods: 1 May 12 11:52:55.414: INFO: Node latest-worker2 is running more than one daemon pod May 12 11:52:56.174: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:52:56.350: INFO: Number of nodes with available pods: 2 May 12 11:52:56.350: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9469, will wait for the garbage collector to delete the pods May 12 11:52:56.688: INFO: Deleting DaemonSet.extensions daemon-set took: 213.4079ms May 12 11:52:57.089: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.455013ms May 12 11:53:06.009: INFO: Number of nodes with available pods: 0 May 12 11:53:06.009: INFO: Number of running nodes: 0, number of available pods: 0 May 12 11:53:06.074: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9469/daemonsets","resourceVersion":"3803118"},"items":null} May 12 11:53:06.509: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9469/pods","resourceVersion":"3803120"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:53:07.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9469" for this suite. 
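The repeated "can't tolerate node latest-control-plane" lines above show why the DaemonSet never lands on the control-plane node: per the pod dumps earlier in this run, the daemon pods tolerate only the node.kubernetes.io/not-ready and node.kubernetes.io/unreachable NoExecute taints, not node-role.kubernetes.io/master:NoSchedule. A minimal, self-contained Go sketch of that matching rule follows — simplified local structs standing in for core/v1 Taint and Toleration, not the e2e framework's actual helper:

package main

import "fmt"

// Simplified stand-ins for the core/v1 Taint and Toleration types.
type Taint struct{ Key, Value, Effect string }
type Toleration struct{ Key, Operator, Value, Effect string }

// tolerates mirrors the core matching rule: an "Exists" toleration matches
// any value for its key, and an empty Effect tolerates all effects.
func tolerates(tol Toleration, t Taint) bool {
	if tol.Key != "" && tol.Key != t.Key {
		return false
	}
	if tol.Effect != "" && tol.Effect != t.Effect {
		return false
	}
	if tol.Operator == "Exists" {
		return true
	}
	return tol.Value == t.Value // default operator is "Equal"
}

// nodeIsCandidate reports whether every taint on the node is tolerated.
func nodeIsCandidate(taints []Taint, tols []Toleration) bool {
	for _, t := range taints {
		ok := false
		for _, tol := range tols {
			if tolerates(tol, t) {
				ok = true
				break
			}
		}
		if !ok {
			return false // an untolerated taint: skip this node
		}
	}
	return true
}

func main() {
	master := []Taint{{Key: "node-role.kubernetes.io/master", Effect: "NoSchedule"}}
	// The tolerations the daemon pods in this run actually carry:
	tols := []Toleration{
		{Key: "node.kubernetes.io/not-ready", Operator: "Exists", Effect: "NoExecute"},
		{Key: "node.kubernetes.io/unreachable", Operator: "Exists", Effect: "NoExecute"},
	}
	fmt.Println(nodeIsCandidate(master, tols)) // false -> "skip checking this node"
}

With those tolerations the master taint is untolerated, so the test loops over only the two worker nodes until both report an available daemon pod.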
• [SLOW TEST:44.557 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":288,"completed":217,"skipped":3734,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:53:07.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-8495/secret-test-6e3bf03b-9bc4-49f1-85b0-c51bfa55d561 STEP: Creating a pod to test consume secrets May 12 11:53:09.073: INFO: Waiting up to 5m0s for pod "pod-configmaps-293993ff-2625-4c2a-ac6d-9200f59bbf08" in namespace "secrets-8495" to be "Succeeded or Failed" May 12 11:53:09.338: INFO: Pod "pod-configmaps-293993ff-2625-4c2a-ac6d-9200f59bbf08": Phase="Pending", Reason="", readiness=false. Elapsed: 265.730573ms May 12 11:53:11.582: INFO: Pod "pod-configmaps-293993ff-2625-4c2a-ac6d-9200f59bbf08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.509645023s May 12 11:53:13.869: INFO: Pod "pod-configmaps-293993ff-2625-4c2a-ac6d-9200f59bbf08": Phase="Pending", Reason="", readiness=false. Elapsed: 4.796657549s May 12 11:53:15.883: INFO: Pod "pod-configmaps-293993ff-2625-4c2a-ac6d-9200f59bbf08": Phase="Pending", Reason="", readiness=false. Elapsed: 6.810258732s May 12 11:53:18.134: INFO: Pod "pod-configmaps-293993ff-2625-4c2a-ac6d-9200f59bbf08": Phase="Running", Reason="", readiness=true. Elapsed: 9.061434311s May 12 11:53:20.432: INFO: Pod "pod-configmaps-293993ff-2625-4c2a-ac6d-9200f59bbf08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.359119625s STEP: Saw pod success May 12 11:53:20.432: INFO: Pod "pod-configmaps-293993ff-2625-4c2a-ac6d-9200f59bbf08" satisfied condition "Succeeded or Failed" May 12 11:53:20.651: INFO: Trying to get logs from node latest-worker pod pod-configmaps-293993ff-2625-4c2a-ac6d-9200f59bbf08 container env-test: STEP: delete the pod May 12 11:53:22.363: INFO: Waiting for pod pod-configmaps-293993ff-2625-4c2a-ac6d-9200f59bbf08 to disappear May 12 11:53:22.721: INFO: Pod pod-configmaps-293993ff-2625-4c2a-ac6d-9200f59bbf08 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:53:22.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8495" for this suite. 
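The "Waiting up to 5m0s ... Succeeded or Failed" and Elapsed lines above come from the framework polling the pod until it reaches a terminal phase. Below is a minimal client-go sketch of the same pattern for this secret-consumption case, assuming a configured kubernetes.Interface clientset; the container name, image, secret key, and 2-second poll interval are illustrative assumptions, not the framework's code:

package sketch

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// runSecretEnvPod creates a pod whose env var is sourced from a secret key,
// then polls it to a terminal phase, mirroring the log's elapsed-time lines.
func runSecretEnvPod(cs kubernetes.Interface, ns, secretName string) error {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-secret-env-"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "env-test",
				Image:   "busybox:1.29", // illustrative image
				Command: []string{"sh", "-c", "env"},
				Env: []v1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &v1.EnvVarSource{
						SecretKeyRef: &v1.SecretKeySelector{
							LocalObjectReference: v1.LocalObjectReference{Name: secretName},
							Key:                  "data-1", // assumed key name
						},
					},
				}},
			}},
		},
	}
	created, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		return err
	}
	start := time.Now()
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		p, err := cs.CoreV1().Pods(ns).Get(context.TODO(), created.Name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q. Elapsed: %s\n", p.Name, p.Status.Phase, time.Since(start))
		// Either terminal phase satisfies the wait; the test then asserts Succeeded.
		return p.Status.Phase == v1.PodSucceeded || p.Status.Phase == v1.PodFailed, nil
	})
}

After the wait returns, the test fetches the container's logs to verify the secret value appeared in the environment, then deletes the pod, matching the "Saw pod success" / "delete the pod" steps above.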
• [SLOW TEST:17.154 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":218,"skipped":3752,"failed":0} SSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:53:24.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 12 11:53:33.379: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:53:33.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5688" for this suite. 
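The test that just passed combines two container fields: a non-default terminationMessagePath and a non-root user, with the kubelet reading the file at that path after exit and surfacing it as the terminated state's message (the "DONE" matched above). A hedged sketch of the relevant spec fragment — the image, path, and UID are illustrative, not the conformance test's literal values:

package sketch

import v1 "k8s.io/api/core/v1"

// nonRootTerminationContainer shows the two knobs this test combines.
// The path must be writable by the non-root user, and the File policy
// tells the kubelet to read that file as the termination message.
func nonRootTerminationContainer() v1.Container {
	uid := int64(1000) // any non-root UID; illustrative
	return v1.Container{
		Name:    "termination-message-container",
		Image:   "busybox:1.29", // illustrative image
		Command: []string{"sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
		// Non-default path in place of /dev/termination-log:
		TerminationMessagePath:   "/dev/termination-custom-log",
		TerminationMessagePolicy: v1.TerminationMessageReadFile,
		SecurityContext:          &v1.SecurityContext{RunAsUser: &uid},
	}
}

Once the container terminates, the message appears under the pod's ContainerStatuses as State.Terminated.Message, which is what the "the termination message should be set" step inspects.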
• [SLOW TEST:8.835 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":288,"completed":219,"skipped":3757,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:53:33.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-nlkl STEP: Creating a pod to test atomic-volume-subpath May 12 11:53:33.823: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-nlkl" in namespace "subpath-3506" to be "Succeeded or Failed" May 12 11:53:33.855: INFO: Pod "pod-subpath-test-configmap-nlkl": Phase="Pending", Reason="", readiness=false. Elapsed: 31.608748ms May 12 11:53:36.240: INFO: Pod "pod-subpath-test-configmap-nlkl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.416956241s May 12 11:53:38.243: INFO: Pod "pod-subpath-test-configmap-nlkl": Phase="Running", Reason="", readiness=true. Elapsed: 4.419594768s May 12 11:53:40.248: INFO: Pod "pod-subpath-test-configmap-nlkl": Phase="Running", Reason="", readiness=true. Elapsed: 6.424332668s May 12 11:53:42.252: INFO: Pod "pod-subpath-test-configmap-nlkl": Phase="Running", Reason="", readiness=true. Elapsed: 8.428787286s May 12 11:53:44.256: INFO: Pod "pod-subpath-test-configmap-nlkl": Phase="Running", Reason="", readiness=true. Elapsed: 10.432400635s May 12 11:53:46.259: INFO: Pod "pod-subpath-test-configmap-nlkl": Phase="Running", Reason="", readiness=true. Elapsed: 12.435351041s May 12 11:53:48.263: INFO: Pod "pod-subpath-test-configmap-nlkl": Phase="Running", Reason="", readiness=true. Elapsed: 14.439640251s May 12 11:53:50.268: INFO: Pod "pod-subpath-test-configmap-nlkl": Phase="Running", Reason="", readiness=true. Elapsed: 16.444361579s May 12 11:53:52.271: INFO: Pod "pod-subpath-test-configmap-nlkl": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.447471655s May 12 11:53:54.929: INFO: Pod "pod-subpath-test-configmap-nlkl": Phase="Running", Reason="", readiness=true. Elapsed: 21.106267286s May 12 11:53:56.933: INFO: Pod "pod-subpath-test-configmap-nlkl": Phase="Running", Reason="", readiness=true. Elapsed: 23.110123825s May 12 11:53:58.937: INFO: Pod "pod-subpath-test-configmap-nlkl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.113691967s STEP: Saw pod success May 12 11:53:58.937: INFO: Pod "pod-subpath-test-configmap-nlkl" satisfied condition "Succeeded or Failed" May 12 11:53:58.940: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-nlkl container test-container-subpath-configmap-nlkl: STEP: delete the pod May 12 11:53:59.029: INFO: Waiting for pod pod-subpath-test-configmap-nlkl to disappear May 12 11:53:59.046: INFO: Pod pod-subpath-test-configmap-nlkl no longer exists STEP: Deleting pod pod-subpath-test-configmap-nlkl May 12 11:53:59.047: INFO: Deleting pod "pod-subpath-test-configmap-nlkl" in namespace "subpath-3506" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:53:59.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3506" for this suite. • [SLOW TEST:25.616 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":288,"completed":220,"skipped":3766,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:53:59.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 12 11:54:00.277: INFO: Pod name wrapped-volume-race-53ba718c-07a5-4524-9ce6-594e8abc4c0a: Found 0 pods out of 5 May 12 11:54:05.334: INFO: Pod name wrapped-volume-race-53ba718c-07a5-4524-9ce6-594e8abc4c0a: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-53ba718c-07a5-4524-9ce6-594e8abc4c0a in namespace emptydir-wrapper-1722, will wait for the garbage collector to delete the pods May 12 11:54:27.134: INFO: Deleting ReplicationController wrapped-volume-race-53ba718c-07a5-4524-9ce6-594e8abc4c0a took: 225.075464ms May 12 
11:54:27.635: INFO: Terminating ReplicationController wrapped-volume-race-53ba718c-07a5-4524-9ce6-594e8abc4c0a pods took: 500.272295ms STEP: Creating RC which spawns configmap-volume pods May 12 11:54:48.790: INFO: Pod name wrapped-volume-race-fff9e5e7-2ed3-44de-b43f-86732164e336: Found 0 pods out of 5 May 12 11:54:54.815: INFO: Pod name wrapped-volume-race-fff9e5e7-2ed3-44de-b43f-86732164e336: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-fff9e5e7-2ed3-44de-b43f-86732164e336 in namespace emptydir-wrapper-1722, will wait for the garbage collector to delete the pods May 12 11:55:12.283: INFO: Deleting ReplicationController wrapped-volume-race-fff9e5e7-2ed3-44de-b43f-86732164e336 took: 83.64218ms May 12 11:55:12.883: INFO: Terminating ReplicationController wrapped-volume-race-fff9e5e7-2ed3-44de-b43f-86732164e336 pods took: 600.301337ms STEP: Creating RC which spawns configmap-volume pods May 12 11:55:25.425: INFO: Pod name wrapped-volume-race-731e04bd-c747-4278-bfd7-e1d9b50583ae: Found 0 pods out of 5 May 12 11:55:30.455: INFO: Pod name wrapped-volume-race-731e04bd-c747-4278-bfd7-e1d9b50583ae: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-731e04bd-c747-4278-bfd7-e1d9b50583ae in namespace emptydir-wrapper-1722, will wait for the garbage collector to delete the pods May 12 11:55:49.115: INFO: Deleting ReplicationController wrapped-volume-race-731e04bd-c747-4278-bfd7-e1d9b50583ae took: 7.597501ms May 12 11:55:49.515: INFO: Terminating ReplicationController wrapped-volume-race-731e04bd-c747-4278-bfd7-e1d9b50583ae pods took: 400.249119ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:56:12.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-1722" for this suite. 
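Editor's note: the EmptyDir wrapper spec above creates 50 ConfigMaps and repeatedly spawns a ReplicationController whose pods mount all of them, stressing concurrent volume setup; the "wrapper" in the name refers to the kubelet historically wrapping configmap volumes in an implicit emptyDir, which was prone to races. An abbreviated sketch showing two of the volumes; names are illustrative:

apiVersion: v1
kind: ReplicationController
metadata:
  name: wrapped-volume-race
spec:
  replicas: 5
  selector:
    name: wrapped-volume-race
  template:
    metadata:
      labels:
        name: wrapped-volume-race
    spec:
      containers:
      - name: test-container
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts:
        - name: racey-configmap-0
          mountPath: /etc/config-0
        - name: racey-configmap-1
          mountPath: /etc/config-1
      volumes:                 # the suite wires up all 50; two shown here
      - name: racey-configmap-0
        configMap:
          name: racey-configmap-0
      - name: racey-configmap-1
        configMap:
          name: racey-configmap-1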
• [SLOW TEST:133.835 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":288,"completed":221,"skipped":3782,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:56:12.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium May 12 11:56:13.321: INFO: Waiting up to 5m0s for pod "pod-c3ae48c3-8667-4777-9fc6-a73ce01e1c8e" in namespace "emptydir-1510" to be "Succeeded or Failed" May 12 11:56:13.361: INFO: Pod "pod-c3ae48c3-8667-4777-9fc6-a73ce01e1c8e": Phase="Pending", Reason="", readiness=false. Elapsed: 39.880812ms May 12 11:56:15.365: INFO: Pod "pod-c3ae48c3-8667-4777-9fc6-a73ce01e1c8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043918688s May 12 11:56:17.484: INFO: Pod "pod-c3ae48c3-8667-4777-9fc6-a73ce01e1c8e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.162782207s May 12 11:56:19.493: INFO: Pod "pod-c3ae48c3-8667-4777-9fc6-a73ce01e1c8e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.171615242s May 12 11:56:21.782: INFO: Pod "pod-c3ae48c3-8667-4777-9fc6-a73ce01e1c8e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.460913131s May 12 11:56:24.079: INFO: Pod "pod-c3ae48c3-8667-4777-9fc6-a73ce01e1c8e": Phase="Running", Reason="", readiness=true. Elapsed: 10.757707575s May 12 11:56:26.131: INFO: Pod "pod-c3ae48c3-8667-4777-9fc6-a73ce01e1c8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.809342363s STEP: Saw pod success May 12 11:56:26.131: INFO: Pod "pod-c3ae48c3-8667-4777-9fc6-a73ce01e1c8e" satisfied condition "Succeeded or Failed" May 12 11:56:26.200: INFO: Trying to get logs from node latest-worker pod pod-c3ae48c3-8667-4777-9fc6-a73ce01e1c8e container test-container: STEP: delete the pod May 12 11:56:26.484: INFO: Waiting for pod pod-c3ae48c3-8667-4777-9fc6-a73ce01e1c8e to disappear May 12 11:56:26.506: INFO: Pod pod-c3ae48c3-8667-4777-9fc6-a73ce01e1c8e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:56:26.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1510" for this suite. 
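Editor's note: in the EmptyDir spec above, "(non-root,0777,default)" decodes as: run the container as a non-root UID, have the test image create and verify a 0777-mode file inside the volume, and use the default medium (node disk rather than Memory). A hedged sketch, with shell commands standing in for the e2e mounttest image's checks:

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-perms
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000            # "non-root"
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}               # "default" medium; medium: Memory would use tmpfs instead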
• [SLOW TEST:13.818 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":222,"skipped":3802,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:56:26.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:56:27.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-389" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":288,"completed":223,"skipped":3812,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:56:28.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:56:40.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4141" for this suite. 
• [SLOW TEST:12.321 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":288,"completed":224,"skipped":3817,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:56:40.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-4525 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in namespace statefulset-4525 May 12 11:56:40.882: INFO: Found 0 stateful pods, waiting for 1 May 12 11:56:50.887: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 12 11:56:50.922: INFO: Deleting all statefulset in ns statefulset-4525 May 12 11:56:50.947: INFO: Scaling statefulset ss to 0 May 12 11:57:11.020: INFO: Waiting for statefulset status.replicas updated to 0 May 12 11:57:11.023: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:57:11.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4525" for this suite. 
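Editor's note: the StatefulSet spec above never edits the StatefulSet object directly; it reads and updates the scale subresource (GET/PUT against .../statefulsets/ss/scale, the same endpoint kubectl scale uses) and then verifies spec.replicas changed. A minimal StatefulSet of the shape the suite creates; the image and labels are illustrative:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  replicas: 1                  # the only field the scale subresource touches
  serviceName: test            # headless service the suite creates first
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: nginx           # stand-in for the e2e httpd test image

Routing scaling through the subresource lets clients such as the HPA resize a workload without needing write access to the rest of the object.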
• [SLOW TEST:30.472 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":288,"completed":225,"skipped":3826,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:57:11.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 11:57:13.230: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 11:57:15.236: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881433, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881433, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881433, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881432, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 11:57:17.255: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881433, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881433, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881433, loc:(*time.Location)(0x7c342a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881432, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 11:57:20.367: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:57:20.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8718" for this suite. STEP: Destroying namespace "webhook-8718-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.100 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":288,"completed":226,"skipped":3846,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:57:21.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0512 11:57:35.111320 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 12 11:57:35.111: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:57:35.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1480" for this suite. • [SLOW TEST:13.917 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":288,"completed":227,"skipped":3857,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:57:35.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-9f8686c7-5d06-4a84-81da-ff4cc966bc81 STEP: Creating a pod to test consume configMaps May 12 11:57:36.987: INFO: Waiting up to 5m0s for pod "pod-configmaps-5c6b6847-2163-4ac5-b48a-4df329d522d8" in namespace "configmap-1654" to be "Succeeded or Failed" May 12 11:57:37.106: INFO: Pod "pod-configmaps-5c6b6847-2163-4ac5-b48a-4df329d522d8": Phase="Pending", Reason="", readiness=false. Elapsed: 118.675097ms May 12 11:57:39.176: INFO: Pod "pod-configmaps-5c6b6847-2163-4ac5-b48a-4df329d522d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.188904139s May 12 11:57:41.385: INFO: Pod "pod-configmaps-5c6b6847-2163-4ac5-b48a-4df329d522d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.398144732s May 12 11:57:43.584: INFO: Pod "pod-configmaps-5c6b6847-2163-4ac5-b48a-4df329d522d8": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.596922979s May 12 11:57:45.631: INFO: Pod "pod-configmaps-5c6b6847-2163-4ac5-b48a-4df329d522d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.643676567s STEP: Saw pod success May 12 11:57:45.631: INFO: Pod "pod-configmaps-5c6b6847-2163-4ac5-b48a-4df329d522d8" satisfied condition "Succeeded or Failed" May 12 11:57:45.666: INFO: Trying to get logs from node latest-worker pod pod-configmaps-5c6b6847-2163-4ac5-b48a-4df329d522d8 container configmap-volume-test: STEP: delete the pod May 12 11:57:45.955: INFO: Waiting for pod pod-configmaps-5c6b6847-2163-4ac5-b48a-4df329d522d8 to disappear May 12 11:57:46.230: INFO: Pod pod-configmaps-5c6b6847-2163-4ac5-b48a-4df329d522d8 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:57:46.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1654" for this suite. • [SLOW TEST:11.577 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":228,"skipped":3865,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:57:46.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 12 11:57:47.540: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. 
May 12 11:57:50.118: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 12 11:57:53.041: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881470, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881470, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881470, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881469, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 11:57:55.161: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881470, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881470, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881470, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881469, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 11:57:57.176: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881470, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881470, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881470, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881469, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 11:58:00.172: INFO: Waited 1.103051197s for the sample-apiserver to be ready to handle requests. 
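Editor's note: "Registering the sample API server" above means creating an APIService object that tells the aggregation layer to proxy an API group to an in-cluster service backed by the 1.17 sample-apiserver image. A hedged sketch; the group, service name, and priority values are illustrative assumptions following the sample-apiserver convention:

apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com   # convention: <version>.<group>
spec:
  group: wardle.example.com
  version: v1alpha1
  groupPriorityMinimum: 2000
  versionPriority: 200
  service:                     # kube-apiserver proxies matching requests here
    name: sample-api
    namespace: aggregator-1297
    port: 443
  # caBundle: <base64-encoded CA used to verify the backend's serving cert>

Once the APIService's Available condition is true, requests under /apis/wardle.example.com/v1alpha1 are served by the sample API server, which is what "ready to handle requests" in the log refers to.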
[AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:58:03.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-1297" for this suite. • [SLOW TEST:16.623 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":288,"completed":229,"skipped":3886,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:58:03.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 11:58:06.062: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 11:58:08.079: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881486, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881486, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881486, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881485, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 11:58:10.389: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881486, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881486, loc:(*time.Location)(0x7c342a0)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881486, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881485, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 11:58:12.195: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881486, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881486, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881486, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881485, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 11:58:15.207: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:58:15.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7643" for this suite. STEP: Destroying namespace "webhook-7643-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.236 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":288,"completed":230,"skipped":3886,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:58:15.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-13c3bed0-0f5a-4f14-806f-c1a3342fcfe1 STEP: Creating configMap with name cm-test-opt-upd-eb8fc92b-e942-493a-8202-a391e494cfe6 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-13c3bed0-0f5a-4f14-806f-c1a3342fcfe1 STEP: Updating configmap cm-test-opt-upd-eb8fc92b-e942-493a-8202-a391e494cfe6 STEP: Creating configMap with name cm-test-opt-create-c7b1f47e-90d6-4d79-a467-d391be9de85e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:58:30.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4477" for this suite. 
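Editor's note: the Projected configMap spec above mounts several configmaps through one projected volume, all marked optional, then deletes one, updates one, and creates one, waiting for the kubelet to reconcile the mounted files. A sketch with the random suffixes trimmed from the names in the log:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected
  volumes:
  - name: projected-volume
    projected:
      sources:
      - configMap:
          name: cm-test-opt-del      # deleted mid-test; optional keeps the pod healthy
          optional: true
      - configMap:
          name: cm-test-opt-upd      # updated mid-test
          optional: true
      - configMap:
          name: cm-test-opt-create   # created only after the pod starts
          optional: true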
• [SLOW TEST:14.746 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":231,"skipped":3892,"failed":0} SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:58:30.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-4721d48a-a747-42cd-b5f9-f1a07227febe STEP: Creating a pod to test consume secrets May 12 11:58:30.390: INFO: Waiting up to 5m0s for pod "pod-secrets-f5e9ee8e-c5a1-46c9-a145-4f644f442bc5" in namespace "secrets-370" to be "Succeeded or Failed" May 12 11:58:30.407: INFO: Pod "pod-secrets-f5e9ee8e-c5a1-46c9-a145-4f644f442bc5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.691771ms May 12 11:58:33.016: INFO: Pod "pod-secrets-f5e9ee8e-c5a1-46c9-a145-4f644f442bc5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.625860263s May 12 11:58:35.060: INFO: Pod "pod-secrets-f5e9ee8e-c5a1-46c9-a145-4f644f442bc5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.670172473s May 12 11:58:37.554: INFO: Pod "pod-secrets-f5e9ee8e-c5a1-46c9-a145-4f644f442bc5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.163853673s STEP: Saw pod success May 12 11:58:37.554: INFO: Pod "pod-secrets-f5e9ee8e-c5a1-46c9-a145-4f644f442bc5" satisfied condition "Succeeded or Failed" May 12 11:58:37.557: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-f5e9ee8e-c5a1-46c9-a145-4f644f442bc5 container secret-volume-test: STEP: delete the pod May 12 11:58:38.165: INFO: Waiting for pod pod-secrets-f5e9ee8e-c5a1-46c9-a145-4f644f442bc5 to disappear May 12 11:58:38.380: INFO: Pod pod-secrets-f5e9ee8e-c5a1-46c9-a145-4f644f442bc5 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:58:38.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-370" for this suite. 
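Editor's note: the Secrets volume spec above checks POSIX ownership and permissions: defaultMode sets the mode of the projected files, and the pod-level fsGroup determines the group that owns them, so a non-root container can still read a 0440 file. A hedged sketch; the names, image, and IDs are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-modes
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 1000              # group ownership applied to the volume's files
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -ln /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
      defaultMode: 0440        # octal in YAML; JSON manifests must use decimal 288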
• [SLOW TEST:8.171 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":232,"skipped":3898,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:58:38.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 12 11:58:39.059: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 12 11:58:52.247: INFO: >>> kubeConfig: /root/.kube/config May 12 11:58:54.214: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:59:04.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5739" for this suite. 
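Editor's note: the CRD spec above defines one group with two served versions and verifies both show up in the apiserver's published OpenAPI document (and then repeats the check with two single-version CRDs in the same group). A minimal two-version CRD of that shape; the group and kind names are illustrative:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-crds.crd-publish-openapi-test.example.com   # <plural>.<group>
spec:
  group: crd-publish-openapi-test.example.com
  scope: Namespaced
  names:
    plural: e2e-test-crds
    singular: e2e-test-crd
    kind: E2eTestCrd
  versions:
  - name: v1
    served: true
    storage: true              # exactly one version is the storage version
    schema:
      openAPIV3Schema:
        type: object
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object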
• [SLOW TEST:26.197 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":288,"completed":233,"skipped":3911,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:59:04.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 11:59:06.836: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 11:59:09.063: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881546, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881546, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881547, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881546, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 11:59:11.518: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881546, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881546, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881547, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881546, 
loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 11:59:14.426: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:59:16.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9337" for this suite. STEP: Destroying namespace "webhook-9337-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.270 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":288,"completed":234,"skipped":3914,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:59:17.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-9770/configmap-test-0892b014-396e-4561-b1d2-c4bee8826af5 STEP: Creating a pod to test consume configMaps May 12 11:59:19.413: INFO: Waiting up to 5m0s for pod "pod-configmaps-c0fc6cd6-3775-4d3d-b5ab-a26fc05bb279" in namespace "configmap-9770" to be "Succeeded or Failed" May 12 11:59:19.657: INFO: Pod "pod-configmaps-c0fc6cd6-3775-4d3d-b5ab-a26fc05bb279": Phase="Pending", Reason="", readiness=false. Elapsed: 244.155149ms May 12 11:59:21.795: INFO: Pod "pod-configmaps-c0fc6cd6-3775-4d3d-b5ab-a26fc05bb279": Phase="Pending", Reason="", readiness=false. Elapsed: 2.381836489s May 12 11:59:23.979: INFO: Pod "pod-configmaps-c0fc6cd6-3775-4d3d-b5ab-a26fc05bb279": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.566515005s May 12 11:59:26.009: INFO: Pod "pod-configmaps-c0fc6cd6-3775-4d3d-b5ab-a26fc05bb279": Phase="Pending", Reason="", readiness=false. Elapsed: 6.596376056s May 12 11:59:28.057: INFO: Pod "pod-configmaps-c0fc6cd6-3775-4d3d-b5ab-a26fc05bb279": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.644374529s STEP: Saw pod success May 12 11:59:28.057: INFO: Pod "pod-configmaps-c0fc6cd6-3775-4d3d-b5ab-a26fc05bb279" satisfied condition "Succeeded or Failed" May 12 11:59:28.073: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-c0fc6cd6-3775-4d3d-b5ab-a26fc05bb279 container env-test: STEP: delete the pod May 12 11:59:28.154: INFO: Waiting for pod pod-configmaps-c0fc6cd6-3775-4d3d-b5ab-a26fc05bb279 to disappear May 12 11:59:28.219: INFO: Pod pod-configmaps-c0fc6cd6-3775-4d3d-b5ab-a26fc05bb279 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:59:28.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9770" for this suite. • [SLOW TEST:10.290 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":235,"skipped":3946,"failed":0} SSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:59:28.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 11:59:54.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2154" for this suite. 
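Editor's note: "tasks sometimes fail and are locally restarted" above means restartPolicy: OnFailure, where the kubelet restarts the failed container in place rather than the Job controller replacing the pod. A hedged sketch of the failure pattern: the marker file survives the restart because it lives on a pod-scoped emptyDir, so each pod fails exactly once and then succeeds:

apiVersion: batch/v1
kind: Job
metadata:
  name: fail-once-local
spec:
  completions: 5               # the "Ensuring job reaches completions" step waits on this
  parallelism: 2
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: c
        image: busybox
        # fail the first attempt, succeed on the local restart
        command: ["sh", "-c", "if [ -f /data/marker ]; then exit 0; fi; touch /data/marker; exit 1"]
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        emptyDir: {}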
• [SLOW TEST:26.567 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":288,"completed":236,"skipped":3950,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 11:59:54.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 12 11:59:55.432: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d5ccda05-8cdc-4c20-9a39-e00a786468e4" in namespace "downward-api-9299" to be "Succeeded or Failed" May 12 11:59:55.760: INFO: Pod "downwardapi-volume-d5ccda05-8cdc-4c20-9a39-e00a786468e4": Phase="Pending", Reason="", readiness=false. Elapsed: 328.338359ms May 12 11:59:57.820: INFO: Pod "downwardapi-volume-d5ccda05-8cdc-4c20-9a39-e00a786468e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.38820875s May 12 12:00:00.076: INFO: Pod "downwardapi-volume-d5ccda05-8cdc-4c20-9a39-e00a786468e4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.644039401s May 12 12:00:02.226: INFO: Pod "downwardapi-volume-d5ccda05-8cdc-4c20-9a39-e00a786468e4": Phase="Running", Reason="", readiness=true. Elapsed: 6.793842904s May 12 12:00:04.229: INFO: Pod "downwardapi-volume-d5ccda05-8cdc-4c20-9a39-e00a786468e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.79720356s STEP: Saw pod success May 12 12:00:04.229: INFO: Pod "downwardapi-volume-d5ccda05-8cdc-4c20-9a39-e00a786468e4" satisfied condition "Succeeded or Failed" May 12 12:00:04.231: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-d5ccda05-8cdc-4c20-9a39-e00a786468e4 container client-container: STEP: delete the pod May 12 12:00:04.341: INFO: Waiting for pod downwardapi-volume-d5ccda05-8cdc-4c20-9a39-e00a786468e4 to disappear May 12 12:00:04.358: INFO: Pod downwardapi-volume-d5ccda05-8cdc-4c20-9a39-e00a786468e4 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:00:04.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9299" for this suite. 
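
A minimal sketch of the downward API volume this spec reads, with illustrative names (downward-demo, /etc/podinfo): the container's own memory request is projected into a file, and with the default divisor the value is reported in bytes, so 32Mi prints as 33554432:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
        resources:
          requests:
            memory: "32Mi"
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
    EOF
    kubectl logs pod/downward-demo        # prints the request in bytes: 33554432
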
• [SLOW TEST:9.587 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":237,"skipped":3974,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:00:04.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:00:25.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2492" for this suite. • [SLOW TEST:20.706 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":288,"completed":238,"skipped":3975,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:00:25.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs May 12 12:00:26.891: INFO: Waiting up to 5m0s for pod "pod-cad77345-5dae-436a-a7ae-f71a022abcdb" in namespace "emptydir-7246" to be "Succeeded or Failed" May 12 12:00:27.323: INFO: Pod "pod-cad77345-5dae-436a-a7ae-f71a022abcdb": Phase="Pending", Reason="", readiness=false. Elapsed: 431.991495ms May 12 12:00:29.462: INFO: Pod "pod-cad77345-5dae-436a-a7ae-f71a022abcdb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.571126519s May 12 12:00:31.465: INFO: Pod "pod-cad77345-5dae-436a-a7ae-f71a022abcdb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.574056363s May 12 12:00:33.509: INFO: Pod "pod-cad77345-5dae-436a-a7ae-f71a022abcdb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.617236257s STEP: Saw pod success May 12 12:00:33.509: INFO: Pod "pod-cad77345-5dae-436a-a7ae-f71a022abcdb" satisfied condition "Succeeded or Failed" May 12 12:00:33.512: INFO: Trying to get logs from node latest-worker pod pod-cad77345-5dae-436a-a7ae-f71a022abcdb container test-container: STEP: delete the pod May 12 12:00:33.546: INFO: Waiting for pod pod-cad77345-5dae-436a-a7ae-f71a022abcdb to disappear May 12 12:00:33.563: INFO: Pod pod-cad77345-5dae-436a-a7ae-f71a022abcdb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:00:33.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7246" for this suite. 
• [SLOW TEST:8.478 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":239,"skipped":3991,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:00:33.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 12:00:35.043: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 12:00:37.052: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881635, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881635, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881636, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881634, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 12:00:39.070: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881635, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881635, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881636, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881634, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 12:00:41.107: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881635, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881635, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881636, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881634, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 12:00:44.081: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:00:44.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3701" for this suite. STEP: Destroying namespace "webhook-3701-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.623 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":288,"completed":240,"skipped":4001,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:00:44.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-projected-d27p STEP: Creating a pod to test atomic-volume-subpath May 12 12:00:44.260: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-d27p" in namespace "subpath-7485" to be "Succeeded or Failed" May 12 12:00:44.277: INFO: Pod "pod-subpath-test-projected-d27p": Phase="Pending", Reason="", readiness=false. Elapsed: 16.735945ms May 12 12:00:46.339: INFO: Pod "pod-subpath-test-projected-d27p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078831689s May 12 12:00:48.342: INFO: Pod "pod-subpath-test-projected-d27p": Phase="Running", Reason="", readiness=true. Elapsed: 4.082066302s May 12 12:00:50.377: INFO: Pod "pod-subpath-test-projected-d27p": Phase="Running", Reason="", readiness=true. Elapsed: 6.116472647s May 12 12:00:52.380: INFO: Pod "pod-subpath-test-projected-d27p": Phase="Running", Reason="", readiness=true. Elapsed: 8.120158625s May 12 12:00:54.771: INFO: Pod "pod-subpath-test-projected-d27p": Phase="Running", Reason="", readiness=true. Elapsed: 10.510899557s May 12 12:00:56.776: INFO: Pod "pod-subpath-test-projected-d27p": Phase="Running", Reason="", readiness=true. Elapsed: 12.515452136s May 12 12:00:58.981: INFO: Pod "pod-subpath-test-projected-d27p": Phase="Running", Reason="", readiness=true. Elapsed: 14.720895626s May 12 12:01:00.992: INFO: Pod "pod-subpath-test-projected-d27p": Phase="Running", Reason="", readiness=true. Elapsed: 16.731505127s May 12 12:01:02.996: INFO: Pod "pod-subpath-test-projected-d27p": Phase="Running", Reason="", readiness=true. Elapsed: 18.735572578s May 12 12:01:05.008: INFO: Pod "pod-subpath-test-projected-d27p": Phase="Running", Reason="", readiness=true. Elapsed: 20.747625042s May 12 12:01:07.012: INFO: Pod "pod-subpath-test-projected-d27p": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.751786724s May 12 12:01:09.015: INFO: Pod "pod-subpath-test-projected-d27p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.755079236s STEP: Saw pod success May 12 12:01:09.015: INFO: Pod "pod-subpath-test-projected-d27p" satisfied condition "Succeeded or Failed" May 12 12:01:09.018: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-projected-d27p container test-container-subpath-projected-d27p: STEP: delete the pod May 12 12:01:09.183: INFO: Waiting for pod pod-subpath-test-projected-d27p to disappear May 12 12:01:09.216: INFO: Pod pod-subpath-test-projected-d27p no longer exists STEP: Deleting pod pod-subpath-test-projected-d27p May 12 12:01:09.216: INFO: Deleting pod "pod-subpath-test-projected-d27p" in namespace "subpath-7485" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:01:09.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7485" for this suite. • [SLOW TEST:25.217 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":288,"completed":241,"skipped":4012,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:01:09.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 12 12:01:09.765: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:01:26.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8363" for this suite. 
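
A hand-run version of the submit/watch/delete flow, with an illustrative pod name; --output-watch-events makes kubectl print the ADDED/MODIFIED/DELETED events the spec verifies:

    kubectl get pods -w --output-watch-events &      # stream watch events in the background
    kubectl run pod-demo --image=busybox --restart=Never -- sleep 3600
    kubectl delete pod pod-demo --grace-period=30    # graceful deletion; a DELETED event follows
    kill %1                                          # stop the watch
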
• [SLOW TEST:17.526 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":288,"completed":242,"skipped":4048,"failed":0} SSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:01:26.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-3bd30c45-ca6f-40fb-87e2-7dfd243f3835 May 12 12:01:27.674: INFO: Pod name my-hostname-basic-3bd30c45-ca6f-40fb-87e2-7dfd243f3835: Found 0 pods out of 1 May 12 12:01:32.678: INFO: Pod name my-hostname-basic-3bd30c45-ca6f-40fb-87e2-7dfd243f3835: Found 1 pods out of 1 May 12 12:01:32.678: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-3bd30c45-ca6f-40fb-87e2-7dfd243f3835" are running May 12 12:01:32.720: INFO: Pod "my-hostname-basic-3bd30c45-ca6f-40fb-87e2-7dfd243f3835-zlblt" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 12:01:27 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 12:01:31 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 12:01:31 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 12:01:27 +0000 UTC Reason: Message:}]) May 12 12:01:32.720: INFO: Trying to dial the pod May 12 12:01:37.733: INFO: Controller my-hostname-basic-3bd30c45-ca6f-40fb-87e2-7dfd243f3835: Got expected result from replica 1 [my-hostname-basic-3bd30c45-ca6f-40fb-87e2-7dfd243f3835-zlblt]: "my-hostname-basic-3bd30c45-ca6f-40fb-87e2-7dfd243f3835-zlblt", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:01:37.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-309" for this suite. 
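
A minimal sketch of the kind of ReplicationController this spec creates; the name and the agnhost image tag are illustrative. Each replica runs agnhost serve-hostname, which answers on port 9376 with its own pod name, which is the "expected result from replica 1" the log reports:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: my-hostname-basic
    spec:
      replicas: 1
      selector:
        app: my-hostname-basic
      template:
        metadata:
          labels:
            app: my-hostname-basic
        spec:
          containers:
          - name: serve-hostname
            image: k8s.gcr.io/e2e-test-images/agnhost:2.12   # tag is illustrative
            args: ["serve-hostname"]
            ports:
            - containerPort: 9376                            # replies with the pod's own name
    EOF
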
• [SLOW TEST:10.802 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":243,"skipped":4051,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:01:37.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-4f20f018-eef7-4c1d-acdb-4d4ca48c06ae [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:01:37.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-47" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":288,"completed":244,"skipped":4095,"failed":0} SSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:01:37.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5981 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5981;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5981 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5981;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5981.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5981.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5981.svc A)" && test 
-n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5981.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5981.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5981.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5981.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5981.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5981.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5981.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5981.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5981.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5981.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 60.66.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.66.60_udp@PTR;check="$$(dig +tcp +noall +answer +search 60.66.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.66.60_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5981 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5981;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5981 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5981;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5981.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5981.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5981.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5981.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5981.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5981.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5981.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5981.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5981.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5981.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5981.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5981.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5981.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 60.66.99.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.99.66.60_udp@PTR;check="$$(dig +tcp +noall +answer +search 60.66.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.66.60_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 12 12:01:58.675: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:01:58.678: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:01:58.681: INFO: Unable to read wheezy_udp@dns-test-service.dns-5981 from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:01:58.683: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5981 from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:01:58.686: INFO: Unable to read wheezy_udp@dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:01:58.689: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:01:58.994: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:01:59.101: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:01:59.344: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:01:59.348: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:01:59.350: INFO: Unable to read jessie_udp@dns-test-service.dns-5981 from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:01:59.353: INFO: Unable to read jessie_tcp@dns-test-service.dns-5981 from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:01:59.355: INFO: Unable to read jessie_udp@dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not 
find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:01:59.358: INFO: Unable to read jessie_tcp@dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:01:59.361: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:01:59.364: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:01:59.378: INFO: Lookups using dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5981 wheezy_tcp@dns-test-service.dns-5981 wheezy_udp@dns-test-service.dns-5981.svc wheezy_tcp@dns-test-service.dns-5981.svc wheezy_udp@_http._tcp.dns-test-service.dns-5981.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5981.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5981 jessie_tcp@dns-test-service.dns-5981 jessie_udp@dns-test-service.dns-5981.svc jessie_tcp@dns-test-service.dns-5981.svc jessie_udp@_http._tcp.dns-test-service.dns-5981.svc jessie_tcp@_http._tcp.dns-test-service.dns-5981.svc] May 12 12:02:04.382: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:04.385: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:04.388: INFO: Unable to read wheezy_udp@dns-test-service.dns-5981 from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:04.391: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5981 from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:04.394: INFO: Unable to read wheezy_udp@dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:04.396: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:04.398: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:04.400: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the 
requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:04.418: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:04.421: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:04.423: INFO: Unable to read jessie_udp@dns-test-service.dns-5981 from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:04.425: INFO: Unable to read jessie_tcp@dns-test-service.dns-5981 from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:04.427: INFO: Unable to read jessie_udp@dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:04.429: INFO: Unable to read jessie_tcp@dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:04.432: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:04.434: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:04.457: INFO: Lookups using dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5981 wheezy_tcp@dns-test-service.dns-5981 wheezy_udp@dns-test-service.dns-5981.svc wheezy_tcp@dns-test-service.dns-5981.svc wheezy_udp@_http._tcp.dns-test-service.dns-5981.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5981.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5981 jessie_tcp@dns-test-service.dns-5981 jessie_udp@dns-test-service.dns-5981.svc jessie_tcp@dns-test-service.dns-5981.svc jessie_udp@_http._tcp.dns-test-service.dns-5981.svc jessie_tcp@_http._tcp.dns-test-service.dns-5981.svc] May 12 12:02:09.383: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:09.386: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:09.389: INFO: Unable to read wheezy_udp@dns-test-service.dns-5981 from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods 
dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:09.392: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5981 from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:09.395: INFO: Unable to read wheezy_udp@dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:09.398: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:09.400: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:09.403: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:09.421: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:09.424: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:09.426: INFO: Unable to read jessie_udp@dns-test-service.dns-5981 from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:09.429: INFO: Unable to read jessie_tcp@dns-test-service.dns-5981 from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:09.454: INFO: Unable to read jessie_udp@dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:09.502: INFO: Unable to read jessie_tcp@dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:09.505: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:09.507: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:09.522: INFO: Lookups using dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service 
wheezy_udp@dns-test-service.dns-5981 wheezy_tcp@dns-test-service.dns-5981 wheezy_udp@dns-test-service.dns-5981.svc wheezy_tcp@dns-test-service.dns-5981.svc wheezy_udp@_http._tcp.dns-test-service.dns-5981.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5981.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5981 jessie_tcp@dns-test-service.dns-5981 jessie_udp@dns-test-service.dns-5981.svc jessie_tcp@dns-test-service.dns-5981.svc jessie_udp@_http._tcp.dns-test-service.dns-5981.svc jessie_tcp@_http._tcp.dns-test-service.dns-5981.svc] May 12 12:02:14.528: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:14.585: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:14.590: INFO: Unable to read wheezy_udp@dns-test-service.dns-5981 from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:14.735: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5981 from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:14.740: INFO: Unable to read wheezy_udp@dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:14.975: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:14.979: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:14.983: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:15.120: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:15.123: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:15.125: INFO: Unable to read jessie_udp@dns-test-service.dns-5981 from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:15.127: INFO: Unable to read jessie_tcp@dns-test-service.dns-5981 from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods 
dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:15.130: INFO: Unable to read jessie_udp@dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:15.133: INFO: Unable to read jessie_tcp@dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:15.209: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:15.212: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:15.233: INFO: Lookups using dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5981 wheezy_tcp@dns-test-service.dns-5981 wheezy_udp@dns-test-service.dns-5981.svc wheezy_tcp@dns-test-service.dns-5981.svc wheezy_udp@_http._tcp.dns-test-service.dns-5981.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5981.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5981 jessie_tcp@dns-test-service.dns-5981 jessie_udp@dns-test-service.dns-5981.svc jessie_tcp@dns-test-service.dns-5981.svc jessie_udp@_http._tcp.dns-test-service.dns-5981.svc jessie_tcp@_http._tcp.dns-test-service.dns-5981.svc] May 12 12:02:19.599: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:19.603: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:19.607: INFO: Unable to read wheezy_udp@dns-test-service.dns-5981 from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:19.609: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5981 from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:19.612: INFO: Unable to read wheezy_udp@dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:19.614: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:19.616: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods 
dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:19.619: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:19.640: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:19.643: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:19.646: INFO: Unable to read jessie_udp@dns-test-service.dns-5981 from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:19.648: INFO: Unable to read jessie_tcp@dns-test-service.dns-5981 from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:19.650: INFO: Unable to read jessie_udp@dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:19.652: INFO: Unable to read jessie_tcp@dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:19.654: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:19.656: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:19.855: INFO: Lookups using dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5981 wheezy_tcp@dns-test-service.dns-5981 wheezy_udp@dns-test-service.dns-5981.svc wheezy_tcp@dns-test-service.dns-5981.svc wheezy_udp@_http._tcp.dns-test-service.dns-5981.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5981.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5981 jessie_tcp@dns-test-service.dns-5981 jessie_udp@dns-test-service.dns-5981.svc jessie_tcp@dns-test-service.dns-5981.svc jessie_udp@_http._tcp.dns-test-service.dns-5981.svc jessie_tcp@_http._tcp.dns-test-service.dns-5981.svc] May 12 12:02:24.388: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:24.392: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods 
dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:24.400: INFO: Unable to read wheezy_udp@dns-test-service.dns-5981 from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:24.489: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5981 from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:24.514: INFO: Unable to read wheezy_udp@dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:24.516: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:24.520: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:24.523: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5981.svc from pod dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01: the server could not find the requested resource (get pods dns-test-b897c4ec-2029-4726-9526-34bf57405c01) May 12 12:02:27.171: INFO: Lookups using dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5981 wheezy_tcp@dns-test-service.dns-5981 wheezy_udp@dns-test-service.dns-5981.svc wheezy_tcp@dns-test-service.dns-5981.svc wheezy_udp@_http._tcp.dns-test-service.dns-5981.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5981.svc] May 12 12:02:29.643: INFO: DNS probes using dns-5981/dns-test-b897c4ec-2029-4726-9526-34bf57405c01 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:02:31.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5981" for this suite. 
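
What "partial qualified names" means here, sketched by hand from a pod in the test namespace (the pod name dns-client and an image containing dig are assumptions): the search directive in /etc/resolv.conf expands short names like dns-test-service through the namespace's search path:

    kubectl exec dns-client -- sh -c '
      cat /etc/resolv.conf                         # search dns-5981.svc.cluster.local svc.cluster.local cluster.local ...
      dig +search +short dns-test-service A        # partial name, expanded via the search path
      dig +search +short dns-test-service.dns-5981.svc A
    '
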
• [SLOW TEST:53.339 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":288,"completed":245,"skipped":4104,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:02:31.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-9eafe554-bc96-4039-b8bb-6ec1d9e7e6e7 STEP: Creating a pod to test consume configMaps May 12 12:02:31.534: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f1271001-2092-47b5-a778-f0f337810ab0" in namespace "projected-4374" to be "Succeeded or Failed" May 12 12:02:31.700: INFO: Pod "pod-projected-configmaps-f1271001-2092-47b5-a778-f0f337810ab0": Phase="Pending", Reason="", readiness=false. Elapsed: 165.810946ms May 12 12:02:33.753: INFO: Pod "pod-projected-configmaps-f1271001-2092-47b5-a778-f0f337810ab0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219158306s May 12 12:02:36.346: INFO: Pod "pod-projected-configmaps-f1271001-2092-47b5-a778-f0f337810ab0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.812422722s May 12 12:02:38.395: INFO: Pod "pod-projected-configmaps-f1271001-2092-47b5-a778-f0f337810ab0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.86089675s STEP: Saw pod success May 12 12:02:38.395: INFO: Pod "pod-projected-configmaps-f1271001-2092-47b5-a778-f0f337810ab0" satisfied condition "Succeeded or Failed" May 12 12:02:38.423: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-f1271001-2092-47b5-a778-f0f337810ab0 container projected-configmap-volume-test: STEP: delete the pod May 12 12:02:38.581: INFO: Waiting for pod pod-projected-configmaps-f1271001-2092-47b5-a778-f0f337810ab0 to disappear May 12 12:02:38.641: INFO: Pod pod-projected-configmaps-f1271001-2092-47b5-a778-f0f337810ab0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:02:38.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4374" for this suite. 
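For reference, the shape of what this test builds: a projected volume sourcing a ConfigMap, consumed by a pod running as a non-root UID. A sketch using the k8s.io/api types; all names and the UID are illustrative, not taken from the test source:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

func int64Ptr(i int64) *int64 { return &i }

// nonRootProjectedConfigMap sketches the layout this test exercises: a
// projected volume sourcing a ConfigMap, consumed as a non-root user.
func nonRootProjectedConfigMap() (corev1.Volume, *corev1.PodSecurityContext) {
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-configmap-test-volume",
						},
					},
				}},
			},
		},
	}
	// The pod-level RunAsUser makes the container run as UID 1000; the
	// projected files' default mode (0644) keeps them readable without root.
	return vol, &corev1.PodSecurityContext{RunAsUser: int64Ptr(1000)}
}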
• [SLOW TEST:7.756 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":246,"skipped":4115,"failed":0} SSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:02:39.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 12 12:02:39.457: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-2cff6f0c-4042-4402-afb2-95c563853906" in namespace "security-context-test-5826" to be "Succeeded or Failed" May 12 12:02:39.491: INFO: Pod "busybox-readonly-false-2cff6f0c-4042-4402-afb2-95c563853906": Phase="Pending", Reason="", readiness=false. Elapsed: 33.344215ms May 12 12:02:41.495: INFO: Pod "busybox-readonly-false-2cff6f0c-4042-4402-afb2-95c563853906": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037627844s May 12 12:02:43.711: INFO: Pod "busybox-readonly-false-2cff6f0c-4042-4402-afb2-95c563853906": Phase="Pending", Reason="", readiness=false. Elapsed: 4.253712729s May 12 12:02:45.715: INFO: Pod "busybox-readonly-false-2cff6f0c-4042-4402-afb2-95c563853906": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.257516873s May 12 12:02:45.715: INFO: Pod "busybox-readonly-false-2cff6f0c-4042-4402-afb2-95c563853906" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:02:45.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5826" for this suite. 
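The behavior under test reduces to a single container-level SecurityContext field. A hedged sketch of an equivalent pod; image, command, and names are illustrative:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

// writableRootfsPod sketches a pod equivalent to the one created above: with
// readOnlyRootFilesystem=false the container rootfs stays writable, so a
// write to it succeeds and the pod reaches Succeeded.
func writableRootfsPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-false"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"sh", "-c", "touch /tmp/ok"}, // would fail with EROFS on a read-only rootfs
				SecurityContext: &corev1.SecurityContext{
					ReadOnlyRootFilesystem: boolPtr(false),
				},
			}},
		},
	}
}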
• [SLOW TEST:6.661 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a pod with readOnlyRootFilesystem /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":288,"completed":247,"skipped":4121,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:02:45.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-b1ceb816-f167-44b5-919a-01382a3b1027 STEP: Creating a pod to test consume secrets May 12 12:02:46.408: INFO: Waiting up to 5m0s for pod "pod-secrets-14bd23b4-75c9-470f-aecd-f1a9777dfc29" in namespace "secrets-1885" to be "Succeeded or Failed" May 12 12:02:46.594: INFO: Pod "pod-secrets-14bd23b4-75c9-470f-aecd-f1a9777dfc29": Phase="Pending", Reason="", readiness=false. Elapsed: 185.361346ms May 12 12:02:48.802: INFO: Pod "pod-secrets-14bd23b4-75c9-470f-aecd-f1a9777dfc29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.393147021s May 12 12:02:50.880: INFO: Pod "pod-secrets-14bd23b4-75c9-470f-aecd-f1a9777dfc29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.471479735s May 12 12:02:52.883: INFO: Pod "pod-secrets-14bd23b4-75c9-470f-aecd-f1a9777dfc29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.474999049s STEP: Saw pod success May 12 12:02:52.883: INFO: Pod "pod-secrets-14bd23b4-75c9-470f-aecd-f1a9777dfc29" satisfied condition "Succeeded or Failed" May 12 12:02:52.886: INFO: Trying to get logs from node latest-worker pod pod-secrets-14bd23b4-75c9-470f-aecd-f1a9777dfc29 container secret-volume-test: STEP: delete the pod May 12 12:02:52.983: INFO: Waiting for pod pod-secrets-14bd23b4-75c9-470f-aecd-f1a9777dfc29 to disappear May 12 12:02:53.035: INFO: Pod pod-secrets-14bd23b4-75c9-470f-aecd-f1a9777dfc29 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:02:53.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1885" for this suite. 
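What the test wires up is a secret volume with an explicit key-to-path mapping and a per-file mode. A sketch with illustrative key and path names:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

func int32Ptr(i int32) *int32 { return &i }

// secretVolumeWithItemMode sketches the mapping verified above: one secret
// key projected to a chosen path with an explicit per-file mode.
func secretVolumeWithItemMode() corev1.Volume {
	return corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "secret-test-map",
				Items: []corev1.KeyToPath{{
					Key:  "data-1",
					Path: "new-path-data-1",
					Mode: int32Ptr(0400), // shows up as -r-------- inside the mount
				}},
			},
		},
	}
}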
• [SLOW TEST:7.391 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":248,"skipped":4136,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:02:53.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 12 12:02:53.199: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:02:56.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2802" for this suite. 
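The defaulting being verified lives in the CRD's structural schema: a property default is applied by the API server to incoming requests and to stored objects read back without the field set, which matches the test name "for requests and from storage". A sketch of such a schema fragment using the apiextensions v1 types; the property name and default value are illustrative:

package sketch

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

// defaultedSchema sketches the mechanism this test relies on: a default in a
// structural schema, applied on create/update and when reading from etcd.
func defaultedSchema() apiextensionsv1.JSONSchemaProps {
	return apiextensionsv1.JSONSchemaProps{
		Type: "object",
		Properties: map[string]apiextensionsv1.JSONSchemaProps{
			"replicas": {
				Type:    "integer",
				Default: &apiextensionsv1.JSON{Raw: []byte(`1`)},
			},
		},
	}
}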
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":288,"completed":249,"skipped":4140,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:02:56.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod var-expansion-92a4c7a9-9adb-4b47-a82c-a6d1f35801f2 STEP: updating the pod May 12 12:03:15.237: INFO: Successfully updated pod "var-expansion-92a4c7a9-9adb-4b47-a82c-a6d1f35801f2" STEP: waiting for pod and container restart STEP: Failing liveness probe May 12 12:03:15.260: INFO: ExecWithOptions {Command:[/bin/sh -c rm /volume_mount/foo/test.log] Namespace:var-expansion-3623 PodName:var-expansion-92a4c7a9-9adb-4b47-a82c-a6d1f35801f2 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 12:03:15.260: INFO: >>> kubeConfig: /root/.kube/config I0512 12:03:15.288601 7 log.go:172] (0xc002a7f970) (0xc001b18be0) Create stream I0512 12:03:15.288644 7 log.go:172] (0xc002a7f970) (0xc001b18be0) Stream added, broadcasting: 1 I0512 12:03:15.290535 7 log.go:172] (0xc002a7f970) Reply frame received for 1 I0512 12:03:15.290564 7 log.go:172] (0xc002a7f970) (0xc001b18d20) Create stream I0512 12:03:15.290574 7 log.go:172] (0xc002a7f970) (0xc001b18d20) Stream added, broadcasting: 3 I0512 12:03:15.291335 7 log.go:172] (0xc002a7f970) Reply frame received for 3 I0512 12:03:15.291368 7 log.go:172] (0xc002a7f970) (0xc0012c8000) Create stream I0512 12:03:15.291384 7 log.go:172] (0xc002a7f970) (0xc0012c8000) Stream added, broadcasting: 5 I0512 12:03:15.292022 7 log.go:172] (0xc002a7f970) Reply frame received for 5 I0512 12:03:15.345994 7 log.go:172] (0xc002a7f970) Data frame received for 3 I0512 12:03:15.346045 7 log.go:172] (0xc001b18d20) (3) Data frame handling I0512 12:03:15.346174 7 log.go:172] (0xc002a7f970) Data frame received for 5 I0512 12:03:15.346201 7 log.go:172] (0xc0012c8000) (5) Data frame handling I0512 12:03:15.347505 7 log.go:172] (0xc002a7f970) Data frame received for 1 I0512 12:03:15.347537 7 log.go:172] (0xc001b18be0) (1) Data frame handling I0512 12:03:15.347577 7 log.go:172] (0xc001b18be0) (1) Data frame sent I0512 12:03:15.347597 7 log.go:172] (0xc002a7f970) (0xc001b18be0) Stream removed, broadcasting: 1 I0512 12:03:15.347619 7 log.go:172] (0xc002a7f970) Go away received I0512 12:03:15.347798 7 log.go:172] (0xc002a7f970) (0xc001b18be0) Stream removed, broadcasting: 1 I0512 12:03:15.347826 7 log.go:172] (0xc002a7f970) (0xc001b18d20) Stream removed, broadcasting: 3 I0512 12:03:15.347846 7 log.go:172] (0xc002a7f970) (0xc0012c8000) Stream removed, broadcasting: 5 May 12 12:03:15.347: INFO: Pod exec output: / STEP: 
Waiting for container to restart May 12 12:03:15.370: INFO: Container dapi-container, restarts: 0 May 12 12:03:25.374: INFO: Container dapi-container, restarts: 0 May 12 12:03:35.374: INFO: Container dapi-container, restarts: 0 May 12 12:03:45.373: INFO: Container dapi-container, restarts: 0 May 12 12:03:55.374: INFO: Container dapi-container, restarts: 1 May 12 12:03:55.374: INFO: Container has restart count: 1 STEP: Rewriting the file May 12 12:03:55.374: INFO: ExecWithOptions {Command:[/bin/sh -c echo test-after > /volume_mount/foo/test.log] Namespace:var-expansion-3623 PodName:var-expansion-92a4c7a9-9adb-4b47-a82c-a6d1f35801f2 ContainerName:side-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 12:03:55.374: INFO: >>> kubeConfig: /root/.kube/config I0512 12:03:55.407354 7 log.go:172] (0xc002d19810) (0xc001f246e0) Create stream I0512 12:03:55.407379 7 log.go:172] (0xc002d19810) (0xc001f246e0) Stream added, broadcasting: 1 I0512 12:03:55.409501 7 log.go:172] (0xc002d19810) Reply frame received for 1 I0512 12:03:55.409553 7 log.go:172] (0xc002d19810) (0xc001b18e60) Create stream I0512 12:03:55.409573 7 log.go:172] (0xc002d19810) (0xc001b18e60) Stream added, broadcasting: 3 I0512 12:03:55.410371 7 log.go:172] (0xc002d19810) Reply frame received for 3 I0512 12:03:55.410410 7 log.go:172] (0xc002d19810) (0xc0012c8140) Create stream I0512 12:03:55.410424 7 log.go:172] (0xc002d19810) (0xc0012c8140) Stream added, broadcasting: 5 I0512 12:03:55.411042 7 log.go:172] (0xc002d19810) Reply frame received for 5 I0512 12:03:55.473880 7 log.go:172] (0xc002d19810) Data frame received for 5 I0512 12:03:55.473902 7 log.go:172] (0xc0012c8140) (5) Data frame handling I0512 12:03:55.473937 7 log.go:172] (0xc002d19810) Data frame received for 3 I0512 12:03:55.473968 7 log.go:172] (0xc001b18e60) (3) Data frame handling I0512 12:03:55.475265 7 log.go:172] (0xc002d19810) Data frame received for 1 I0512 12:03:55.475298 7 log.go:172] (0xc001f246e0) (1) Data frame handling I0512 12:03:55.475309 7 log.go:172] (0xc001f246e0) (1) Data frame sent I0512 12:03:55.475318 7 log.go:172] (0xc002d19810) (0xc001f246e0) Stream removed, broadcasting: 1 I0512 12:03:55.475328 7 log.go:172] (0xc002d19810) Go away received I0512 12:03:55.475435 7 log.go:172] (0xc002d19810) (0xc001f246e0) Stream removed, broadcasting: 1 I0512 12:03:55.475464 7 log.go:172] (0xc002d19810) (0xc001b18e60) Stream removed, broadcasting: 3 I0512 12:03:55.475480 7 log.go:172] (0xc002d19810) (0xc0012c8140) Stream removed, broadcasting: 5 May 12 12:03:55.475: INFO: Exec stderr: "" May 12 12:03:55.475: INFO: Pod exec output: STEP: Waiting for container to stop restarting May 12 12:04:27.555: INFO: Container has restart count: 2 May 12 12:05:29.554: INFO: Container restart has stabilized STEP: test for subpath mounted with old value May 12 12:05:29.556: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /volume_mount/foo/test.log] Namespace:var-expansion-3623 PodName:var-expansion-92a4c7a9-9adb-4b47-a82c-a6d1f35801f2 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 12:05:29.556: INFO: >>> kubeConfig: /root/.kube/config I0512 12:05:29.604054 7 log.go:172] (0xc002d19810) (0xc001037860) Create stream I0512 12:05:29.604085 7 log.go:172] (0xc002d19810) (0xc001037860) Stream added, broadcasting: 1 I0512 12:05:29.605510 7 log.go:172] (0xc002d19810) Reply frame received for 1 I0512 12:05:29.605546 7 log.go:172] (0xc002d19810) (0xc001389540) Create stream I0512 12:05:29.605559 
7 log.go:172] (0xc002d19810) (0xc001389540) Stream added, broadcasting: 3 I0512 12:05:29.606286 7 log.go:172] (0xc002d19810) Reply frame received for 3 I0512 12:05:29.606322 7 log.go:172] (0xc002d19810) (0xc0013895e0) Create stream I0512 12:05:29.606334 7 log.go:172] (0xc002d19810) (0xc0013895e0) Stream added, broadcasting: 5 I0512 12:05:29.607101 7 log.go:172] (0xc002d19810) Reply frame received for 5 I0512 12:05:29.654887 7 log.go:172] (0xc002d19810) Data frame received for 3 I0512 12:05:29.654905 7 log.go:172] (0xc001389540) (3) Data frame handling I0512 12:05:29.655048 7 log.go:172] (0xc002d19810) Data frame received for 5 I0512 12:05:29.655056 7 log.go:172] (0xc0013895e0) (5) Data frame handling I0512 12:05:29.656348 7 log.go:172] (0xc002d19810) Data frame received for 1 I0512 12:05:29.656358 7 log.go:172] (0xc001037860) (1) Data frame handling I0512 12:05:29.656364 7 log.go:172] (0xc001037860) (1) Data frame sent I0512 12:05:29.656371 7 log.go:172] (0xc002d19810) (0xc001037860) Stream removed, broadcasting: 1 I0512 12:05:29.656438 7 log.go:172] (0xc002d19810) (0xc001037860) Stream removed, broadcasting: 1 I0512 12:05:29.656453 7 log.go:172] (0xc002d19810) (0xc001389540) Stream removed, broadcasting: 3 I0512 12:05:29.656596 7 log.go:172] (0xc002d19810) Go away received I0512 12:05:29.656647 7 log.go:172] (0xc002d19810) (0xc0013895e0) Stream removed, broadcasting: 5 May 12 12:05:29.689: INFO: ExecWithOptions {Command:[/bin/sh -c test ! -f /volume_mount/newsubpath/test.log] Namespace:var-expansion-3623 PodName:var-expansion-92a4c7a9-9adb-4b47-a82c-a6d1f35801f2 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 12:05:29.689: INFO: >>> kubeConfig: /root/.kube/config I0512 12:05:29.716175 7 log.go:172] (0xc005796370) (0xc00082cdc0) Create stream I0512 12:05:29.716201 7 log.go:172] (0xc005796370) (0xc00082cdc0) Stream added, broadcasting: 1 I0512 12:05:29.717291 7 log.go:172] (0xc005796370) Reply frame received for 1 I0512 12:05:29.717311 7 log.go:172] (0xc005796370) (0xc001037b80) Create stream I0512 12:05:29.717316 7 log.go:172] (0xc005796370) (0xc001037b80) Stream added, broadcasting: 3 I0512 12:05:29.717959 7 log.go:172] (0xc005796370) Reply frame received for 3 I0512 12:05:29.717993 7 log.go:172] (0xc005796370) (0xc00082d040) Create stream I0512 12:05:29.718003 7 log.go:172] (0xc005796370) (0xc00082d040) Stream added, broadcasting: 5 I0512 12:05:29.718542 7 log.go:172] (0xc005796370) Reply frame received for 5 I0512 12:05:29.787214 7 log.go:172] (0xc005796370) Data frame received for 3 I0512 12:05:29.787255 7 log.go:172] (0xc001037b80) (3) Data frame handling I0512 12:05:29.787294 7 log.go:172] (0xc005796370) Data frame received for 5 I0512 12:05:29.787317 7 log.go:172] (0xc00082d040) (5) Data frame handling I0512 12:05:29.788550 7 log.go:172] (0xc005796370) Data frame received for 1 I0512 12:05:29.788589 7 log.go:172] (0xc00082cdc0) (1) Data frame handling I0512 12:05:29.788621 7 log.go:172] (0xc00082cdc0) (1) Data frame sent I0512 12:05:29.788655 7 log.go:172] (0xc005796370) (0xc00082cdc0) Stream removed, broadcasting: 1 I0512 12:05:29.788700 7 log.go:172] (0xc005796370) Go away received I0512 12:05:29.788749 7 log.go:172] (0xc005796370) (0xc00082cdc0) Stream removed, broadcasting: 1 I0512 12:05:29.788826 7 log.go:172] (0xc005796370) (0xc001037b80) Stream removed, broadcasting: 3 I0512 12:05:29.788850 7 log.go:172] (0xc005796370) (0xc00082d040) Stream removed, broadcasting: 5 May 12 12:05:29.788: INFO: Deleting pod 
"var-expansion-92a4c7a9-9adb-4b47-a82c-a6d1f35801f2" in namespace "var-expansion-3623" May 12 12:05:29.794: INFO: Wait up to 5m0s for pod "var-expansion-92a4c7a9-9adb-4b47-a82c-a6d1f35801f2" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:06:05.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3623" for this suite. • [SLOW TEST:189.379 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]","total":288,"completed":250,"skipped":4150,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:06:05.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-568 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-568 to expose endpoints map[] May 12 12:06:06.304: INFO: Get endpoints failed (37.544155ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 12 12:06:07.308: INFO: successfully validated that service endpoint-test2 in namespace services-568 exposes endpoints map[] (1.041257651s elapsed) STEP: Creating pod pod1 in namespace services-568 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-568 to expose endpoints map[pod1:[80]] May 12 12:06:11.834: INFO: successfully validated that service endpoint-test2 in namespace services-568 exposes endpoints map[pod1:[80]] (4.518905071s elapsed) STEP: Creating pod pod2 in namespace services-568 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-568 to expose endpoints map[pod1:[80] pod2:[80]] May 12 12:06:16.162: INFO: successfully validated that service endpoint-test2 in namespace services-568 exposes endpoints map[pod1:[80] pod2:[80]] (4.324006817s elapsed) STEP: Deleting pod pod1 in namespace services-568 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-568 to expose endpoints map[pod2:[80]] May 12 12:06:17.251: INFO: successfully validated that service endpoint-test2 in namespace services-568 exposes endpoints map[pod2:[80]] (1.083836259s elapsed) STEP: Deleting pod pod2 in namespace services-568 STEP: waiting up to 3m0s for service endpoint-test2 in 
namespace services-568 to expose endpoints map[] May 12 12:06:18.318: INFO: successfully validated that service endpoint-test2 in namespace services-568 exposes endpoints map[] (1.062403743s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:06:18.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-568" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:12.735 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":288,"completed":251,"skipped":4172,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:06:18.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 12 12:06:18.727: INFO: Waiting up to 5m0s for pod "downwardapi-volume-679ce0af-018e-444f-91ea-ff407d5385f1" in namespace "projected-8652" to be "Succeeded or Failed" May 12 12:06:18.743: INFO: Pod "downwardapi-volume-679ce0af-018e-444f-91ea-ff407d5385f1": Phase="Pending", Reason="", readiness=false. Elapsed: 15.880738ms May 12 12:06:20.747: INFO: Pod "downwardapi-volume-679ce0af-018e-444f-91ea-ff407d5385f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019612921s May 12 12:06:23.247: INFO: Pod "downwardapi-volume-679ce0af-018e-444f-91ea-ff407d5385f1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.520180834s STEP: Saw pod success May 12 12:06:23.247: INFO: Pod "downwardapi-volume-679ce0af-018e-444f-91ea-ff407d5385f1" satisfied condition "Succeeded or Failed" May 12 12:06:23.251: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-679ce0af-018e-444f-91ea-ff407d5385f1 container client-container: STEP: delete the pod May 12 12:06:23.339: INFO: Waiting for pod downwardapi-volume-679ce0af-018e-444f-91ea-ff407d5385f1 to disappear May 12 12:06:23.402: INFO: Pod downwardapi-volume-679ce0af-018e-444f-91ea-ff407d5385f1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:06:23.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8652" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":252,"skipped":4178,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:06:23.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-b023b864-c478-49d5-a64c-7792afc786c0 in namespace container-probe-5168 May 12 12:06:27.695: INFO: Started pod liveness-b023b864-c478-49d5-a64c-7792afc786c0 in namespace container-probe-5168 STEP: checking the pod's current state and verifying that restartCount is present May 12 12:06:27.698: INFO: Initial restart count of pod liveness-b023b864-c478-49d5-a64c-7792afc786c0 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:10:29.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5168" for this suite. 
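The probe in play here is a plain TCP connect; because the container really listens on 8080, the probe keeps passing and the restart count stays at 0 across the roughly four-minute watch. A sketch of such a probe; timing values are typical, not copied from the test source:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// tcpLiveness sketches a liveness probe that succeeds as long as something
// accepts TCP connections on 8080, so the container is never restarted.
func tcpLiveness() *corev1.Probe {
	return &corev1.Probe{
		Handler: corev1.Handler{ // renamed ProbeHandler in k8s.io/api >= v0.23
			TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(8080)},
		},
		InitialDelaySeconds: 15,
		PeriodSeconds:       10,
		FailureThreshold:    3,
	}
}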
• [SLOW TEST:246.210 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":288,"completed":253,"skipped":4213,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:10:29.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:10:47.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9868" for this suite. STEP: Destroying namespace "nsdeletetest-6712" for this suite. May 12 12:10:47.940: INFO: Namespace nsdeletetest-6712 was already deleted STEP: Destroying namespace "nsdeletetest-3237" for this suite. 
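The operation being exercised is a single API call: deleting a namespace cascades to every object inside it, so a namespace recreated under the same name comes back empty. A sketch, assuming an already-wired clientset:

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteNamespace sketches the call under test: namespace deletion removes
// all contained objects (including pods) before the namespace itself goes.
func deleteNamespace(cs kubernetes.Interface, name string) error {
	return cs.CoreV1().Namespaces().Delete(context.TODO(), name, metav1.DeleteOptions{})
}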
• [SLOW TEST:18.323 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":288,"completed":254,"skipped":4223,"failed":0} S ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:10:47.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-6843 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-6843 STEP: creating replication controller externalsvc in namespace services-6843 I0512 12:10:48.220813 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-6843, replica count: 2 I0512 12:10:51.271204 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 12:10:54.271403 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 12 12:10:54.320: INFO: Creating new exec pod May 12 12:10:58.440: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6843 execpod9px87 -- /bin/sh -x -c nslookup clusterip-service' May 12 12:11:02.290: INFO: stderr: "I0512 12:11:02.196098 4092 log.go:172] (0xc00041e000) (0xc00069ed20) Create stream\nI0512 12:11:02.196155 4092 log.go:172] (0xc00041e000) (0xc00069ed20) Stream added, broadcasting: 1\nI0512 12:11:02.198755 4092 log.go:172] (0xc00041e000) Reply frame received for 1\nI0512 12:11:02.198790 4092 log.go:172] (0xc00041e000) (0xc0006925a0) Create stream\nI0512 12:11:02.198801 4092 log.go:172] (0xc00041e000) (0xc0006925a0) Stream added, broadcasting: 3\nI0512 12:11:02.199730 4092 log.go:172] (0xc00041e000) Reply frame received for 3\nI0512 12:11:02.199761 4092 log.go:172] (0xc00041e000) (0xc000692e60) Create stream\nI0512 12:11:02.199768 4092 log.go:172] (0xc00041e000) (0xc000692e60) Stream added, broadcasting: 5\nI0512 12:11:02.200604 4092 log.go:172] (0xc00041e000) Reply frame received for 5\nI0512 12:11:02.278055 4092 log.go:172] (0xc00041e000) Data frame received for 5\nI0512 
12:11:02.278077 4092 log.go:172] (0xc000692e60) (5) Data frame handling\nI0512 12:11:02.278088 4092 log.go:172] (0xc000692e60) (5) Data frame sent\n+ nslookup clusterip-service\nI0512 12:11:02.283259 4092 log.go:172] (0xc00041e000) Data frame received for 3\nI0512 12:11:02.283284 4092 log.go:172] (0xc0006925a0) (3) Data frame handling\nI0512 12:11:02.283301 4092 log.go:172] (0xc0006925a0) (3) Data frame sent\nI0512 12:11:02.283829 4092 log.go:172] (0xc00041e000) Data frame received for 3\nI0512 12:11:02.283840 4092 log.go:172] (0xc0006925a0) (3) Data frame handling\nI0512 12:11:02.283850 4092 log.go:172] (0xc0006925a0) (3) Data frame sent\nI0512 12:11:02.284214 4092 log.go:172] (0xc00041e000) Data frame received for 3\nI0512 12:11:02.284233 4092 log.go:172] (0xc0006925a0) (3) Data frame handling\nI0512 12:11:02.284351 4092 log.go:172] (0xc00041e000) Data frame received for 5\nI0512 12:11:02.284364 4092 log.go:172] (0xc000692e60) (5) Data frame handling\nI0512 12:11:02.285961 4092 log.go:172] (0xc00041e000) Data frame received for 1\nI0512 12:11:02.285982 4092 log.go:172] (0xc00069ed20) (1) Data frame handling\nI0512 12:11:02.286000 4092 log.go:172] (0xc00069ed20) (1) Data frame sent\nI0512 12:11:02.286018 4092 log.go:172] (0xc00041e000) (0xc00069ed20) Stream removed, broadcasting: 1\nI0512 12:11:02.286214 4092 log.go:172] (0xc00041e000) Go away received\nI0512 12:11:02.286330 4092 log.go:172] (0xc00041e000) (0xc00069ed20) Stream removed, broadcasting: 1\nI0512 12:11:02.286345 4092 log.go:172] (0xc00041e000) (0xc0006925a0) Stream removed, broadcasting: 3\nI0512 12:11:02.286352 4092 log.go:172] (0xc00041e000) (0xc000692e60) Stream removed, broadcasting: 5\n" May 12 12:11:02.291: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-6843.svc.cluster.local\tcanonical name = externalsvc.services-6843.svc.cluster.local.\nName:\texternalsvc.services-6843.svc.cluster.local\nAddress: 10.99.18.3\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6843, will wait for the garbage collector to delete the pods May 12 12:11:02.348: INFO: Deleting ReplicationController externalsvc took: 4.835755ms May 12 12:11:02.749: INFO: Terminating ReplicationController externalsvc pods took: 400.390328ms May 12 12:11:15.368: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:11:15.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6843" for this suite. 
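The type flip itself is a small spec mutation; afterwards the service's in-cluster DNS record becomes a CNAME to the target, exactly as the nslookup output above shows. A sketch, with the target hostname reused from the log for illustration:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// toExternalName sketches the mutation performed above: flipping an existing
// ClusterIP service to type=ExternalName.
func toExternalName(svc *corev1.Service) {
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ExternalName = "externalsvc.services-6843.svc.cluster.local"
	svc.Spec.ClusterIP = "" // an ExternalName service carries no cluster IP
}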
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:27.520 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":288,"completed":255,"skipped":4224,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:11:15.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 12 12:11:15.586: INFO: Waiting up to 5m0s for pod "downwardapi-volume-60ccdaed-aefb-4cb3-ac97-444dd57350fd" in namespace "downward-api-3110" to be "Succeeded or Failed" May 12 12:11:15.595: INFO: Pod "downwardapi-volume-60ccdaed-aefb-4cb3-ac97-444dd57350fd": Phase="Pending", Reason="", readiness=false. Elapsed: 9.211374ms May 12 12:11:17.788: INFO: Pod "downwardapi-volume-60ccdaed-aefb-4cb3-ac97-444dd57350fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.202282624s May 12 12:11:19.793: INFO: Pod "downwardapi-volume-60ccdaed-aefb-4cb3-ac97-444dd57350fd": Phase="Running", Reason="", readiness=true. Elapsed: 4.206833125s May 12 12:11:21.796: INFO: Pod "downwardapi-volume-60ccdaed-aefb-4cb3-ac97-444dd57350fd": Phase="Running", Reason="", readiness=true. Elapsed: 6.210621002s May 12 12:11:23.801: INFO: Pod "downwardapi-volume-60ccdaed-aefb-4cb3-ac97-444dd57350fd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.21549786s STEP: Saw pod success May 12 12:11:23.801: INFO: Pod "downwardapi-volume-60ccdaed-aefb-4cb3-ac97-444dd57350fd" satisfied condition "Succeeded or Failed" May 12 12:11:23.803: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-60ccdaed-aefb-4cb3-ac97-444dd57350fd container client-container: STEP: delete the pod May 12 12:11:23.935: INFO: Waiting for pod downwardapi-volume-60ccdaed-aefb-4cb3-ac97-444dd57350fd to disappear May 12 12:11:23.975: INFO: Pod downwardapi-volume-60ccdaed-aefb-4cb3-ac97-444dd57350fd no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:11:23.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3110" for this suite. • [SLOW TEST:8.518 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":256,"skipped":4269,"failed":0} SS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:11:23.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-5333 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-5333 STEP: Deleting pre-stop pod May 12 12:11:53.366: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:11:53.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-5333" for this suite. • [SLOW TEST:29.820 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":288,"completed":257,"skipped":4271,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:11:53.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-a8da4079-93e3-4cbe-9be8-3b670cccee05 in namespace container-probe-163 May 12 12:12:00.336: INFO: Started pod liveness-a8da4079-93e3-4cbe-9be8-3b670cccee05 in namespace container-probe-163 STEP: checking the pod's current state and verifying that restartCount is present May 12 12:12:00.338: INFO: Initial restart count of pod liveness-a8da4079-93e3-4cbe-9be8-3b670cccee05 is 0 May 12 12:12:21.279: INFO: Restart count of pod container-probe-163/liveness-a8da4079-93e3-4cbe-9be8-3b670cccee05 is now 1 (20.940340242s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:12:21.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-163" for this suite. 
• [SLOW TEST:27.520 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":258,"skipped":4303,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:12:21.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 12 12:12:21.411: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c6e6b0e7-bb1c-4b96-8789-e74ad9caf1c6" in namespace "projected-7059" to be "Succeeded or Failed" May 12 12:12:21.442: INFO: Pod "downwardapi-volume-c6e6b0e7-bb1c-4b96-8789-e74ad9caf1c6": Phase="Pending", Reason="", readiness=false. Elapsed: 30.878535ms May 12 12:12:23.445: INFO: Pod "downwardapi-volume-c6e6b0e7-bb1c-4b96-8789-e74ad9caf1c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034524594s May 12 12:12:25.448: INFO: Pod "downwardapi-volume-c6e6b0e7-bb1c-4b96-8789-e74ad9caf1c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037172254s May 12 12:12:27.451: INFO: Pod "downwardapi-volume-c6e6b0e7-bb1c-4b96-8789-e74ad9caf1c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040063935s STEP: Saw pod success May 12 12:12:27.451: INFO: Pod "downwardapi-volume-c6e6b0e7-bb1c-4b96-8789-e74ad9caf1c6" satisfied condition "Succeeded or Failed" May 12 12:12:27.453: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-c6e6b0e7-bb1c-4b96-8789-e74ad9caf1c6 container client-container: STEP: delete the pod May 12 12:12:27.531: INFO: Waiting for pod downwardapi-volume-c6e6b0e7-bb1c-4b96-8789-e74ad9caf1c6 to disappear May 12 12:12:27.536: INFO: Pod downwardapi-volume-c6e6b0e7-bb1c-4b96-8789-e74ad9caf1c6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:12:27.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7059" for this suite. 
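The "podname only" projection read back above is a single downward API file backed by a fieldRef. A sketch; the file path is illustrative:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// podnameFile sketches the projection the test verifies: the pod's own name
// exposed as a file through the downward API.
func podnameFile() corev1.DownwardAPIVolumeFile {
	return corev1.DownwardAPIVolumeFile{
		Path:     "podname",
		FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
	}
}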
• [SLOW TEST:6.225 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":288,"completed":259,"skipped":4311,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:12:27.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 12 12:12:27.636: INFO: Creating ReplicaSet my-hostname-basic-ffc06400-248e-4756-a6f9-9de514272f43 May 12 12:12:27.674: INFO: Pod name my-hostname-basic-ffc06400-248e-4756-a6f9-9de514272f43: Found 0 pods out of 1 May 12 12:12:32.861: INFO: Pod name my-hostname-basic-ffc06400-248e-4756-a6f9-9de514272f43: Found 1 pods out of 1 May 12 12:12:32.861: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-ffc06400-248e-4756-a6f9-9de514272f43" is running May 12 12:12:32.863: INFO: Pod "my-hostname-basic-ffc06400-248e-4756-a6f9-9de514272f43-mnf5w" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 12:12:27 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 12:12:32 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 12:12:32 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 12:12:27 +0000 UTC Reason: Message:}]) May 12 12:12:32.864: INFO: Trying to dial the pod May 12 12:12:37.908: INFO: Controller my-hostname-basic-ffc06400-248e-4756-a6f9-9de514272f43: Got expected result from replica 1 [my-hostname-basic-ffc06400-248e-4756-a6f9-9de514272f43-mnf5w]: "my-hostname-basic-ffc06400-248e-4756-a6f9-9de514272f43-mnf5w", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:12:37.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3885" for this suite. 
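The object under test is a one-replica ReplicaSet whose pod serves its own hostname, which is what the "Got expected result from replica" dial verifies. A sketch; labels are illustrative and the image/tag is a guess at the suite's serve-hostname helper, not confirmed by the log:

package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

// hostnameReplicaSet sketches the object created above: one replica of a
// public image that responds to HTTP with its own pod hostname.
func hostnameReplicaSet() *appsv1.ReplicaSet {
	labels := map[string]string{"name": "my-hostname-basic"}
	return &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "my-hostname-basic",
						Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.12", // illustrative registry/tag
						Args:  []string{"serve-hostname"},
					}},
				},
			},
		},
	}
}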
• [SLOW TEST:10.367 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":260,"skipped":4332,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:12:37.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 12:12:38.529: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 12:12:40.971: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724882358, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724882358, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724882358, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724882358, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 12:12:44.028: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the 
AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:12:56.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1708" for this suite. STEP: Destroying namespace "webhook-1708-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:19.084 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":288,"completed":261,"skipped":4355,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:12:57.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-9703 STEP: creating service affinity-nodeport in namespace services-9703 STEP: creating replication controller affinity-nodeport in namespace services-9703 I0512 12:12:57.550168 7 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-9703, replica count: 3 I0512 12:13:00.600512 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 12:13:03.600740 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 12:13:06.601011 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 12 12:13:06.617: INFO: Creating new exec pod May 12 12:13:13.687: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9703 execpod-affinityqw2rj -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' May 12 12:13:14.543: INFO: stderr: "I0512 12:13:14.426885 4124 log.go:172] (0xc00003a160) (0xc00024c500) Create stream\nI0512 12:13:14.426942 4124 log.go:172] (0xc00003a160) (0xc00024c500) Stream added, broadcasting: 1\nI0512 12:13:14.429284 4124 log.go:172] (0xc00003a160) Reply frame received for 1\nI0512 12:13:14.429310 4124 log.go:172] (0xc00003a160) (0xc0006bef00) 
Create stream\nI0512 12:13:14.429320 4124 log.go:172] (0xc00003a160) (0xc0006bef00) Stream added, broadcasting: 3\nI0512 12:13:14.430412 4124 log.go:172] (0xc00003a160) Reply frame received for 3\nI0512 12:13:14.430431 4124 log.go:172] (0xc00003a160) (0xc000682aa0) Create stream\nI0512 12:13:14.430437 4124 log.go:172] (0xc00003a160) (0xc000682aa0) Stream added, broadcasting: 5\nI0512 12:13:14.431165 4124 log.go:172] (0xc00003a160) Reply frame received for 5\nI0512 12:13:14.537458 4124 log.go:172] (0xc00003a160) Data frame received for 5\nI0512 12:13:14.537485 4124 log.go:172] (0xc000682aa0) (5) Data frame handling\nI0512 12:13:14.537499 4124 log.go:172] (0xc000682aa0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport 80\nI0512 12:13:14.537667 4124 log.go:172] (0xc00003a160) Data frame received for 5\nI0512 12:13:14.537696 4124 log.go:172] (0xc000682aa0) (5) Data frame handling\nI0512 12:13:14.537717 4124 log.go:172] (0xc000682aa0) (5) Data frame sent\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0512 12:13:14.537846 4124 log.go:172] (0xc00003a160) Data frame received for 3\nI0512 12:13:14.537855 4124 log.go:172] (0xc0006bef00) (3) Data frame handling\nI0512 12:13:14.537947 4124 log.go:172] (0xc00003a160) Data frame received for 5\nI0512 12:13:14.537982 4124 log.go:172] (0xc000682aa0) (5) Data frame handling\nI0512 12:13:14.539371 4124 log.go:172] (0xc00003a160) Data frame received for 1\nI0512 12:13:14.539391 4124 log.go:172] (0xc00024c500) (1) Data frame handling\nI0512 12:13:14.539400 4124 log.go:172] (0xc00024c500) (1) Data frame sent\nI0512 12:13:14.539420 4124 log.go:172] (0xc00003a160) (0xc00024c500) Stream removed, broadcasting: 1\nI0512 12:13:14.539462 4124 log.go:172] (0xc00003a160) Go away received\nI0512 12:13:14.539720 4124 log.go:172] (0xc00003a160) (0xc00024c500) Stream removed, broadcasting: 1\nI0512 12:13:14.539745 4124 log.go:172] (0xc00003a160) (0xc0006bef00) Stream removed, broadcasting: 3\nI0512 12:13:14.539750 4124 log.go:172] (0xc00003a160) (0xc000682aa0) Stream removed, broadcasting: 5\n" May 12 12:13:14.543: INFO: stdout: "" May 12 12:13:14.543: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9703 execpod-affinityqw2rj -- /bin/sh -x -c nc -zv -t -w 2 10.101.3.167 80' May 12 12:13:14.770: INFO: stderr: "I0512 12:13:14.710420 4145 log.go:172] (0xc0009a00b0) (0xc000639360) Create stream\nI0512 12:13:14.710497 4145 log.go:172] (0xc0009a00b0) (0xc000639360) Stream added, broadcasting: 1\nI0512 12:13:14.713068 4145 log.go:172] (0xc0009a00b0) Reply frame received for 1\nI0512 12:13:14.713099 4145 log.go:172] (0xc0009a00b0) (0xc0005863c0) Create stream\nI0512 12:13:14.713249 4145 log.go:172] (0xc0009a00b0) (0xc0005863c0) Stream added, broadcasting: 3\nI0512 12:13:14.714008 4145 log.go:172] (0xc0009a00b0) Reply frame received for 3\nI0512 12:13:14.714047 4145 log.go:172] (0xc0009a00b0) (0xc00050e0a0) Create stream\nI0512 12:13:14.714071 4145 log.go:172] (0xc0009a00b0) (0xc00050e0a0) Stream added, broadcasting: 5\nI0512 12:13:14.714785 4145 log.go:172] (0xc0009a00b0) Reply frame received for 5\nI0512 12:13:14.764961 4145 log.go:172] (0xc0009a00b0) Data frame received for 3\nI0512 12:13:14.764979 4145 log.go:172] (0xc0005863c0) (3) Data frame handling\nI0512 12:13:14.765052 4145 log.go:172] (0xc0009a00b0) Data frame received for 5\nI0512 12:13:14.765061 4145 log.go:172] (0xc00050e0a0) (5) Data frame handling\nI0512 12:13:14.765069 4145 log.go:172] (0xc00050e0a0) (5) 
Data frame sent\nI0512 12:13:14.765073 4145 log.go:172] (0xc0009a00b0) Data frame received for 5\nI0512 12:13:14.765077 4145 log.go:172] (0xc00050e0a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.101.3.167 80\nConnection to 10.101.3.167 80 port [tcp/http] succeeded!\nI0512 12:13:14.766517 4145 log.go:172] (0xc0009a00b0) Data frame received for 1\nI0512 12:13:14.766561 4145 log.go:172] (0xc000639360) (1) Data frame handling\nI0512 12:13:14.766578 4145 log.go:172] (0xc000639360) (1) Data frame sent\nI0512 12:13:14.766713 4145 log.go:172] (0xc0009a00b0) (0xc000639360) Stream removed, broadcasting: 1\nI0512 12:13:14.766753 4145 log.go:172] (0xc0009a00b0) Go away received\nI0512 12:13:14.767029 4145 log.go:172] (0xc0009a00b0) (0xc000639360) Stream removed, broadcasting: 1\nI0512 12:13:14.767041 4145 log.go:172] (0xc0009a00b0) (0xc0005863c0) Stream removed, broadcasting: 3\nI0512 12:13:14.767048 4145 log.go:172] (0xc0009a00b0) (0xc00050e0a0) Stream removed, broadcasting: 5\n" May 12 12:13:14.770: INFO: stdout: "" May 12 12:13:14.770: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9703 execpod-affinityqw2rj -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31654' May 12 12:13:14.950: INFO: stderr: "I0512 12:13:14.898114 4166 log.go:172] (0xc00003b970) (0xc000832f00) Create stream\nI0512 12:13:14.898153 4166 log.go:172] (0xc00003b970) (0xc000832f00) Stream added, broadcasting: 1\nI0512 12:13:14.899930 4166 log.go:172] (0xc00003b970) Reply frame received for 1\nI0512 12:13:14.899996 4166 log.go:172] (0xc00003b970) (0xc00099c0a0) Create stream\nI0512 12:13:14.900023 4166 log.go:172] (0xc00003b970) (0xc00099c0a0) Stream added, broadcasting: 3\nI0512 12:13:14.900776 4166 log.go:172] (0xc00003b970) Reply frame received for 3\nI0512 12:13:14.900797 4166 log.go:172] (0xc00003b970) (0xc00060bcc0) Create stream\nI0512 12:13:14.900808 4166 log.go:172] (0xc00003b970) (0xc00060bcc0) Stream added, broadcasting: 5\nI0512 12:13:14.901455 4166 log.go:172] (0xc00003b970) Reply frame received for 5\nI0512 12:13:14.943378 4166 log.go:172] (0xc00003b970) Data frame received for 3\nI0512 12:13:14.943446 4166 log.go:172] (0xc00099c0a0) (3) Data frame handling\nI0512 12:13:14.943468 4166 log.go:172] (0xc00003b970) Data frame received for 5\nI0512 12:13:14.943481 4166 log.go:172] (0xc00060bcc0) (5) Data frame handling\nI0512 12:13:14.943491 4166 log.go:172] (0xc00060bcc0) (5) Data frame sent\nI0512 12:13:14.943507 4166 log.go:172] (0xc00003b970) Data frame received for 5\nI0512 12:13:14.943514 4166 log.go:172] (0xc00060bcc0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31654\nConnection to 172.17.0.13 31654 port [tcp/31654] succeeded!\nI0512 12:13:14.944763 4166 log.go:172] (0xc00003b970) Data frame received for 1\nI0512 12:13:14.944792 4166 log.go:172] (0xc000832f00) (1) Data frame handling\nI0512 12:13:14.944826 4166 log.go:172] (0xc000832f00) (1) Data frame sent\nI0512 12:13:14.944844 4166 log.go:172] (0xc00003b970) (0xc000832f00) Stream removed, broadcasting: 1\nI0512 12:13:14.944869 4166 log.go:172] (0xc00003b970) Go away received\nI0512 12:13:14.945357 4166 log.go:172] (0xc00003b970) (0xc000832f00) Stream removed, broadcasting: 1\nI0512 12:13:14.945387 4166 log.go:172] (0xc00003b970) (0xc00099c0a0) Stream removed, broadcasting: 3\nI0512 12:13:14.945400 4166 log.go:172] (0xc00003b970) (0xc00060bcc0) Stream removed, broadcasting: 5\n" May 12 12:13:14.950: INFO: stdout: "" May 12 12:13:14.950: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9703 execpod-affinityqw2rj -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31654' May 12 12:13:15.167: INFO: stderr: "I0512 12:13:15.074768 4184 log.go:172] (0xc000ab0f20) (0xc0003b5f40) Create stream\nI0512 12:13:15.074812 4184 log.go:172] (0xc000ab0f20) (0xc0003b5f40) Stream added, broadcasting: 1\nI0512 12:13:15.076769 4184 log.go:172] (0xc000ab0f20) Reply frame received for 1\nI0512 12:13:15.076802 4184 log.go:172] (0xc000ab0f20) (0xc00023c320) Create stream\nI0512 12:13:15.076811 4184 log.go:172] (0xc000ab0f20) (0xc00023c320) Stream added, broadcasting: 3\nI0512 12:13:15.077705 4184 log.go:172] (0xc000ab0f20) Reply frame received for 3\nI0512 12:13:15.077737 4184 log.go:172] (0xc000ab0f20) (0xc00013b5e0) Create stream\nI0512 12:13:15.077751 4184 log.go:172] (0xc000ab0f20) (0xc00013b5e0) Stream added, broadcasting: 5\nI0512 12:13:15.078352 4184 log.go:172] (0xc000ab0f20) Reply frame received for 5\nI0512 12:13:15.162606 4184 log.go:172] (0xc000ab0f20) Data frame received for 5\nI0512 12:13:15.162627 4184 log.go:172] (0xc00013b5e0) (5) Data frame handling\nI0512 12:13:15.162639 4184 log.go:172] (0xc00013b5e0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 31654\nConnection to 172.17.0.12 31654 port [tcp/31654] succeeded!\nI0512 12:13:15.162652 4184 log.go:172] (0xc000ab0f20) Data frame received for 3\nI0512 12:13:15.162656 4184 log.go:172] (0xc00023c320) (3) Data frame handling\nI0512 12:13:15.162761 4184 log.go:172] (0xc000ab0f20) Data frame received for 5\nI0512 12:13:15.162779 4184 log.go:172] (0xc00013b5e0) (5) Data frame handling\nI0512 12:13:15.163720 4184 log.go:172] (0xc000ab0f20) Data frame received for 1\nI0512 12:13:15.163740 4184 log.go:172] (0xc0003b5f40) (1) Data frame handling\nI0512 12:13:15.163753 4184 log.go:172] (0xc0003b5f40) (1) Data frame sent\nI0512 12:13:15.163766 4184 log.go:172] (0xc000ab0f20) (0xc0003b5f40) Stream removed, broadcasting: 1\nI0512 12:13:15.163778 4184 log.go:172] (0xc000ab0f20) Go away received\nI0512 12:13:15.164075 4184 log.go:172] (0xc000ab0f20) (0xc0003b5f40) Stream removed, broadcasting: 1\nI0512 12:13:15.164087 4184 log.go:172] (0xc000ab0f20) (0xc00023c320) Stream removed, broadcasting: 3\nI0512 12:13:15.164092 4184 log.go:172] (0xc000ab0f20) (0xc00013b5e0) Stream removed, broadcasting: 5\n" May 12 12:13:15.167: INFO: stdout: "" May 12 12:13:15.167: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9703 execpod-affinityqw2rj -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31654/ ; done' May 12 12:13:15.431: INFO: stderr: "I0512 12:13:15.291254 4203 log.go:172] (0xc000a671e0) (0xc000827cc0) Create stream\nI0512 12:13:15.291344 4203 log.go:172] (0xc000a671e0) (0xc000827cc0) Stream added, broadcasting: 1\nI0512 12:13:15.295077 4203 log.go:172] (0xc000a671e0) Reply frame received for 1\nI0512 12:13:15.295116 4203 log.go:172] (0xc000a671e0) (0xc0006c65a0) Create stream\nI0512 12:13:15.295126 4203 log.go:172] (0xc000a671e0) (0xc0006c65a0) Stream added, broadcasting: 3\nI0512 12:13:15.295768 4203 log.go:172] (0xc000a671e0) Reply frame received for 3\nI0512 12:13:15.295814 4203 log.go:172] (0xc000a671e0) (0xc0006a4280) Create stream\nI0512 12:13:15.295832 4203 log.go:172] (0xc000a671e0) (0xc0006a4280) Stream added, broadcasting: 5\nI0512 12:13:15.296523 4203 log.go:172] (0xc000a671e0) Reply frame received for 5\nI0512 
12:13:15.348697 4203 log.go:172] (0xc000a671e0) Data frame received for 3\nI0512 12:13:15.348731 4203 log.go:172] (0xc000a671e0) Data frame received for 5\nI0512 12:13:15.348760 4203 log.go:172] (0xc0006a4280) (5) Data frame handling\nI0512 12:13:15.348771 4203 log.go:172] (0xc0006a4280) (5) Data frame sent\nI0512 12:13:15.348780 4203 log.go:172] (0xc0006c65a0) (3) Data frame handling\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31654/\nI0512 12:13:15.348787 4203 log.go:172] (0xc0006c65a0) (3) Data frame sent\nI0512 12:13:15.354704 4203 log.go:172] (0xc000a671e0) Data frame received for 3\nI0512 12:13:15.354719 4203 log.go:172] (0xc0006c65a0) (3) Data frame handling\nI0512 12:13:15.354727 4203 log.go:172] (0xc0006c65a0) (3) Data frame sent\nI0512 12:13:15.355053 4203 log.go:172] (0xc000a671e0) Data frame received for 3\nI0512 12:13:15.355072 4203 log.go:172] (0xc0006c65a0) (3) Data frame handling\nI0512 12:13:15.355080 4203 log.go:172] (0xc0006c65a0) (3) Data frame sent\nI0512 12:13:15.355089 4203 log.go:172] (0xc000a671e0) Data frame received for 5\nI0512 12:13:15.355098 4203 log.go:172] (0xc0006a4280) (5) Data frame handling\nI0512 12:13:15.355105 4203 log.go:172] (0xc0006a4280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31654/\nI0512 12:13:15.359292 4203 log.go:172] (0xc000a671e0) Data frame received for 3\nI0512 12:13:15.359311 4203 log.go:172] (0xc0006c65a0) (3) Data frame handling\nI0512 12:13:15.359328 4203 log.go:172] (0xc0006c65a0) (3) Data frame sent\nI0512 12:13:15.359663 4203 log.go:172] (0xc000a671e0) Data frame received for 3\nI0512 12:13:15.359686 4203 log.go:172] (0xc0006c65a0) (3) Data frame handling\nI0512 12:13:15.359699 4203 log.go:172] (0xc0006c65a0) (3) Data frame sent\nI0512 12:13:15.359716 4203 log.go:172] (0xc000a671e0) Data frame received for 5\nI0512 12:13:15.359723 4203 log.go:172] (0xc0006a4280) (5) Data frame handling\nI0512 12:13:15.359730 4203 log.go:172] (0xc0006a4280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31654/\nI0512 12:13:15.363872 4203 log.go:172] (0xc000a671e0) Data frame received for 3\nI0512 12:13:15.363906 4203 log.go:172] (0xc0006c65a0) (3) Data frame handling\nI0512 12:13:15.363927 4203 log.go:172] (0xc0006c65a0) (3) Data frame sent\nI0512 12:13:15.364254 4203 log.go:172] (0xc000a671e0) Data frame received for 3\nI0512 12:13:15.364271 4203 log.go:172] (0xc000a671e0) Data frame received for 5\nI0512 12:13:15.364286 4203 log.go:172] (0xc0006a4280) (5) Data frame handling\nI0512 12:13:15.364294 4203 log.go:172] (0xc0006a4280) (5) Data frame sent\nI0512 12:13:15.364298 4203 log.go:172] (0xc000a671e0) Data frame received for 5\nI0512 12:13:15.364305 4203 log.go:172] (0xc0006a4280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31654/\nI0512 12:13:15.364315 4203 log.go:172] (0xc0006a4280) (5) Data frame sent\nI0512 12:13:15.364321 4203 log.go:172] (0xc0006c65a0) (3) Data frame handling\nI0512 12:13:15.364326 4203 log.go:172] (0xc0006c65a0) (3) Data frame sent\nI0512 12:13:15.368904 4203 log.go:172] (0xc000a671e0) Data frame received for 3\nI0512 12:13:15.368937 4203 log.go:172] (0xc0006c65a0) (3) Data frame handling\nI0512 12:13:15.368952 4203 log.go:172] (0xc0006c65a0) (3) Data frame sent\nI0512 12:13:15.369455 4203 log.go:172] (0xc000a671e0) Data frame received for 5\nI0512 12:13:15.369483 4203 log.go:172] (0xc0006a4280) (5) Data frame handling\nI0512 12:13:15.369506 4203 log.go:172] (0xc0006a4280) (5) 
Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31654/\nI0512 12:13:15.369537 4203 log.go:172] (0xc000a671e0) Data frame received for 3\nI0512 12:13:15.369550 4203 log.go:172] (0xc0006c65a0) (3) Data frame handling\nI0512 12:13:15.369560 4203 log.go:172] (0xc0006c65a0) (3) Data frame sent\nI0512 12:13:15.374224 4203 log.go:172] (0xc000a671e0) Data frame received for 3\nI0512 12:13:15.374251 4203 log.go:172] (0xc0006c65a0) (3) Data frame handling\nI0512 12:13:15.374277 4203 log.go:172] (0xc0006c65a0) (3) Data frame sent\nI0512 12:13:15.374548 4203 log.go:172] (0xc000a671e0) Data frame received for 3\nI0512 12:13:15.374574 4203 log.go:172] (0xc0006c65a0) (3) Data frame handling\nI0512 12:13:15.374591 4203 log.go:172] (0xc0006c65a0) (3) Data frame sent\nI0512 12:13:15.374610 4203 log.go:172] (0xc000a671e0) Data frame received for 5\nI0512 12:13:15.374619 4203 log.go:172] (0xc0006a4280) (5) Data frame handling\nI0512 12:13:15.374636 4203 log.go:172] (0xc0006a4280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31654/\nI0512 12:13:15.378094 4203 log.go:172] (0xc000a671e0) Data frame received for 3\nI0512 12:13:15.378111 4203 log.go:172] (0xc0006c65a0) (3) Data frame handling\nI0512 12:13:15.378132 4203 log.go:172] (0xc0006c65a0) (3) Data frame sent\nI0512 12:13:15.378428 4203 log.go:172] (0xc000a671e0) Data frame received for 5\nI0512 12:13:15.378447 4203 log.go:172] (0xc0006a4280) (5) Data frame handling\nI0512 12:13:15.378460 4203 log.go:172] (0xc0006a4280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31654/\nI0512 12:13:15.378554 4203 log.go:172] (0xc000a671e0) Data frame received for 3\nI0512 12:13:15.378561 4203 log.go:172] (0xc0006c65a0) (3) Data frame handling\nI0512 12:13:15.378571 4203 log.go:172] (0xc0006c65a0) (3) Data frame sent\nI0512 12:13:15.382903 4203 log.go:172] (0xc000a671e0) Data frame received for 3\nI0512 12:13:15.382921 4203 log.go:172] (0xc0006c65a0) (3) Data frame handling\nI0512 12:13:15.382937 4203 log.go:172] (0xc0006c65a0) (3) Data frame sent\nI0512 12:13:15.383362 4203 log.go:172] (0xc000a671e0) Data frame received for 5\nI0512 12:13:15.383372 4203 log.go:172] (0xc0006a4280) (5) Data frame handling\nI0512 12:13:15.383381 4203 log.go:172] (0xc0006a4280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31654/\nI0512 12:13:15.383465 4203 log.go:172] (0xc000a671e0) Data frame received for 3\nI0512 12:13:15.383482 4203 log.go:172] (0xc0006c65a0) (3) Data frame handling\nI0512 12:13:15.383497 4203 log.go:172] (0xc0006c65a0) (3) Data frame sent\nI0512 12:13:15.388721 4203 log.go:172] (0xc000a671e0) Data frame received for 3\nI0512 12:13:15.388758 4203 log.go:172] (0xc0006c65a0) (3) Data frame handling\nI0512 12:13:15.388778 4203 log.go:172] (0xc0006c65a0) (3) Data frame sent\nI0512 12:13:15.389302 4203 log.go:172] (0xc000a671e0) Data frame received for 5\nI0512 12:13:15.389325 4203 log.go:172] (0xc0006a4280) (5) Data frame handling\nI0512 12:13:15.389338 4203 log.go:172] (0xc0006a4280) (5) Data frame sent\nI0512 12:13:15.389348 4203 log.go:172] (0xc000a671e0) Data frame received for 5\nI0512 12:13:15.389367 4203 log.go:172] (0xc0006a4280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31654/\nI0512 12:13:15.389387 4203 log.go:172] (0xc0006a4280) (5) Data frame sent\nI0512 12:13:15.389404 4203 log.go:172] (0xc000a671e0) Data frame received for 3\nI0512 12:13:15.389426 4203 log.go:172] (0xc0006c65a0) (3) 
Data frame handling\nI0512 12:13:15.389449 4203 log.go:172] (0xc0006c65a0) (3) Data frame sent\nI0512 12:13:15.393897 4203 log.go:172] (0xc000a671e0) Data frame received for 3\nI0512 12:13:15.393914 4203 log.go:172] (0xc0006c65a0) (3) Data frame handling\nI0512 12:13:15.393927 4203 log.go:172] (0xc0006c65a0) (3) Data frame sent\nI0512 12:13:15.394370 4203 log.go:172] (0xc000a671e0) Data frame received for 3\nI0512 12:13:15.394396 4203 log.go:172] (0xc0006c65a0) (3) Data frame handling\nI0512 12:13:15.394411 4203 log.go:172] (0xc0006c65a0) (3) Data frame sent\nI0512 12:13:15.394440 4203 log.go:172] (0xc000a671e0) Data frame received for 5\nI0512 12:13:15.394458 4203 log.go:172] (0xc0006a4280) (5) Data frame handling\nI0512 12:13:15.394478 4203 log.go:172] (0xc0006a4280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31654/\nI0512 12:13:15.399710 4203 log.go:172] (0xc000a671e0) Data frame received for 3\nI0512 12:13:15.399729 4203 log.go:172] (0xc0006c65a0) (3) Data frame handling\nI0512 12:13:15.399747 4203 log.go:172] (0xc0006c65a0) (3) Data frame sent\nI0512 12:13:15.400221 4203 log.go:172] (0xc000a671e0) Data frame received for 3\nI0512 12:13:15.400241 4203 log.go:172] (0xc0006c65a0) (3) Data frame handling\nI0512 12:13:15.400260 4203 log.go:172] (0xc000a671e0) Data frame received for 5\nI0512 12:13:15.400307 4203 log.go:172] (0xc0006a4280) (5) Data frame handling\nI0512 12:13:15.400329 4203 log.go:172] (0xc0006a4280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31654/\nI0512 12:13:15.400360 4203 log.go:172] (0xc0006c65a0) (3) Data frame sent\nI0512 12:13:15.405069 4203 log.go:172] (0xc000a671e0) Data frame received for 3\nI0512 12:13:15.405093 4203 log.go:172] (0xc0006c65a0) (3) Data frame handling\nI0512 12:13:15.405287 4203 log.go:172] (0xc0006c65a0) (3) Data frame sent\nI0512 12:13:15.405821 4203 log.go:172] (0xc000a671e0) Data frame received for 3\nI0512 12:13:15.405855 4203 log.go:172] (0xc0006c65a0) (3) Data frame handling\nI0512 12:13:15.405868 4203 log.go:172] (0xc0006c65a0) (3) Data frame sent\nI0512 12:13:15.405888 4203 log.go:172] (0xc000a671e0) Data frame received for 5\nI0512 12:13:15.405903 4203 log.go:172] (0xc0006a4280) (5) Data frame handling\nI0512 12:13:15.405925 4203 log.go:172] (0xc0006a4280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31654/\nI0512 12:13:15.411232 4203 log.go:172] (0xc000a671e0) Data frame received for 3\nI0512 12:13:15.411252 4203 log.go:172] (0xc0006c65a0) (3) Data frame handling\nI0512 12:13:15.411265 4203 log.go:172] (0xc0006c65a0) (3) Data frame sent\nI0512 12:13:15.411671 4203 log.go:172] (0xc000a671e0) Data frame received for 3\nI0512 12:13:15.411688 4203 log.go:172] (0xc0006c65a0) (3) Data frame handling\nI0512 12:13:15.411696 4203 log.go:172] (0xc0006c65a0) (3) Data frame sent\nI0512 12:13:15.411730 4203 log.go:172] (0xc000a671e0) Data frame received for 5\nI0512 12:13:15.411756 4203 log.go:172] (0xc0006a4280) (5) Data frame handling\nI0512 12:13:15.411771 4203 log.go:172] (0xc0006a4280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31654/\nI0512 12:13:15.415211 4203 log.go:172] (0xc000a671e0) Data frame received for 3\nI0512 12:13:15.415221 4203 log.go:172] (0xc0006c65a0) (3) Data frame handling\nI0512 12:13:15.415229 4203 log.go:172] (0xc0006c65a0) (3) Data frame sent\nI0512 12:13:15.415441 4203 log.go:172] (0xc000a671e0) Data frame received for 3\nI0512 12:13:15.415462 4203 log.go:172] 
(0xc0006c65a0) (3) Data frame handling\nI0512 12:13:15.415479 4203 log.go:172] (0xc0006c65a0) (3) Data frame sent\nI0512 12:13:15.415491 4203 log.go:172] (0xc000a671e0) Data frame received for 5\nI0512 12:13:15.415501 4203 log.go:172] (0xc0006a4280) (5) Data frame handling\nI0512 12:13:15.415510 4203 log.go:172] (0xc0006a4280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31654/\nI0512 12:13:15.418248 4203 log.go:172] (0xc000a671e0) Data frame received for 3\nI0512 12:13:15.418263 4203 log.go:172] (0xc0006c65a0) (3) Data frame handling\nI0512 12:13:15.418283 4203 log.go:172] (0xc0006c65a0) (3) Data frame sent\nI0512 12:13:15.418546 4203 log.go:172] (0xc000a671e0) Data frame received for 3\nI0512 12:13:15.418559 4203 log.go:172] (0xc000a671e0) Data frame received for 5\nI0512 12:13:15.418573 4203 log.go:172] (0xc0006a4280) (5) Data frame handling\nI0512 12:13:15.418581 4203 log.go:172] (0xc0006a4280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31654/\nI0512 12:13:15.418599 4203 log.go:172] (0xc0006c65a0) (3) Data frame handling\nI0512 12:13:15.418606 4203 log.go:172] (0xc0006c65a0) (3) Data frame sent\nI0512 12:13:15.421399 4203 log.go:172] (0xc000a671e0) Data frame received for 3\nI0512 12:13:15.421416 4203 log.go:172] (0xc0006c65a0) (3) Data frame handling\nI0512 12:13:15.421429 4203 log.go:172] (0xc0006c65a0) (3) Data frame sent\nI0512 12:13:15.421689 4203 log.go:172] (0xc000a671e0) Data frame received for 3\nI0512 12:13:15.421700 4203 log.go:172] (0xc0006c65a0) (3) Data frame handling\nI0512 12:13:15.421707 4203 log.go:172] (0xc0006c65a0) (3) Data frame sent\nI0512 12:13:15.421718 4203 log.go:172] (0xc000a671e0) Data frame received for 5\nI0512 12:13:15.421723 4203 log.go:172] (0xc0006a4280) (5) Data frame handling\nI0512 12:13:15.421731 4203 log.go:172] (0xc0006a4280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31654/\nI0512 12:13:15.425989 4203 log.go:172] (0xc000a671e0) Data frame received for 3\nI0512 12:13:15.426003 4203 log.go:172] (0xc0006c65a0) (3) Data frame handling\nI0512 12:13:15.426018 4203 log.go:172] (0xc0006c65a0) (3) Data frame sent\nI0512 12:13:15.426510 4203 log.go:172] (0xc000a671e0) Data frame received for 5\nI0512 12:13:15.426521 4203 log.go:172] (0xc0006a4280) (5) Data frame handling\nI0512 12:13:15.426535 4203 log.go:172] (0xc000a671e0) Data frame received for 3\nI0512 12:13:15.426545 4203 log.go:172] (0xc0006c65a0) (3) Data frame handling\nI0512 12:13:15.427913 4203 log.go:172] (0xc000a671e0) Data frame received for 1\nI0512 12:13:15.427926 4203 log.go:172] (0xc000827cc0) (1) Data frame handling\nI0512 12:13:15.427934 4203 log.go:172] (0xc000827cc0) (1) Data frame sent\nI0512 12:13:15.427946 4203 log.go:172] (0xc000a671e0) (0xc000827cc0) Stream removed, broadcasting: 1\nI0512 12:13:15.427994 4203 log.go:172] (0xc000a671e0) Go away received\nI0512 12:13:15.428172 4203 log.go:172] (0xc000a671e0) (0xc000827cc0) Stream removed, broadcasting: 1\nI0512 12:13:15.428182 4203 log.go:172] (0xc000a671e0) (0xc0006c65a0) Stream removed, broadcasting: 3\nI0512 12:13:15.428189 4203 log.go:172] (0xc000a671e0) (0xc0006a4280) Stream removed, broadcasting: 5\n" May 12 12:13:15.432: INFO: stdout: 
"\naffinity-nodeport-n8sqb\naffinity-nodeport-n8sqb\naffinity-nodeport-n8sqb\naffinity-nodeport-n8sqb\naffinity-nodeport-n8sqb\naffinity-nodeport-n8sqb\naffinity-nodeport-n8sqb\naffinity-nodeport-n8sqb\naffinity-nodeport-n8sqb\naffinity-nodeport-n8sqb\naffinity-nodeport-n8sqb\naffinity-nodeport-n8sqb\naffinity-nodeport-n8sqb\naffinity-nodeport-n8sqb\naffinity-nodeport-n8sqb\naffinity-nodeport-n8sqb" May 12 12:13:15.432: INFO: Received response from host: May 12 12:13:15.432: INFO: Received response from host: affinity-nodeport-n8sqb May 12 12:13:15.432: INFO: Received response from host: affinity-nodeport-n8sqb May 12 12:13:15.432: INFO: Received response from host: affinity-nodeport-n8sqb May 12 12:13:15.432: INFO: Received response from host: affinity-nodeport-n8sqb May 12 12:13:15.432: INFO: Received response from host: affinity-nodeport-n8sqb May 12 12:13:15.432: INFO: Received response from host: affinity-nodeport-n8sqb May 12 12:13:15.432: INFO: Received response from host: affinity-nodeport-n8sqb May 12 12:13:15.432: INFO: Received response from host: affinity-nodeport-n8sqb May 12 12:13:15.432: INFO: Received response from host: affinity-nodeport-n8sqb May 12 12:13:15.432: INFO: Received response from host: affinity-nodeport-n8sqb May 12 12:13:15.432: INFO: Received response from host: affinity-nodeport-n8sqb May 12 12:13:15.432: INFO: Received response from host: affinity-nodeport-n8sqb May 12 12:13:15.432: INFO: Received response from host: affinity-nodeport-n8sqb May 12 12:13:15.432: INFO: Received response from host: affinity-nodeport-n8sqb May 12 12:13:15.432: INFO: Received response from host: affinity-nodeport-n8sqb May 12 12:13:15.432: INFO: Received response from host: affinity-nodeport-n8sqb May 12 12:13:15.432: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-9703, will wait for the garbage collector to delete the pods May 12 12:13:15.804: INFO: Deleting ReplicationController affinity-nodeport took: 211.900863ms May 12 12:13:16.104: INFO: Terminating ReplicationController affinity-nodeport pods took: 300.213329ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:13:25.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9703" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:28.368 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":262,"skipped":4366,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:13:25.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs May 12 12:13:25.497: INFO: Waiting up to 5m0s for pod "pod-70e78771-b1c9-47c3-8c9f-caadae717c6f" in namespace "emptydir-9004" to be "Succeeded or Failed" May 12 12:13:25.527: INFO: Pod "pod-70e78771-b1c9-47c3-8c9f-caadae717c6f": Phase="Pending", Reason="", readiness=false. Elapsed: 30.012405ms May 12 12:13:27.609: INFO: Pod "pod-70e78771-b1c9-47c3-8c9f-caadae717c6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112452214s May 12 12:13:29.612: INFO: Pod "pod-70e78771-b1c9-47c3-8c9f-caadae717c6f": Phase="Running", Reason="", readiness=true. Elapsed: 4.115696057s May 12 12:13:31.616: INFO: Pod "pod-70e78771-b1c9-47c3-8c9f-caadae717c6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.119319995s STEP: Saw pod success May 12 12:13:31.616: INFO: Pod "pod-70e78771-b1c9-47c3-8c9f-caadae717c6f" satisfied condition "Succeeded or Failed" May 12 12:13:31.619: INFO: Trying to get logs from node latest-worker pod pod-70e78771-b1c9-47c3-8c9f-caadae717c6f container test-container: STEP: delete the pod May 12 12:13:31.767: INFO: Waiting for pod pod-70e78771-b1c9-47c3-8c9f-caadae717c6f to disappear May 12 12:13:31.825: INFO: Pod pod-70e78771-b1c9-47c3-8c9f-caadae717c6f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:13:31.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9004" for this suite. 
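The EmptyDir case follows a pattern this run repeats constantly: start a short-lived pod that exercises the volume, wait for phase Succeeded ("Saw pod success"), pull the container log, delete the pod. A sketch of the same 0644-on-tmpfs check, with illustrative names and busybox standing in for the suite's test container:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/sh", "-c", "echo content > /mnt/f && chmod 0644 /mnt/f && stat -c '%a' /mnt/f && grep /mnt /proc/mounts"]
    volumeMounts:
    - name: v
      mountPath: /mnt
  volumes:
  - name: v
    emptyDir:
      medium: Memory             # tmpfs-backed emptyDir
EOF
# Poll until the pod finishes, then read its log (expect "644" and a tmpfs mount line).
until [ "$(kubectl get pod emptydir-0644-demo -o jsonpath='{.status.phase}')" = Succeeded ]; do sleep 1; done
kubectl logs emptydir-0644-demo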
• [SLOW TEST:6.562 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":263,"skipped":4390,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:13:31.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 12 12:13:32.149: INFO: Waiting up to 5m0s for pod "downward-api-a63b651a-06f6-4edf-8085-0bea43c9bf0d" in namespace "downward-api-3688" to be "Succeeded or Failed" May 12 12:13:32.154: INFO: Pod "downward-api-a63b651a-06f6-4edf-8085-0bea43c9bf0d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.529894ms May 12 12:13:34.220: INFO: Pod "downward-api-a63b651a-06f6-4edf-8085-0bea43c9bf0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071272053s May 12 12:13:36.223: INFO: Pod "downward-api-a63b651a-06f6-4edf-8085-0bea43c9bf0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.074741671s STEP: Saw pod success May 12 12:13:36.223: INFO: Pod "downward-api-a63b651a-06f6-4edf-8085-0bea43c9bf0d" satisfied condition "Succeeded or Failed" May 12 12:13:36.226: INFO: Trying to get logs from node latest-worker pod downward-api-a63b651a-06f6-4edf-8085-0bea43c9bf0d container dapi-container: STEP: delete the pod May 12 12:13:36.503: INFO: Waiting for pod downward-api-a63b651a-06f6-4edf-8085-0bea43c9bf0d to disappear May 12 12:13:36.513: INFO: Pod downward-api-a63b651a-06f6-4edf-8085-0bea43c9bf0d no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:13:36.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3688" for this suite. 
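The interesting wrinkle in this Downward API case is that the pod declares no resource limits, yet limits.cpu and limits.memory still resolve: a resourceFieldRef falls back to the node's allocatable capacity when the container sets no limit, hence "default limits ... from node allocatable". Sketch, names illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-defaults-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["/bin/sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_LIMIT'"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu   # no limit declared, so node allocatable is reported
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
EOF
until [ "$(kubectl get pod downward-defaults-demo -o jsonpath='{.status.phase}')" = Succeeded ]; do sleep 1; done
kubectl logs downward-defaults-demo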
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":288,"completed":264,"skipped":4408,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:13:36.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath May 12 12:13:36.636: INFO: Waiting up to 5m0s for pod "var-expansion-d7eec297-1ff3-45fd-89a1-c813b721a11b" in namespace "var-expansion-5871" to be "Succeeded or Failed" May 12 12:13:36.671: INFO: Pod "var-expansion-d7eec297-1ff3-45fd-89a1-c813b721a11b": Phase="Pending", Reason="", readiness=false. Elapsed: 34.328112ms May 12 12:13:38.822: INFO: Pod "var-expansion-d7eec297-1ff3-45fd-89a1-c813b721a11b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.186108146s May 12 12:13:40.826: INFO: Pod "var-expansion-d7eec297-1ff3-45fd-89a1-c813b721a11b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.189936096s STEP: Saw pod success May 12 12:13:40.826: INFO: Pod "var-expansion-d7eec297-1ff3-45fd-89a1-c813b721a11b" satisfied condition "Succeeded or Failed" May 12 12:13:40.829: INFO: Trying to get logs from node latest-worker pod var-expansion-d7eec297-1ff3-45fd-89a1-c813b721a11b container dapi-container: STEP: delete the pod May 12 12:13:40.904: INFO: Waiting for pod var-expansion-d7eec297-1ff3-45fd-89a1-c813b721a11b to disappear May 12 12:13:40.910: INFO: Pod var-expansion-d7eec297-1ff3-45fd-89a1-c813b721a11b no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:13:40.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5871" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":288,"completed":265,"skipped":4446,"failed":0} S ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:13:40.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-34a61497-a533-4fa2-b2b8-0bbc61539a92 STEP: Creating configMap with name cm-test-opt-upd-98312188-1a07-4708-a960-7a69b0f152c4 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-34a61497-a533-4fa2-b2b8-0bbc61539a92 STEP: Updating configmap cm-test-opt-upd-98312188-1a07-4708-a960-7a69b0f152c4 STEP: Creating configMap with name cm-test-opt-create-b78ab0f2-6e6a-443a-882a-59782b1e8fef STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:13:51.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6105" for this suite. • [SLOW TEST:10.458 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":266,"skipped":4447,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:13:51.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server May 12 12:13:51.499: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:13:51.597: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7772" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":288,"completed":267,"skipped":4449,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:13:51.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-0c2374e4-33e5-419d-a398-ecfb4b241d2d STEP: Creating a pod to test consume secrets May 12 12:13:51.812: INFO: Waiting up to 5m0s for pod "pod-secrets-e9c0eeda-4320-4940-a8f0-3b8c38dd8a39" in namespace "secrets-2088" to be "Succeeded or Failed" May 12 12:13:51.815: INFO: Pod "pod-secrets-e9c0eeda-4320-4940-a8f0-3b8c38dd8a39": Phase="Pending", Reason="", readiness=false. Elapsed: 3.431054ms May 12 12:13:54.100: INFO: Pod "pod-secrets-e9c0eeda-4320-4940-a8f0-3b8c38dd8a39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288462055s May 12 12:13:56.118: INFO: Pod "pod-secrets-e9c0eeda-4320-4940-a8f0-3b8c38dd8a39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.305657905s STEP: Saw pod success May 12 12:13:56.118: INFO: Pod "pod-secrets-e9c0eeda-4320-4940-a8f0-3b8c38dd8a39" satisfied condition "Succeeded or Failed" May 12 12:13:56.119: INFO: Trying to get logs from node latest-worker pod pod-secrets-e9c0eeda-4320-4940-a8f0-3b8c38dd8a39 container secret-volume-test: STEP: delete the pod May 12 12:13:56.575: INFO: Waiting for pod pod-secrets-e9c0eeda-4320-4940-a8f0-3b8c38dd8a39 to disappear May 12 12:13:56.740: INFO: Pod pod-secrets-e9c0eeda-4320-4940-a8f0-3b8c38dd8a39 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:13:56.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2088" for this suite. 
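"Consumable in multiple volumes" means one Secret mounted at two paths in the same pod, with both mounts serving identical data. A sketch with illustrative names and busybox in place of the suite's test image:

kubectl create secret generic multi-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-multi-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/secret-1/data-1 /etc/secret-2/data-1"]
    volumeMounts:
    - name: secret-a
      mountPath: /etc/secret-1
      readOnly: true
    - name: secret-b
      mountPath: /etc/secret-2
      readOnly: true
  volumes:
  - name: secret-a
    secret:
      secretName: multi-demo
  - name: secret-b
    secret:
      secretName: multi-demo     # same Secret, second mount point
EOF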
• [SLOW TEST:5.083 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":268,"skipped":4454,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:13:56.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:13:57.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7853" for this suite. STEP: Destroying namespace "nspatchtest-b950159c-7a6c-4aac-b5de-c96971a3697e-7288" for this suite. 
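The Namespace patch case is the smallest of the lot: create a namespace, merge-patch a label onto it, read it back. The same three steps by hand; the label key and value are illustrative:

kubectl create namespace patch-demo
# Merge-patch a label onto the namespace (the suite patches programmatically).
kubectl patch namespace patch-demo --type=merge -p '{"metadata":{"labels":{"testLabel":"testValue"}}}'
# Confirm the label landed.
kubectl get namespace patch-demo --show-labels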
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":288,"completed":269,"skipped":4465,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:13:57.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 12:14:00.208: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 12:14:02.218: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724882440, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724882440, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724882440, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724882439, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 12:14:04.442: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724882440, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724882440, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724882440, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724882439, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 12:14:07.375: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:14:07.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3312" for this suite. STEP: Destroying namespace "webhook-3312-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.035 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":288,"completed":270,"skipped":4479,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:14:07.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service multi-endpoint-test in namespace services-9377 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9377 to expose endpoints map[] May 12 12:14:08.012: INFO: successfully validated that service multi-endpoint-test in namespace services-9377 exposes endpoints map[] (103.001197ms elapsed) STEP: Creating pod pod1 in namespace services-9377 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9377 to expose endpoints map[pod1:[100]] May 12 12:14:11.106: INFO: successfully validated that service multi-endpoint-test in namespace services-9377 exposes endpoints map[pod1:[100]] (3.067230423s elapsed) STEP: Creating pod pod2 in namespace services-9377 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9377 to expose endpoints map[pod1:[100] pod2:[101]] May 12 12:14:15.651: INFO: successfully validated that service multi-endpoint-test in namespace services-9377 
exposes endpoints map[pod1:[100] pod2:[101]] (4.541043407s elapsed) STEP: Deleting pod pod1 in namespace services-9377 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9377 to expose endpoints map[pod2:[101]] May 12 12:14:16.710: INFO: successfully validated that service multi-endpoint-test in namespace services-9377 exposes endpoints map[pod2:[101]] (1.055206204s elapsed) STEP: Deleting pod pod2 in namespace services-9377 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9377 to expose endpoints map[] May 12 12:14:18.196: INFO: successfully validated that service multi-endpoint-test in namespace services-9377 exposes endpoints map[] (1.483098827s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:14:18.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9377" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:10.699 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":288,"completed":271,"skipped":4489,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:14:18.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs May 12 12:14:18.806: INFO: Waiting up to 5m0s for pod "pod-edd95a34-4d63-4190-b39d-c8c2e94a6380" in namespace "emptydir-6842" to be "Succeeded or Failed" May 12 12:14:18.861: INFO: Pod "pod-edd95a34-4d63-4190-b39d-c8c2e94a6380": Phase="Pending", Reason="", readiness=false. Elapsed: 54.694608ms May 12 12:14:20.865: INFO: Pod "pod-edd95a34-4d63-4190-b39d-c8c2e94a6380": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059034892s May 12 12:14:22.869: INFO: Pod "pod-edd95a34-4d63-4190-b39d-c8c2e94a6380": Phase="Running", Reason="", readiness=true. Elapsed: 4.062513575s May 12 12:14:25.028: INFO: Pod "pod-edd95a34-4d63-4190-b39d-c8c2e94a6380": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.22185021s STEP: Saw pod success May 12 12:14:25.028: INFO: Pod "pod-edd95a34-4d63-4190-b39d-c8c2e94a6380" satisfied condition "Succeeded or Failed" May 12 12:14:25.123: INFO: Trying to get logs from node latest-worker2 pod pod-edd95a34-4d63-4190-b39d-c8c2e94a6380 container test-container: STEP: delete the pod May 12 12:14:25.318: INFO: Waiting for pod pod-edd95a34-4d63-4190-b39d-c8c2e94a6380 to disappear May 12 12:14:25.429: INFO: Pod pod-edd95a34-4d63-4190-b39d-c8c2e94a6380 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:14:25.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6842" for this suite. • [SLOW TEST:6.981 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":272,"skipped":4498,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:14:25.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 12 12:14:30.139: INFO: Successfully updated pod "labelsupdate17d16651-d408-4db5-ba03-13baad42f4ca" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:14:34.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9255" for this suite. 
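The Phase="Pending" ... Phase="Succeeded" lines in the EmptyDir test above come from a poll loop that re-reads the pod until it reaches a terminal phase. Below is a minimal client-go sketch of that pattern, assuming client-go v0.18+ and reusing the namespace, pod name, kubeconfig path, and 5m0s timeout from the log; the 2s interval and error handling are illustrative, not the e2e framework's actual helper.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite logs (>>> kubeConfig).
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Re-read the pod every 2s, for up to 5m0s, until it reaches the
	// condition the log calls "Succeeded or Failed".
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := clientset.CoreV1().Pods("emptydir-6842").
			Get(context.TODO(), "pod-edd95a34-4d63-4190-b39d-c8c2e94a6380", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q\n", pod.Name, pod.Status.Phase)
		return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed, nil
	})
	if err != nil {
		panic(err)
	}
}

Each "Elapsed: ..." line in the log above is one iteration of such a loop reporting the phase it observed.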
• [SLOW TEST:8.750 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":273,"skipped":4512,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:14:34.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 12 12:14:34.833: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:14:42.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5521" for this suite. 
• [SLOW TEST:8.222 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":288,"completed":274,"skipped":4528,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:14:42.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 12 12:14:42.627: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 12 12:14:47.631: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 12 12:14:47.631: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 12 12:14:47.702: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-8996 /apis/apps/v1/namespaces/deployment-8996/deployments/test-cleanup-deployment 9eea2356-d06d-4538-9406-42e89971a66b 3811885 1 2020-05-12 12:14:47 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-05-12 12:14:47 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005539798 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} May 12 12:14:47.739: INFO: New ReplicaSet "test-cleanup-deployment-6688745694" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-6688745694 deployment-8996 /apis/apps/v1/namespaces/deployment-8996/replicasets/test-cleanup-deployment-6688745694 d34de185-2f32-47a0-a3bc-e3272eef6530 3811887 1 2020-05-12 12:14:47 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 9eea2356-d06d-4538-9406-42e89971a66b 0xc005539f27 0xc005539f28}] [] [{kube-controller-manager Update apps/v1 2020-05-12 12:14:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9eea2356-d06d-4538-9406-42e89971a66b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 6688745694,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0028ba008 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil
default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 12 12:14:47.739: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 12 12:14:47.739: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-8996 /apis/apps/v1/namespaces/deployment-8996/replicasets/test-cleanup-controller eb960d57-9e2d-4999-b5ed-d9c47df12a54 3811886 1 2020-05-12 12:14:42 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 9eea2356-d06d-4538-9406-42e89971a66b 0xc005539e17 0xc005539e18}] [] [{e2e.test Update apps/v1 2020-05-12 12:14:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-12 12:14:47 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"9eea2356-d06d-4538-9406-42e89971a66b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005539eb8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 12 12:14:47.802: INFO: Pod "test-cleanup-controller-t6xxm" is available: &Pod{ObjectMeta:{test-cleanup-controller-t6xxm test-cleanup-controller- deployment-8996 /api/v1/namespaces/deployment-8996/pods/test-cleanup-controller-t6xxm edafa785-3c4d-40a7-867d-13e14ed5f1b4 3811869 0 2020-05-12 12:14:42 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller eb960d57-9e2d-4999-b5ed-d9c47df12a54 0xc0029a49d7 0xc0029a49d8}] [] [{kube-controller-manager Update v1 2020-05-12 12:14:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"eb960d57-9e2d-4999-b5ed-d9c47df12a54\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-12 12:14:46 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.160\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-49qhs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-49qhs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-49qhs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,Preemption
Policy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:14:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:14:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:14:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:14:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.160,StartTime:2020-05-12 12:14:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-12 12:14:45 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://20d7d34f31f0d4d63f161a2dad24b3c4c501ccdaa39bc3b10c43e7431bd4b27b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.160,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 12:14:47.802: INFO: Pod "test-cleanup-deployment-6688745694-2hg2t" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-6688745694-2hg2t test-cleanup-deployment-6688745694- deployment-8996 /api/v1/namespaces/deployment-8996/pods/test-cleanup-deployment-6688745694-2hg2t 9dc1ca0b-dd9d-4e2c-bfb3-9f7ee7879166 3811893 0 2020-05-12 12:14:47 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-6688745694 d34de185-2f32-47a0-a3bc-e3272eef6530 0xc0029a4b97 0xc0029a4b98}] [] [{kube-controller-manager Update v1 2020-05-12 12:14:47 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d34de185-2f32-47a0-a3bc-e3272eef6530\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-49qhs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-49qhs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-49qhs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 
12:14:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:14:47.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8996" for this suite. • [SLOW TEST:5.458 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":288,"completed":275,"skipped":4556,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:14:47.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0512 12:14:48.848411 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 12 12:14:48.848: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:14:48.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8642" for this suite. 
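The Garbage collector case above turns on a single API detail: the deployment is deleted with deleteOptions.PropagationPolicy set to Orphan, so the garbage collector must leave the owned ReplicaSet behind rather than cascading the delete. A minimal client-go sketch of that call, assuming client-go v0.18+; the function and its arguments are illustrative, and the log does not record the deployment's name.

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteOrphaning deletes a Deployment while orphaning its dependents:
// the owned ReplicaSet survives, which is exactly what the test then
// waits to confirm the GC does not "mistakenly" clean up.
func deleteOrphaning(clientset *kubernetes.Clientset, namespace, name string) error {
	policy := metav1.DeletePropagationOrphan
	return clientset.AppsV1().Deployments(namespace).Delete(
		context.TODO(),
		name,
		metav1.DeleteOptions{PropagationPolicy: &policy},
	)
}

With the default (background) propagation, deleting the Deployment would eventually delete the ReplicaSet too; Orphan instead removes only the owner and clears the dependents' ownerReferences.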
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":288,"completed":276,"skipped":4563,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:14:48.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD May 12 12:14:48.907: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:15:05.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-966" for this suite. • [SLOW TEST:16.280 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":288,"completed":277,"skipped":4595,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:15:05.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 12 12:15:10.057: INFO: Successfully updated pod "annotationupdate821217ed-8564-4770-a7cb-d341741b1e6d" [AfterEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:15:12.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1961" for this suite. • [SLOW TEST:6.993 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":278,"skipped":4623,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:15:12.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-6556 May 12 12:15:16.683: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6556 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 12 12:15:16.978: INFO: stderr: "I0512 12:15:16.888561 4242 log.go:172] (0xc00003a420) (0xc000234e60) Create stream\nI0512 12:15:16.888630 4242 log.go:172] (0xc00003a420) (0xc000234e60) Stream added, broadcasting: 1\nI0512 12:15:16.890534 4242 log.go:172] (0xc00003a420) Reply frame received for 1\nI0512 12:15:16.890585 4242 log.go:172] (0xc00003a420) (0xc000768820) Create stream\nI0512 12:15:16.890598 4242 log.go:172] (0xc00003a420) (0xc000768820) Stream added, broadcasting: 3\nI0512 12:15:16.891387 4242 log.go:172] (0xc00003a420) Reply frame received for 3\nI0512 12:15:16.891414 4242 log.go:172] (0xc00003a420) (0xc0005ee320) Create stream\nI0512 12:15:16.891422 4242 log.go:172] (0xc00003a420) (0xc0005ee320) Stream added, broadcasting: 5\nI0512 12:15:16.892209 4242 log.go:172] (0xc00003a420) Reply frame received for 5\nI0512 12:15:16.970665 4242 log.go:172] (0xc00003a420) Data frame received for 5\nI0512 12:15:16.970682 4242 log.go:172] (0xc0005ee320) (5) Data frame handling\nI0512 12:15:16.970691 4242 log.go:172] (0xc0005ee320) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0512 12:15:16.973787 4242 log.go:172] (0xc00003a420) Data frame received for 3\nI0512 12:15:16.973804 4242 log.go:172] (0xc000768820) (3) Data frame handling\nI0512 12:15:16.973815 4242 log.go:172] (0xc000768820) (3) Data frame sent\nI0512 12:15:16.974291 4242 log.go:172] (0xc00003a420) Data frame received for 3\nI0512 12:15:16.974308 4242 log.go:172] 
(0xc000768820) (3) Data frame handling\nI0512 12:15:16.974321 4242 log.go:172] (0xc00003a420) Data frame received for 5\nI0512 12:15:16.974332 4242 log.go:172] (0xc0005ee320) (5) Data frame handling\nI0512 12:15:16.975412 4242 log.go:172] (0xc00003a420) Data frame received for 1\nI0512 12:15:16.975439 4242 log.go:172] (0xc000234e60) (1) Data frame handling\nI0512 12:15:16.975455 4242 log.go:172] (0xc000234e60) (1) Data frame sent\nI0512 12:15:16.975470 4242 log.go:172] (0xc00003a420) (0xc000234e60) Stream removed, broadcasting: 1\nI0512 12:15:16.975481 4242 log.go:172] (0xc00003a420) Go away received\nI0512 12:15:16.975710 4242 log.go:172] (0xc00003a420) (0xc000234e60) Stream removed, broadcasting: 1\nI0512 12:15:16.975718 4242 log.go:172] (0xc00003a420) (0xc000768820) Stream removed, broadcasting: 3\nI0512 12:15:16.975727 4242 log.go:172] (0xc00003a420) (0xc0005ee320) Stream removed, broadcasting: 5\n" May 12 12:15:16.979: INFO: stdout: "iptables" May 12 12:15:16.979: INFO: proxyMode: iptables May 12 12:15:17.030: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 12 12:15:17.084: INFO: Pod kube-proxy-mode-detector still exists May 12 12:15:19.084: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 12 12:15:19.131: INFO: Pod kube-proxy-mode-detector still exists May 12 12:15:21.084: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 12 12:15:21.101: INFO: Pod kube-proxy-mode-detector still exists May 12 12:15:23.084: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 12 12:15:23.087: INFO: Pod kube-proxy-mode-detector still exists May 12 12:15:25.084: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 12 12:15:25.087: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-6556 STEP: creating replication controller affinity-nodeport-timeout in namespace services-6556 I0512 12:15:25.138700 7 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-6556, replica count: 3 I0512 12:15:28.189086 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 12:15:31.189557 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 12:15:34.189784 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 12 12:15:34.198: INFO: Creating new exec pod May 12 12:15:41.510: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6556 execpod-affinityggff5 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' May 12 12:15:41.726: INFO: stderr: "I0512 12:15:41.637995 4263 log.go:172] (0xc0009b3340) (0xc000aa2280) Create stream\nI0512 12:15:41.638046 4263 log.go:172] (0xc0009b3340) (0xc000aa2280) Stream added, broadcasting: 1\nI0512 12:15:41.641785 4263 log.go:172] (0xc0009b3340) Reply frame received for 1\nI0512 12:15:41.641812 4263 log.go:172] (0xc0009b3340) (0xc000846aa0) Create stream\nI0512 12:15:41.641819 4263 log.go:172] (0xc0009b3340) (0xc000846aa0) Stream added, broadcasting: 3\nI0512 12:15:41.642469 4263 log.go:172] (0xc0009b3340) Reply frame received for 3\nI0512 12:15:41.642515 4263 log.go:172] (0xc0009b3340) 
(0xc000662280) Create stream\nI0512 12:15:41.642534 4263 log.go:172] (0xc0009b3340) (0xc000662280) Stream added, broadcasting: 5\nI0512 12:15:41.643273 4263 log.go:172] (0xc0009b3340) Reply frame received for 5\nI0512 12:15:41.720057 4263 log.go:172] (0xc0009b3340) Data frame received for 5\nI0512 12:15:41.720076 4263 log.go:172] (0xc000662280) (5) Data frame handling\nI0512 12:15:41.720088 4263 log.go:172] (0xc000662280) (5) Data frame sent\nI0512 12:15:41.720093 4263 log.go:172] (0xc0009b3340) Data frame received for 5\nI0512 12:15:41.720097 4263 log.go:172] (0xc000662280) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0512 12:15:41.720109 4263 log.go:172] (0xc000662280) (5) Data frame sent\nI0512 12:15:41.720256 4263 log.go:172] (0xc0009b3340) Data frame received for 5\nI0512 12:15:41.720279 4263 log.go:172] (0xc0009b3340) Data frame received for 3\nI0512 12:15:41.720307 4263 log.go:172] (0xc000846aa0) (3) Data frame handling\nI0512 12:15:41.720334 4263 log.go:172] (0xc000662280) (5) Data frame handling\nI0512 12:15:41.721905 4263 log.go:172] (0xc0009b3340) Data frame received for 1\nI0512 12:15:41.721920 4263 log.go:172] (0xc000aa2280) (1) Data frame handling\nI0512 12:15:41.721929 4263 log.go:172] (0xc000aa2280) (1) Data frame sent\nI0512 12:15:41.721952 4263 log.go:172] (0xc0009b3340) (0xc000aa2280) Stream removed, broadcasting: 1\nI0512 12:15:41.721967 4263 log.go:172] (0xc0009b3340) Go away received\nI0512 12:15:41.722208 4263 log.go:172] (0xc0009b3340) (0xc000aa2280) Stream removed, broadcasting: 1\nI0512 12:15:41.722222 4263 log.go:172] (0xc0009b3340) (0xc000846aa0) Stream removed, broadcasting: 3\nI0512 12:15:41.722228 4263 log.go:172] (0xc0009b3340) (0xc000662280) Stream removed, broadcasting: 5\n" May 12 12:15:41.726: INFO: stdout: "" May 12 12:15:41.726: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6556 execpod-affinityggff5 -- /bin/sh -x -c nc -zv -t -w 2 10.98.226.53 80' May 12 12:15:41.937: INFO: stderr: "I0512 12:15:41.872616 4283 log.go:172] (0xc000aab3f0) (0xc0006dfea0) Create stream\nI0512 12:15:41.872666 4283 log.go:172] (0xc000aab3f0) (0xc0006dfea0) Stream added, broadcasting: 1\nI0512 12:15:41.874660 4283 log.go:172] (0xc000aab3f0) Reply frame received for 1\nI0512 12:15:41.874697 4283 log.go:172] (0xc000aab3f0) (0xc0008288c0) Create stream\nI0512 12:15:41.874711 4283 log.go:172] (0xc000aab3f0) (0xc0008288c0) Stream added, broadcasting: 3\nI0512 12:15:41.875464 4283 log.go:172] (0xc000aab3f0) Reply frame received for 3\nI0512 12:15:41.875488 4283 log.go:172] (0xc000aab3f0) (0xc0008301e0) Create stream\nI0512 12:15:41.875496 4283 log.go:172] (0xc000aab3f0) (0xc0008301e0) Stream added, broadcasting: 5\nI0512 12:15:41.876111 4283 log.go:172] (0xc000aab3f0) Reply frame received for 5\nI0512 12:15:41.931424 4283 log.go:172] (0xc000aab3f0) Data frame received for 3\nI0512 12:15:41.931464 4283 log.go:172] (0xc0008288c0) (3) Data frame handling\nI0512 12:15:41.931518 4283 log.go:172] (0xc000aab3f0) Data frame received for 5\nI0512 12:15:41.931538 4283 log.go:172] (0xc0008301e0) (5) Data frame handling\nI0512 12:15:41.931556 4283 log.go:172] (0xc0008301e0) (5) Data frame sent\nI0512 12:15:41.931569 4283 log.go:172] (0xc000aab3f0) Data frame received for 5\nI0512 12:15:41.931582 4283 log.go:172] (0xc0008301e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.98.226.53 80\nConnection to 10.98.226.53 
80 port [tcp/http] succeeded!\nI0512 12:15:41.932931 4283 log.go:172] (0xc000aab3f0) Data frame received for 1\nI0512 12:15:41.932957 4283 log.go:172] (0xc0006dfea0) (1) Data frame handling\nI0512 12:15:41.932986 4283 log.go:172] (0xc0006dfea0) (1) Data frame sent\nI0512 12:15:41.933009 4283 log.go:172] (0xc000aab3f0) (0xc0006dfea0) Stream removed, broadcasting: 1\nI0512 12:15:41.933061 4283 log.go:172] (0xc000aab3f0) Go away received\nI0512 12:15:41.933573 4283 log.go:172] (0xc000aab3f0) (0xc0006dfea0) Stream removed, broadcasting: 1\nI0512 12:15:41.933601 4283 log.go:172] (0xc000aab3f0) (0xc0008288c0) Stream removed, broadcasting: 3\nI0512 12:15:41.933614 4283 log.go:172] (0xc000aab3f0) (0xc0008301e0) Stream removed, broadcasting: 5\n" May 12 12:15:41.937: INFO: stdout: "" May 12 12:15:41.937: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6556 execpod-affinityggff5 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30967' May 12 12:15:42.246: INFO: stderr: "I0512 12:15:42.154345 4303 log.go:172] (0xc0008e0bb0) (0xc0009c83c0) Create stream\nI0512 12:15:42.154412 4303 log.go:172] (0xc0008e0bb0) (0xc0009c83c0) Stream added, broadcasting: 1\nI0512 12:15:42.158931 4303 log.go:172] (0xc0008e0bb0) Reply frame received for 1\nI0512 12:15:42.158970 4303 log.go:172] (0xc0008e0bb0) (0xc0006fbf40) Create stream\nI0512 12:15:42.158982 4303 log.go:172] (0xc0008e0bb0) (0xc0006fbf40) Stream added, broadcasting: 3\nI0512 12:15:42.159736 4303 log.go:172] (0xc0008e0bb0) Reply frame received for 3\nI0512 12:15:42.159760 4303 log.go:172] (0xc0008e0bb0) (0xc000656280) Create stream\nI0512 12:15:42.159770 4303 log.go:172] (0xc0008e0bb0) (0xc000656280) Stream added, broadcasting: 5\nI0512 12:15:42.160434 4303 log.go:172] (0xc0008e0bb0) Reply frame received for 5\nI0512 12:15:42.239336 4303 log.go:172] (0xc0008e0bb0) Data frame received for 3\nI0512 12:15:42.239386 4303 log.go:172] (0xc0006fbf40) (3) Data frame handling\nI0512 12:15:42.239438 4303 log.go:172] (0xc0008e0bb0) Data frame received for 5\nI0512 12:15:42.239656 4303 log.go:172] (0xc000656280) (5) Data frame handling\nI0512 12:15:42.239755 4303 log.go:172] (0xc000656280) (5) Data frame sent\nI0512 12:15:42.239789 4303 log.go:172] (0xc0008e0bb0) Data frame received for 5\nI0512 12:15:42.239804 4303 log.go:172] (0xc000656280) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30967\nConnection to 172.17.0.13 30967 port [tcp/30967] succeeded!\nI0512 12:15:42.241713 4303 log.go:172] (0xc0008e0bb0) Data frame received for 1\nI0512 12:15:42.241744 4303 log.go:172] (0xc0009c83c0) (1) Data frame handling\nI0512 12:15:42.241763 4303 log.go:172] (0xc0009c83c0) (1) Data frame sent\nI0512 12:15:42.241797 4303 log.go:172] (0xc0008e0bb0) (0xc0009c83c0) Stream removed, broadcasting: 1\nI0512 12:15:42.241818 4303 log.go:172] (0xc0008e0bb0) Go away received\nI0512 12:15:42.242156 4303 log.go:172] (0xc0008e0bb0) (0xc0009c83c0) Stream removed, broadcasting: 1\nI0512 12:15:42.242173 4303 log.go:172] (0xc0008e0bb0) (0xc0006fbf40) Stream removed, broadcasting: 3\nI0512 12:15:42.242182 4303 log.go:172] (0xc0008e0bb0) (0xc000656280) Stream removed, broadcasting: 5\n" May 12 12:15:42.246: INFO: stdout: "" May 12 12:15:42.246: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6556 execpod-affinityggff5 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30967' May 12 12:15:42.460: INFO: stderr: "I0512 12:15:42.379789 4321 
log.go:172] (0xc0009640b0) (0xc0005910e0) Create stream\nI0512 12:15:42.379864 4321 log.go:172] (0xc0009640b0) (0xc0005910e0) Stream added, broadcasting: 1\nI0512 12:15:42.381983 4321 log.go:172] (0xc0009640b0) Reply frame received for 1\nI0512 12:15:42.382037 4321 log.go:172] (0xc0009640b0) (0xc00054ac80) Create stream\nI0512 12:15:42.382052 4321 log.go:172] (0xc0009640b0) (0xc00054ac80) Stream added, broadcasting: 3\nI0512 12:15:42.383146 4321 log.go:172] (0xc0009640b0) Reply frame received for 3\nI0512 12:15:42.383194 4321 log.go:172] (0xc0009640b0) (0xc00035c8c0) Create stream\nI0512 12:15:42.383206 4321 log.go:172] (0xc0009640b0) (0xc00035c8c0) Stream added, broadcasting: 5\nI0512 12:15:42.384086 4321 log.go:172] (0xc0009640b0) Reply frame received for 5\nI0512 12:15:42.454051 4321 log.go:172] (0xc0009640b0) Data frame received for 3\nI0512 12:15:42.454090 4321 log.go:172] (0xc00054ac80) (3) Data frame handling\nI0512 12:15:42.454134 4321 log.go:172] (0xc0009640b0) Data frame received for 5\nI0512 12:15:42.454149 4321 log.go:172] (0xc00035c8c0) (5) Data frame handling\nI0512 12:15:42.454156 4321 log.go:172] (0xc00035c8c0) (5) Data frame sent\nI0512 12:15:42.454162 4321 log.go:172] (0xc0009640b0) Data frame received for 5\nI0512 12:15:42.454165 4321 log.go:172] (0xc00035c8c0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30967\nConnection to 172.17.0.12 30967 port [tcp/30967] succeeded!\nI0512 12:15:42.455523 4321 log.go:172] (0xc0009640b0) Data frame received for 1\nI0512 12:15:42.455549 4321 log.go:172] (0xc0005910e0) (1) Data frame handling\nI0512 12:15:42.455563 4321 log.go:172] (0xc0005910e0) (1) Data frame sent\nI0512 12:15:42.455577 4321 log.go:172] (0xc0009640b0) (0xc0005910e0) Stream removed, broadcasting: 1\nI0512 12:15:42.455590 4321 log.go:172] (0xc0009640b0) Go away received\nI0512 12:15:42.455900 4321 log.go:172] (0xc0009640b0) (0xc0005910e0) Stream removed, broadcasting: 1\nI0512 12:15:42.455914 4321 log.go:172] (0xc0009640b0) (0xc00054ac80) Stream removed, broadcasting: 3\nI0512 12:15:42.455919 4321 log.go:172] (0xc0009640b0) (0xc00035c8c0) Stream removed, broadcasting: 5\n" May 12 12:15:42.460: INFO: stdout: "" May 12 12:15:42.460: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6556 execpod-affinityggff5 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:30967/ ; done' May 12 12:15:42.756: INFO: stderr: "I0512 12:15:42.588083 4341 log.go:172] (0xc0000e8420) (0xc0003e7a40) Create stream\nI0512 12:15:42.588127 4341 log.go:172] (0xc0000e8420) (0xc0003e7a40) Stream added, broadcasting: 1\nI0512 12:15:42.590152 4341 log.go:172] (0xc0000e8420) Reply frame received for 1\nI0512 12:15:42.590185 4341 log.go:172] (0xc0000e8420) (0xc00034ce60) Create stream\nI0512 12:15:42.590194 4341 log.go:172] (0xc0000e8420) (0xc00034ce60) Stream added, broadcasting: 3\nI0512 12:15:42.590997 4341 log.go:172] (0xc0000e8420) Reply frame received for 3\nI0512 12:15:42.591025 4341 log.go:172] (0xc0000e8420) (0xc0000f3900) Create stream\nI0512 12:15:42.591043 4341 log.go:172] (0xc0000e8420) (0xc0000f3900) Stream added, broadcasting: 5\nI0512 12:15:42.591940 4341 log.go:172] (0xc0000e8420) Reply frame received for 5\nI0512 12:15:42.669213 4341 log.go:172] (0xc0000e8420) Data frame received for 3\nI0512 12:15:42.669259 4341 log.go:172] (0xc00034ce60) (3) Data frame handling\nI0512 12:15:42.669275 4341 log.go:172] (0xc00034ce60) (3) Data frame sent\nI0512 
12:15:42.669286 4341 log.go:172] (0xc0000e8420) Data frame received for 3\nI0512 12:15:42.669296 4341 log.go:172] (0xc00034ce60) (3) Data frame handling\nI0512 12:15:42.669327 4341 log.go:172] (0xc00034ce60) (3) Data frame sent\nI0512 12:15:42.669349 4341 log.go:172] (0xc0000e8420) Data frame received for 5\nI0512 12:15:42.669382 4341 log.go:172] (0xc0000f3900) (5) Data frame handling\nI0512 12:15:42.669398 4341 log.go:172] (0xc0000f3900) (5) Data frame sent\nI0512 12:15:42.669408 4341 log.go:172] (0xc0000e8420) Data frame received for 5\nI0512 12:15:42.669417 4341 log.go:172] (0xc0000f3900) (5) Data frame handling\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30967/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30967/\nI0512 12:15:42.669436 4341 log.go:172] (0xc0000f3900) (5) Data frame sent\nI0512 12:15:42.672967 4341 log.go:172] (0xc0000e8420) Data frame received for 3\nI0512 12:15:42.672978 4341 log.go:172] (0xc00034ce60) (3) Data frame handling\nI0512 12:15:42.672984 4341 log.go:172] (0xc00034ce60) (3) Data frame sent\nI0512 12:15:42.673898 4341 log.go:172] (0xc0000e8420) Data frame received for 3\nI0512 12:15:42.673910 4341 log.go:172] (0xc00034ce60) (3) Data frame handling\nI0512 12:15:42.673917 4341 log.go:172] (0xc00034ce60) (3) Data frame sent\nI0512 12:15:42.673951 4341 log.go:172] (0xc0000e8420) Data frame received for 5\nI0512 12:15:42.673983 4341 log.go:172] (0xc0000f3900) (5) Data frame handling\nI0512 12:15:42.673999 4341 log.go:172] (0xc0000f3900) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30967/\nI0512 12:15:42.677374 4341 log.go:172] (0xc0000e8420) Data frame received for 3\nI0512 12:15:42.677387 4341 log.go:172] (0xc00034ce60) (3) Data frame handling\nI0512 12:15:42.677395 4341 log.go:172] (0xc00034ce60) (3) Data frame sent\nI0512 12:15:42.677866 4341 log.go:172] (0xc0000e8420) Data frame received for 3\nI0512 12:15:42.677879 4341 log.go:172] (0xc00034ce60) (3) Data frame handling\nI0512 12:15:42.677886 4341 log.go:172] (0xc00034ce60) (3) Data frame sent\nI0512 12:15:42.677895 4341 log.go:172] (0xc0000e8420) Data frame received for 5\nI0512 12:15:42.677904 4341 log.go:172] (0xc0000f3900) (5) Data frame handling\nI0512 12:15:42.677916 4341 log.go:172] (0xc0000f3900) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30967/\nI0512 12:15:42.681260 4341 log.go:172] (0xc0000e8420) Data frame received for 3\nI0512 12:15:42.681277 4341 log.go:172] (0xc00034ce60) (3) Data frame handling\nI0512 12:15:42.681286 4341 log.go:172] (0xc00034ce60) (3) Data frame sent\nI0512 12:15:42.681703 4341 log.go:172] (0xc0000e8420) Data frame received for 3\nI0512 12:15:42.681714 4341 log.go:172] (0xc00034ce60) (3) Data frame handling\nI0512 12:15:42.681720 4341 log.go:172] (0xc00034ce60) (3) Data frame sent\nI0512 12:15:42.681729 4341 log.go:172] (0xc0000e8420) Data frame received for 5\nI0512 12:15:42.681733 4341 log.go:172] (0xc0000f3900) (5) Data frame handling\nI0512 12:15:42.681738 4341 log.go:172] (0xc0000f3900) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30967/\nI0512 12:15:42.687132 4341 log.go:172] (0xc0000e8420) Data frame received for 3\nI0512 12:15:42.687143 4341 log.go:172] (0xc00034ce60) (3) Data frame handling\nI0512 12:15:42.687150 4341 log.go:172] (0xc00034ce60) (3) Data frame sent\nI0512 12:15:42.687820 4341 log.go:172] (0xc0000e8420) Data frame received for 3\nI0512 12:15:42.687857 4341 log.go:172] (0xc00034ce60) (3) Data frame 
handling\nI0512 12:15:42.687870 4341 log.go:172] (0xc00034ce60) (3) Data frame sent\nI0512 12:15:42.687892 4341 log.go:172] (0xc0000e8420) Data frame received for 5\nI0512 12:15:42.687905 4341 log.go:172] (0xc0000f3900) (5) Data frame handling\nI0512 12:15:42.687929 4341 log.go:172] (0xc0000f3900) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30967/\nI0512 12:15:42.691529 4341 log.go:172] (0xc0000e8420) Data frame received for 3\nI0512 12:15:42.691545 4341 log.go:172] (0xc00034ce60) (3) Data frame handling\nI0512 12:15:42.691559 4341 log.go:172] (0xc00034ce60) (3) Data frame sent\nI0512 12:15:42.691908 4341 log.go:172] (0xc0000e8420) Data frame received for 5\nI0512 12:15:42.691923 4341 log.go:172] (0xc0000f3900) (5) Data frame handling\nI0512 12:15:42.691931 4341 log.go:172] (0xc0000f3900) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30967/\nI0512 12:15:42.691942 4341 log.go:172] (0xc0000e8420) Data frame received for 3\nI0512 12:15:42.691948 4341 log.go:172] (0xc00034ce60) (3) Data frame handling\nI0512 12:15:42.691955 4341 log.go:172] (0xc00034ce60) (3) Data frame sent\nI0512 12:15:42.695801 4341 log.go:172] (0xc0000e8420) Data frame received for 3\nI0512 12:15:42.695817 4341 log.go:172] (0xc00034ce60) (3) Data frame handling\nI0512 12:15:42.695829 4341 log.go:172] (0xc00034ce60) (3) Data frame sent\nI0512 12:15:42.696427 4341 log.go:172] (0xc0000e8420) Data frame received for 3\nI0512 12:15:42.696452 4341 log.go:172] (0xc00034ce60) (3) Data frame handling\nI0512 12:15:42.696461 4341 log.go:172] (0xc00034ce60) (3) Data frame sent\nI0512 12:15:42.696491 4341 log.go:172] (0xc0000e8420) Data frame received for 5\nI0512 12:15:42.696501 4341 log.go:172] (0xc0000f3900) (5) Data frame handling\nI0512 12:15:42.696508 4341 log.go:172] (0xc0000f3900) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30967/\nI0512 12:15:42.700876 4341 log.go:172] (0xc0000e8420) Data frame received for 3\nI0512 12:15:42.700895 4341 log.go:172] (0xc00034ce60) (3) Data frame handling\nI0512 12:15:42.700909 4341 log.go:172] (0xc00034ce60) (3) Data frame sent\nI0512 12:15:42.701571 4341 log.go:172] (0xc0000e8420) Data frame received for 3\nI0512 12:15:42.701593 4341 log.go:172] (0xc00034ce60) (3) Data frame handling\nI0512 12:15:42.701607 4341 log.go:172] (0xc00034ce60) (3) Data frame sent\nI0512 12:15:42.701620 4341 log.go:172] (0xc0000e8420) Data frame received for 5\nI0512 12:15:42.701642 4341 log.go:172] (0xc0000f3900) (5) Data frame handling\nI0512 12:15:42.701656 4341 log.go:172] (0xc0000f3900) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30967/\nI0512 12:15:42.705914 4341 log.go:172] (0xc0000e8420) Data frame received for 3\nI0512 12:15:42.705943 4341 log.go:172] (0xc00034ce60) (3) Data frame handling\nI0512 12:15:42.705985 4341 log.go:172] (0xc00034ce60) (3) Data frame sent\nI0512 12:15:42.706414 4341 log.go:172] (0xc0000e8420) Data frame received for 5\nI0512 12:15:42.706442 4341 log.go:172] (0xc0000e8420) Data frame received for 3\nI0512 12:15:42.706482 4341 log.go:172] (0xc00034ce60) (3) Data frame handling\nI0512 12:15:42.706498 4341 log.go:172] (0xc00034ce60) (3) Data frame sent\nI0512 12:15:42.706511 4341 log.go:172] (0xc0000f3900) (5) Data frame handling\nI0512 12:15:42.706530 4341 log.go:172] (0xc0000f3900) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30967/\nI0512 12:15:42.710671 4341 log.go:172] (0xc0000e8420) Data frame received 
for 3\nI0512 12:15:42.710698 4341 log.go:172] (0xc00034ce60) (3) Data frame handling\nI0512 12:15:42.710716 4341 log.go:172] (0xc00034ce60) (3) Data frame sent\nI0512 12:15:42.711336 4341 log.go:172] (0xc0000e8420) Data frame received for 3\nI0512 12:15:42.711367 4341 log.go:172] (0xc00034ce60) (3) Data frame handling\nI0512 12:15:42.711388 4341 log.go:172] (0xc00034ce60) (3) Data frame sent\nI0512 12:15:42.711419 4341 log.go:172] (0xc0000e8420) Data frame received for 5\nI0512 12:15:42.711437 4341 log.go:172] (0xc0000f3900) (5) Data frame handling\nI0512 12:15:42.711464 4341 log.go:172] (0xc0000f3900) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30967/\nI0512 12:15:42.715759 4341 log.go:172] (0xc0000e8420) Data frame received for 3\nI0512 12:15:42.715779 4341 log.go:172] (0xc00034ce60) (3) Data frame handling\nI0512 12:15:42.715796 4341 log.go:172] (0xc00034ce60) (3) Data frame sent\nI0512 12:15:42.716167 4341 log.go:172] (0xc0000e8420) Data frame received for 3\nI0512 12:15:42.716195 4341 log.go:172] (0xc0000e8420) Data frame received for 5\nI0512 12:15:42.716217 4341 log.go:172] (0xc00034ce60) (3) Data frame handling\nI0512 12:15:42.716253 4341 log.go:172] (0xc00034ce60) (3) Data frame sent\nI0512 12:15:42.716274 4341 log.go:172] (0xc0000f3900) (5) Data frame handling\nI0512 12:15:42.716292 4341 log.go:172] (0xc0000f3900) (5) Data frame sent\nI0512 12:15:42.716311 4341 log.go:172] (0xc0000e8420) Data frame received for 5\nI0512 12:15:42.716327 4341 log.go:172] (0xc0000f3900) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30967/\nI0512 12:15:42.716368 4341 log.go:172] (0xc0000f3900) (5) Data frame sent\nI0512 12:15:42.722359 4341 log.go:172] (0xc0000e8420) Data frame received for 3\nI0512 12:15:42.722390 4341 log.go:172] (0xc00034ce60) (3) Data frame handling\nI0512 12:15:42.722411 4341 log.go:172] (0xc00034ce60) (3) Data frame sent\nI0512 12:15:42.723230 4341 log.go:172] (0xc0000e8420) Data frame received for 3\nI0512 12:15:42.723280 4341 log.go:172] (0xc00034ce60) (3) Data frame handling\nI0512 12:15:42.723303 4341 log.go:172] (0xc00034ce60) (3) Data frame sent\nI0512 12:15:42.723323 4341 log.go:172] (0xc0000e8420) Data frame received for 5\nI0512 12:15:42.723336 4341 log.go:172] (0xc0000f3900) (5) Data frame handling\nI0512 12:15:42.723358 4341 log.go:172] (0xc0000f3900) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30967/\nI0512 12:15:42.727551 4341 log.go:172] (0xc0000e8420) Data frame received for 3\nI0512 12:15:42.727591 4341 log.go:172] (0xc00034ce60) (3) Data frame handling\nI0512 12:15:42.727613 4341 log.go:172] (0xc00034ce60) (3) Data frame sent\nI0512 12:15:42.728007 4341 log.go:172] (0xc0000e8420) Data frame received for 5\nI0512 12:15:42.728040 4341 log.go:172] (0xc0000f3900) (5) Data frame handling\nI0512 12:15:42.728062 4341 log.go:172] (0xc0000f3900) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30967/\nI0512 12:15:42.728100 4341 log.go:172] (0xc0000e8420) Data frame received for 3\nI0512 12:15:42.728124 4341 log.go:172] (0xc00034ce60) (3) Data frame handling\nI0512 12:15:42.728143 4341 log.go:172] (0xc00034ce60) (3) Data frame sent\nI0512 12:15:42.734503 4341 log.go:172] (0xc0000e8420) Data frame received for 3\nI0512 12:15:42.734530 4341 log.go:172] (0xc00034ce60) (3) Data frame handling\nI0512 12:15:42.734556 4341 log.go:172] (0xc00034ce60) (3) Data frame sent\nI0512 12:15:42.735199 4341 log.go:172] (0xc0000e8420) Data frame 
received for 3\nI0512 12:15:42.735237 4341 log.go:172] (0xc00034ce60) (3) Data frame handling\nI0512 12:15:42.735255 4341 log.go:172] (0xc00034ce60) (3) Data frame sent\nI0512 12:15:42.735282 4341 log.go:172] (0xc0000e8420) Data frame received for 5\nI0512 12:15:42.735299 4341 log.go:172] (0xc0000f3900) (5) Data frame handling\nI0512 12:15:42.735315 4341 log.go:172] (0xc0000f3900) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30967/\nI0512 12:15:42.739483 4341 log.go:172] (0xc0000e8420) Data frame received for 3\nI0512 12:15:42.739536 4341 log.go:172] (0xc00034ce60) (3) Data frame handling\nI0512 12:15:42.739573 4341 log.go:172] (0xc00034ce60) (3) Data frame sent\nI0512 12:15:42.739979 4341 log.go:172] (0xc0000e8420) Data frame received for 3\nI0512 12:15:42.740004 4341 log.go:172] (0xc0000e8420) Data frame received for 5\nI0512 12:15:42.740027 4341 log.go:172] (0xc0000f3900) (5) Data frame handling\nI0512 12:15:42.740039 4341 log.go:172] (0xc0000f3900) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30967/\nI0512 12:15:42.740051 4341 log.go:172] (0xc00034ce60) (3) Data frame handling\nI0512 12:15:42.740064 4341 log.go:172] (0xc00034ce60) (3) Data frame sent\nI0512 12:15:42.747106 4341 log.go:172] (0xc0000e8420) Data frame received for 3\nI0512 12:15:42.747126 4341 log.go:172] (0xc00034ce60) (3) Data frame handling\nI0512 12:15:42.747163 4341 log.go:172] (0xc00034ce60) (3) Data frame sent\nI0512 12:15:42.747660 4341 log.go:172] (0xc0000e8420) Data frame received for 3\nI0512 12:15:42.747680 4341 log.go:172] (0xc00034ce60) (3) Data frame handling\nI0512 12:15:42.747696 4341 log.go:172] (0xc0000e8420) Data frame received for 5\nI0512 12:15:42.747701 4341 log.go:172] (0xc0000f3900) (5) Data frame handling\nI0512 12:15:42.749612 4341 log.go:172] (0xc0000e8420) Data frame received for 1\nI0512 12:15:42.749641 4341 log.go:172] (0xc0003e7a40) (1) Data frame handling\nI0512 12:15:42.749664 4341 log.go:172] (0xc0003e7a40) (1) Data frame sent\nI0512 12:15:42.749688 4341 log.go:172] (0xc0000e8420) (0xc0003e7a40) Stream removed, broadcasting: 1\nI0512 12:15:42.749802 4341 log.go:172] (0xc0000e8420) Go away received\nI0512 12:15:42.750226 4341 log.go:172] (0xc0000e8420) (0xc0003e7a40) Stream removed, broadcasting: 1\nI0512 12:15:42.750265 4341 log.go:172] (0xc0000e8420) (0xc00034ce60) Stream removed, broadcasting: 3\nI0512 12:15:42.750298 4341 log.go:172] (0xc0000e8420) (0xc0000f3900) Stream removed, broadcasting: 5\n" May 12 12:15:42.756: INFO: stdout: "\naffinity-nodeport-timeout-qwtb6\naffinity-nodeport-timeout-qwtb6\naffinity-nodeport-timeout-qwtb6\naffinity-nodeport-timeout-qwtb6\naffinity-nodeport-timeout-qwtb6\naffinity-nodeport-timeout-qwtb6\naffinity-nodeport-timeout-qwtb6\naffinity-nodeport-timeout-qwtb6\naffinity-nodeport-timeout-qwtb6\naffinity-nodeport-timeout-qwtb6\naffinity-nodeport-timeout-qwtb6\naffinity-nodeport-timeout-qwtb6\naffinity-nodeport-timeout-qwtb6\naffinity-nodeport-timeout-qwtb6\naffinity-nodeport-timeout-qwtb6\naffinity-nodeport-timeout-qwtb6" May 12 12:15:42.756: INFO: Received response from host: May 12 12:15:42.757: INFO: Received response from host: affinity-nodeport-timeout-qwtb6 May 12 12:15:42.757: INFO: Received response from host: affinity-nodeport-timeout-qwtb6 May 12 12:15:42.757: INFO: Received response from host: affinity-nodeport-timeout-qwtb6 May 12 12:15:42.757: INFO: Received response from host: affinity-nodeport-timeout-qwtb6 May 12 12:15:42.757: INFO: Received response from host: 
affinity-nodeport-timeout-qwtb6 May 12 12:15:42.757: INFO: Received response from host: affinity-nodeport-timeout-qwtb6 May 12 12:15:42.757: INFO: Received response from host: affinity-nodeport-timeout-qwtb6 May 12 12:15:42.757: INFO: Received response from host: affinity-nodeport-timeout-qwtb6 May 12 12:15:42.757: INFO: Received response from host: affinity-nodeport-timeout-qwtb6 May 12 12:15:42.757: INFO: Received response from host: affinity-nodeport-timeout-qwtb6 May 12 12:15:42.757: INFO: Received response from host: affinity-nodeport-timeout-qwtb6 May 12 12:15:42.757: INFO: Received response from host: affinity-nodeport-timeout-qwtb6 May 12 12:15:42.757: INFO: Received response from host: affinity-nodeport-timeout-qwtb6 May 12 12:15:42.757: INFO: Received response from host: affinity-nodeport-timeout-qwtb6 May 12 12:15:42.757: INFO: Received response from host: affinity-nodeport-timeout-qwtb6 May 12 12:15:42.757: INFO: Received response from host: affinity-nodeport-timeout-qwtb6 May 12 12:15:42.757: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6556 execpod-affinityggff5 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:30967/' May 12 12:15:42.971: INFO: stderr: "I0512 12:15:42.887285 4361 log.go:172] (0xc000bb0790) (0xc0009ea3c0) Create stream\nI0512 12:15:42.887335 4361 log.go:172] (0xc000bb0790) (0xc0009ea3c0) Stream added, broadcasting: 1\nI0512 12:15:42.891190 4361 log.go:172] (0xc000bb0790) Reply frame received for 1\nI0512 12:15:42.891230 4361 log.go:172] (0xc000bb0790) (0xc000822b40) Create stream\nI0512 12:15:42.891239 4361 log.go:172] (0xc000bb0790) (0xc000822b40) Stream added, broadcasting: 3\nI0512 12:15:42.892212 4361 log.go:172] (0xc000bb0790) Reply frame received for 3\nI0512 12:15:42.892260 4361 log.go:172] (0xc000bb0790) (0xc000834000) Create stream\nI0512 12:15:42.892277 4361 log.go:172] (0xc000bb0790) (0xc000834000) Stream added, broadcasting: 5\nI0512 12:15:42.893085 4361 log.go:172] (0xc000bb0790) Reply frame received for 5\nI0512 12:15:42.959787 4361 log.go:172] (0xc000bb0790) Data frame received for 5\nI0512 12:15:42.959835 4361 log.go:172] (0xc000834000) (5) Data frame handling\nI0512 12:15:42.959868 4361 log.go:172] (0xc000834000) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30967/\nI0512 12:15:42.964627 4361 log.go:172] (0xc000bb0790) Data frame received for 3\nI0512 12:15:42.964652 4361 log.go:172] (0xc000822b40) (3) Data frame handling\nI0512 12:15:42.964679 4361 log.go:172] (0xc000822b40) (3) Data frame sent\nI0512 12:15:42.965442 4361 log.go:172] (0xc000bb0790) Data frame received for 3\nI0512 12:15:42.965532 4361 log.go:172] (0xc000822b40) (3) Data frame handling\nI0512 12:15:42.965563 4361 log.go:172] (0xc000bb0790) Data frame received for 5\nI0512 12:15:42.965624 4361 log.go:172] (0xc000834000) (5) Data frame handling\nI0512 12:15:42.966950 4361 log.go:172] (0xc000bb0790) Data frame received for 1\nI0512 12:15:42.966984 4361 log.go:172] (0xc0009ea3c0) (1) Data frame handling\nI0512 12:15:42.967018 4361 log.go:172] (0xc0009ea3c0) (1) Data frame sent\nI0512 12:15:42.967045 4361 log.go:172] (0xc000bb0790) (0xc0009ea3c0) Stream removed, broadcasting: 1\nI0512 12:15:42.967071 4361 log.go:172] (0xc000bb0790) Go away received\nI0512 12:15:42.967384 4361 log.go:172] (0xc000bb0790) (0xc0009ea3c0) Stream removed, broadcasting: 1\nI0512 12:15:42.967397 4361 log.go:172] (0xc000bb0790) (0xc000822b40) Stream removed, broadcasting: 
3\nI0512 12:15:42.967402 4361 log.go:172] (0xc000bb0790) (0xc000834000) Stream removed, broadcasting: 5\n" May 12 12:15:42.972: INFO: stdout: "affinity-nodeport-timeout-qwtb6" May 12 12:15:57.972: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6556 execpod-affinityggff5 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:30967/' May 12 12:15:58.172: INFO: stderr: "I0512 12:15:58.110084 4381 log.go:172] (0xc000b5d970) (0xc00023adc0) Create stream\nI0512 12:15:58.110127 4381 log.go:172] (0xc000b5d970) (0xc00023adc0) Stream added, broadcasting: 1\nI0512 12:15:58.111767 4381 log.go:172] (0xc000b5d970) Reply frame received for 1\nI0512 12:15:58.111796 4381 log.go:172] (0xc000b5d970) (0xc00023b360) Create stream\nI0512 12:15:58.111805 4381 log.go:172] (0xc000b5d970) (0xc00023b360) Stream added, broadcasting: 3\nI0512 12:15:58.112473 4381 log.go:172] (0xc000b5d970) Reply frame received for 3\nI0512 12:15:58.112508 4381 log.go:172] (0xc000b5d970) (0xc000383c20) Create stream\nI0512 12:15:58.112524 4381 log.go:172] (0xc000b5d970) (0xc000383c20) Stream added, broadcasting: 5\nI0512 12:15:58.113717 4381 log.go:172] (0xc000b5d970) Reply frame received for 5\nI0512 12:15:58.163382 4381 log.go:172] (0xc000b5d970) Data frame received for 5\nI0512 12:15:58.163406 4381 log.go:172] (0xc000383c20) (5) Data frame handling\nI0512 12:15:58.163423 4381 log.go:172] (0xc000383c20) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30967/\nI0512 12:15:58.166962 4381 log.go:172] (0xc000b5d970) Data frame received for 3\nI0512 12:15:58.166987 4381 log.go:172] (0xc00023b360) (3) Data frame handling\nI0512 12:15:58.167007 4381 log.go:172] (0xc00023b360) (3) Data frame sent\nI0512 12:15:58.167382 4381 log.go:172] (0xc000b5d970) Data frame received for 5\nI0512 12:15:58.167407 4381 log.go:172] (0xc000383c20) (5) Data frame handling\nI0512 12:15:58.167666 4381 log.go:172] (0xc000b5d970) Data frame received for 3\nI0512 12:15:58.167700 4381 log.go:172] (0xc00023b360) (3) Data frame handling\nI0512 12:15:58.168511 4381 log.go:172] (0xc000b5d970) Data frame received for 1\nI0512 12:15:58.168532 4381 log.go:172] (0xc00023adc0) (1) Data frame handling\nI0512 12:15:58.168547 4381 log.go:172] (0xc00023adc0) (1) Data frame sent\nI0512 12:15:58.168572 4381 log.go:172] (0xc000b5d970) (0xc00023adc0) Stream removed, broadcasting: 1\nI0512 12:15:58.168584 4381 log.go:172] (0xc000b5d970) Go away received\nI0512 12:15:58.169024 4381 log.go:172] (0xc000b5d970) (0xc00023adc0) Stream removed, broadcasting: 1\nI0512 12:15:58.169045 4381 log.go:172] (0xc000b5d970) (0xc00023b360) Stream removed, broadcasting: 3\nI0512 12:15:58.169066 4381 log.go:172] (0xc000b5d970) (0xc000383c20) Stream removed, broadcasting: 5\n" May 12 12:15:58.172: INFO: stdout: "affinity-nodeport-timeout-pb8f5" May 12 12:15:58.172: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-6556, will wait for the garbage collector to delete the pods May 12 12:15:58.550: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 247.065398ms May 12 12:15:58.850: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 300.193905ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:16:15.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"services-6556" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:63.470 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":279,"skipped":4649,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:16:15.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 12 12:16:15.960: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b03496b3-ae4b-490d-a6dc-78d16b321f59" in namespace "projected-2558" to be "Succeeded or Failed" May 12 12:16:15.978: INFO: Pod "downwardapi-volume-b03496b3-ae4b-490d-a6dc-78d16b321f59": Phase="Pending", Reason="", readiness=false. Elapsed: 17.362112ms May 12 12:16:18.143: INFO: Pod "downwardapi-volume-b03496b3-ae4b-490d-a6dc-78d16b321f59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.182569822s May 12 12:16:20.317: INFO: Pod "downwardapi-volume-b03496b3-ae4b-490d-a6dc-78d16b321f59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.356556254s STEP: Saw pod success May 12 12:16:20.317: INFO: Pod "downwardapi-volume-b03496b3-ae4b-490d-a6dc-78d16b321f59" satisfied condition "Succeeded or Failed" May 12 12:16:20.356: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-b03496b3-ae4b-490d-a6dc-78d16b321f59 container client-container: STEP: delete the pod May 12 12:16:20.540: INFO: Waiting for pod downwardapi-volume-b03496b3-ae4b-490d-a6dc-78d16b321f59 to disappear May 12 12:16:20.547: INFO: Pod downwardapi-volume-b03496b3-ae4b-490d-a6dc-78d16b321f59 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:16:20.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2558" for this suite. 
• [SLOW TEST:5.010 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":280,"skipped":4681,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:16:20.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-6bcb5402-f905-418d-85d6-06fa153b1138 STEP: Creating a pod to test consume secrets May 12 12:16:20.684: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-268abff2-e923-46f5-8e58-7fdd59254a85" in namespace "projected-469" to be "Succeeded or Failed" May 12 12:16:20.742: INFO: Pod "pod-projected-secrets-268abff2-e923-46f5-8e58-7fdd59254a85": Phase="Pending", Reason="", readiness=false. Elapsed: 57.366422ms May 12 12:16:22.844: INFO: Pod "pod-projected-secrets-268abff2-e923-46f5-8e58-7fdd59254a85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.159838533s May 12 12:16:24.847: INFO: Pod "pod-projected-secrets-268abff2-e923-46f5-8e58-7fdd59254a85": Phase="Pending", Reason="", readiness=false. Elapsed: 4.162909826s May 12 12:16:26.856: INFO: Pod "pod-projected-secrets-268abff2-e923-46f5-8e58-7fdd59254a85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.171284344s STEP: Saw pod success May 12 12:16:26.856: INFO: Pod "pod-projected-secrets-268abff2-e923-46f5-8e58-7fdd59254a85" satisfied condition "Succeeded or Failed" May 12 12:16:26.858: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-268abff2-e923-46f5-8e58-7fdd59254a85 container projected-secret-volume-test: STEP: delete the pod May 12 12:16:27.068: INFO: Waiting for pod pod-projected-secrets-268abff2-e923-46f5-8e58-7fdd59254a85 to disappear May 12 12:16:27.152: INFO: Pod pod-projected-secrets-268abff2-e923-46f5-8e58-7fdd59254a85 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:16:27.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-469" for this suite. 
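The projected-secret test that follows exercises the two knobs in its name: the projected volume's defaultMode and the pod-level non-root/fsGroup settings. A sketch of the relevant corev1 fields, using the secret name from the log (the numeric UID, group, and 0440 mode are illustrative assumptions):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        uid := int64(1000)     // assumed non-root UID
        fsGroup := int64(1001) // assumed group applied to volume ownership
        mode := int32(0440)    // assumed defaultMode for the projected files
        spec := corev1.PodSpec{
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid, FSGroup: &fsGroup},
            Volumes: []corev1.Volume{{
                Name: "projected-secret-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        DefaultMode: &mode, // applies to every projected file unless an item overrides it
                        Sources: []corev1.VolumeProjection{{
                            Secret: &corev1.SecretProjection{
                                LocalObjectReference: corev1.LocalObjectReference{
                                    Name: "projected-secret-test-6bcb5402-f905-418d-85d6-06fa153b1138",
                                },
                            },
                        }},
                    },
                },
            }},
        }
        fmt.Printf("%d volume(s), defaultMode %o\n", len(spec.Volumes), mode)
    }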
• [SLOW TEST:6.952 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":281,"skipped":4695,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:16:27.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 12:16:29.444: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 12:16:31.508: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724882589, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724882589, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724882589, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724882589, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 12:16:33.512: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724882589, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724882589, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724882589, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63724882589, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 12:16:36.635: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 12 12:16:36.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1035-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:16:39.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9739" for this suite. STEP: Destroying namespace "webhook-9739-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.904 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":288,"completed":282,"skipped":4717,"failed":0} SSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:16:39.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 12 12:16:39.727: INFO: Creating deployment "test-recreate-deployment" May 12 12:16:39.763: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 12 12:16:39.819: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 12 12:16:41.825: INFO: Waiting deployment "test-recreate-deployment" to complete May 12 12:16:41.827: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724882599, 
loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724882599, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724882600, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724882599, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6d65b9f6d8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 12:16:43.830: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 12 12:16:43.838: INFO: Updating deployment test-recreate-deployment May 12 12:16:43.838: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 12 12:16:44.995: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-5435 /apis/apps/v1/namespaces/deployment-5435/deployments/test-recreate-deployment 07f5d27b-3a22-4a60-8474-e6e89347ceb1 3812769 2 2020-05-12 12:16:39 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-12 12:16:43 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-12 12:16:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004103a48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] 
map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-12 12:16:44 +0000 UTC,LastTransitionTime:2020-05-12 12:16:44 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-05-12 12:16:44 +0000 UTC,LastTransitionTime:2020-05-12 12:16:39 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 12 12:16:44.998: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7 deployment-5435 /apis/apps/v1/namespaces/deployment-5435/replicasets/test-recreate-deployment-d5667d9c7 628b275a-e6c5-4ed5-b27a-79fd9f8b56ae 3812767 1 2020-05-12 12:16:43 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 07f5d27b-3a22-4a60-8474-e6e89347ceb1 0xc004103f50 0xc004103f51}] [] [{kube-controller-manager Update apps/v1 2020-05-12 12:16:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"07f5d27b-3a22-4a60-8474-e6e89347ceb1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004103fc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 12 12:16:44.998: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 12 12:16:44.998: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6d65b9f6d8 deployment-5435 /apis/apps/v1/namespaces/deployment-5435/replicasets/test-recreate-deployment-6d65b9f6d8 144bf695-0c3f-4d87-8c61-173e0eedacc0 3812758 2 2020-05-12 12:16:39 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 07f5d27b-3a22-4a60-8474-e6e89347ceb1 0xc004103e47 0xc004103e48}] [] [{kube-controller-manager Update apps/v1 2020-05-12 12:16:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"07f5d27b-3a22-4a60-8474-e6e89347ceb1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6d65b9f6d8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004103ed8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 12 12:16:45.145: INFO: Pod "test-recreate-deployment-d5667d9c7-ssmf9" is not available: &Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-ssmf9 test-recreate-deployment-d5667d9c7- deployment-5435 /api/v1/namespaces/deployment-5435/pods/test-recreate-deployment-d5667d9c7-ssmf9 fce2835a-5838-4730-88cf-8b016ef5e26c 3812770 0 2020-05-12 12:16:44 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 
628b275a-e6c5-4ed5-b27a-79fd9f8b56ae 0xc0040b64a0 0xc0040b64a1}] [] [{kube-controller-manager Update v1 2020-05-12 12:16:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"628b275a-e6c5-4ed5-b27a-79fd9f8b56ae\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-12 12:16:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2rwfn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2rwfn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2rwfn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:
,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:16:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:16:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:16:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 12:16:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-12 12:16:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:16:45.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5435" for this suite. 
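The Recreate test relies on the strategy visible in the dump above (Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil}): the old ReplicaSet is scaled to zero before the new one creates pods, which is why the old ReplicaSet shows Replicas:*0 while the new pod is still Pending. Expressed in the apps/v1 Go types, this is a one-field setting; note that RollingUpdate parameters must stay nil for this strategy type:

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
    )

    func main() {
        var d appsv1.Deployment
        // Recreate terminates all old pods before any new pod starts, so old and
        // new revisions never overlap (unlike the default RollingUpdate strategy).
        d.Spec.Strategy = appsv1.DeploymentStrategy{
            Type: appsv1.RecreateDeploymentStrategyType,
        }
        fmt.Println(d.Spec.Strategy.Type) // Recreate
    }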
• [SLOW TEST:6.116 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":283,"skipped":4724,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:16:45.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 12:16:48.367: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 12:16:50.378: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724882608, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724882608, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724882608, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724882608, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 12:16:52.382: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724882608, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724882608, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724882608, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724882608, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is 
progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 12:16:55.455: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:16:56.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3496" for this suite. STEP: Destroying namespace "webhook-3496-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.566 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":288,"completed":284,"skipped":4740,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:16:56.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 12 12:16:56.278: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 12 12:16:59.238: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-169 create -f -' May 12 12:17:03.968: INFO: stderr: "" May 12 12:17:03.968: INFO: stdout: "e2e-test-crd-publish-openapi-6092-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 12 12:17:03.968: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-169 delete e2e-test-crd-publish-openapi-6092-crds test-cr' May 12 12:17:04.159: INFO: stderr: "" May 12 12:17:04.159: INFO: stdout: "e2e-test-crd-publish-openapi-6092-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 12 12:17:04.159: 
INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-169 apply -f -' May 12 12:17:04.455: INFO: stderr: "" May 12 12:17:04.455: INFO: stdout: "e2e-test-crd-publish-openapi-6092-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 12 12:17:04.455: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-169 delete e2e-test-crd-publish-openapi-6092-crds test-cr' May 12 12:17:04.596: INFO: stderr: "" May 12 12:17:04.596: INFO: stdout: "e2e-test-crd-publish-openapi-6092-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 12 12:17:04.596: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6092-crds' May 12 12:17:04.869: INFO: stderr: "" May 12 12:17:04.869: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6092-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:17:07.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-169" for this suite. 
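The kubectl explain output above ("preserve-unknown-properties in nested field", "Specification of Waldo") corresponds to a CRD schema that disables pruning inside the embedded spec and status objects, which is why client-side validation accepts requests "with any unknown properties". A sketch of how that flag looks in the apiextensions/v1 Go types (the exact structure of the test CRD's schema is an assumption):

    package main

    import (
        "fmt"

        apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    )

    func main() {
        preserve := true
        schema := apiextv1.JSONSchemaProps{
            Type: "object",
            Properties: map[string]apiextv1.JSONSchemaProps{
                // Unknown fields under spec and status are kept rather than pruned.
                "spec":   {Type: "object", XPreserveUnknownFields: &preserve},
                "status": {Type: "object", XPreserveUnknownFields: &preserve},
            },
        }
        fmt.Println(*schema.Properties["spec"].XPreserveUnknownFields) // true
    }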
• [SLOW TEST:11.671 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":288,"completed":285,"skipped":4748,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:17:07.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1460.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1460.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 12 12:17:15.951: INFO: DNS probes using dns-1460/dns-test-08398a8d-bf16-4bac-ac39-410a1c12dd0c succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:17:15.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1460" for this suite. 
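The DNS probe pods above loop over dig queries for kubernetes.default.svc.cluster.local via both UDP and TCP. From inside any pod that uses the cluster's DNS, the same A-record check is a one-line lookup in Go (this resolves through the pod's /etc/resolv.conf, so it only works in-cluster, not from the test host):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Rough equivalent of the probe's `dig ... kubernetes.default.svc.cluster.local A`.
        addrs, err := net.LookupHost("kubernetes.default.svc.cluster.local")
        if err != nil {
            panic(err)
        }
        fmt.Println(addrs)
    }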
• [SLOW TEST:8.184 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":288,"completed":286,"skipped":4771,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:17:16.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-e65c9758-6f2a-4391-99d5-e36e78e13aef STEP: Creating a pod to test consume configMaps May 12 12:17:16.265: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-289aced2-cb54-4491-83c5-b93f83c8fca7" in namespace "projected-5486" to be "Succeeded or Failed" May 12 12:17:16.287: INFO: Pod "pod-projected-configmaps-289aced2-cb54-4491-83c5-b93f83c8fca7": Phase="Pending", Reason="", readiness=false. Elapsed: 21.309119ms May 12 12:17:18.317: INFO: Pod "pod-projected-configmaps-289aced2-cb54-4491-83c5-b93f83c8fca7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05201024s May 12 12:17:20.559: INFO: Pod "pod-projected-configmaps-289aced2-cb54-4491-83c5-b93f83c8fca7": Phase="Running", Reason="", readiness=true. Elapsed: 4.293532916s May 12 12:17:22.562: INFO: Pod "pod-projected-configmaps-289aced2-cb54-4491-83c5-b93f83c8fca7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.296990949s STEP: Saw pod success May 12 12:17:22.562: INFO: Pod "pod-projected-configmaps-289aced2-cb54-4491-83c5-b93f83c8fca7" satisfied condition "Succeeded or Failed" May 12 12:17:22.565: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-289aced2-cb54-4491-83c5-b93f83c8fca7 container projected-configmap-volume-test: STEP: delete the pod May 12 12:17:22.648: INFO: Waiting for pod pod-projected-configmaps-289aced2-cb54-4491-83c5-b93f83c8fca7 to disappear May 12 12:17:22.657: INFO: Pod pod-projected-configmaps-289aced2-cb54-4491-83c5-b93f83c8fca7 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:17:22.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5486" for this suite. 
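The projected-configMap test that follows maps individual keys to paths and sets a per-item file mode. A sketch of the corresponding corev1 items, reusing the ConfigMap name from the log (the key, path, and 0400 mode are illustrative; the log does not record the actual mapping):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        itemMode := int32(0400) // assumed per-item mode; overrides any volume-level defaultMode
        proj := corev1.ConfigMapProjection{
            LocalObjectReference: corev1.LocalObjectReference{
                Name: "projected-configmap-test-volume-map-e65c9758-6f2a-4391-99d5-e36e78e13aef",
            },
            Items: []corev1.KeyToPath{{
                Key:  "data-1",         // assumed key name
                Path: "path/to/data-1", // assumed target path inside the mount
                Mode: &itemMode,
            }},
        }
        fmt.Printf("%s -> %s (mode %o)\n", proj.Items[0].Key, proj.Items[0].Path, *proj.Items[0].Mode)
    }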
• [SLOW TEST:6.658 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":287,"skipped":4787,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 12:17:22.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-2d2d4083-027b-428f-b677-2493147e5ee2 STEP: Creating a pod to test consume configMaps May 12 12:17:22.724: INFO: Waiting up to 5m0s for pod "pod-configmaps-9fd74134-c0b8-431c-a81c-30e1e26dcf4b" in namespace "configmap-5016" to be "Succeeded or Failed" May 12 12:17:22.742: INFO: Pod "pod-configmaps-9fd74134-c0b8-431c-a81c-30e1e26dcf4b": Phase="Pending", Reason="", readiness=false. Elapsed: 17.83256ms May 12 12:17:24.850: INFO: Pod "pod-configmaps-9fd74134-c0b8-431c-a81c-30e1e26dcf4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12595268s May 12 12:17:26.855: INFO: Pod "pod-configmaps-9fd74134-c0b8-431c-a81c-30e1e26dcf4b": Phase="Running", Reason="", readiness=true. Elapsed: 4.130419699s May 12 12:17:28.858: INFO: Pod "pod-configmaps-9fd74134-c0b8-431c-a81c-30e1e26dcf4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.133835852s STEP: Saw pod success May 12 12:17:28.858: INFO: Pod "pod-configmaps-9fd74134-c0b8-431c-a81c-30e1e26dcf4b" satisfied condition "Succeeded or Failed" May 12 12:17:28.860: INFO: Trying to get logs from node latest-worker pod pod-configmaps-9fd74134-c0b8-431c-a81c-30e1e26dcf4b container configmap-volume-test: STEP: delete the pod May 12 12:17:28.917: INFO: Waiting for pod pod-configmaps-9fd74134-c0b8-431c-a81c-30e1e26dcf4b to disappear May 12 12:17:28.922: INFO: Pod pod-configmaps-9fd74134-c0b8-431c-a81c-30e1e26dcf4b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 12:17:28.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5016" for this suite. 
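The final ConfigMap test mounts one ConfigMap through multiple volumes in the same pod. A sketch of that shape, reusing the ConfigMap name from the log (the volume names are illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        cm := corev1.LocalObjectReference{
            Name: "configmap-test-volume-2d2d4083-027b-428f-b677-2493147e5ee2",
        }
        // Two volumes in one PodSpec, both backed by the same ConfigMap; the
        // container can mount each at a different path.
        vols := []corev1.Volume{
            {Name: "configmap-volume-1", VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{LocalObjectReference: cm}}},
            {Name: "configmap-volume-2", VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{LocalObjectReference: cm}}},
        }
        fmt.Println(len(vols), "volumes referencing", cm.Name)
    }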
• [SLOW TEST:6.264 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":288,"skipped":4788,"failed":0} SSSSSSSSSSSSSSSSSSSMay 12 12:17:28.929: INFO: Running AfterSuite actions on all nodes May 12 12:17:28.929: INFO: Running AfterSuite actions on node 1 May 12 12:17:28.929: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":288,"completed":288,"skipped":4807,"failed":0} Ran 288 of 5095 Specs in 7326.943 seconds SUCCESS! -- 288 Passed | 0 Failed | 0 Pending | 4807 Skipped PASS