I0525 23:38:27.784224 7 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0525 23:38:27.784425 7 e2e.go:129] Starting e2e run "ce190069-f608-423e-bdd0-4e5c7740fb6a" on Ginkgo node 1
{"msg":"Test Suite starting","total":288,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1590449906 - Will randomize all specs
Will run 288 of 5095 specs

May 25 23:38:27.837: INFO: >>> kubeConfig: /root/.kube/config
May 25 23:38:27.839: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 25 23:38:27.870: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 25 23:38:27.901: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 25 23:38:27.901: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 25 23:38:27.901: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 25 23:38:27.907: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 25 23:38:27.907: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 25 23:38:27.907: INFO: e2e test version: v1.19.0-alpha.3.35+3416442e4b7eeb
May 25 23:38:27.909: INFO: kube-apiserver version: v1.18.2
May 25 23:38:27.909: INFO: >>> kubeConfig: /root/.kube/config
May 25 23:38:27.914: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 25 23:38:27.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
May 25 23:38:27.966: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 25 23:38:28.557: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 25 23:38:30.568: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726046708, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726046708, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726046708, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726046708, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 23:38:32.573: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726046708, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726046708, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726046708, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726046708, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 25 23:38:35.648: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 25 23:38:35.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9133-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 25 23:38:36.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8548" for this suite.
STEP: Destroying namespace "webhook-8548-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:9.031 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":288,"completed":1,"skipped":18,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
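The "Registering the mutating webhook ... via the AdmissionRegistration API" step above corresponds to creating a MutatingWebhookConfiguration object. A minimal sketch in Go of what such an object looks like; the configuration name, CA bundle, rule details, and the service name are assumptions for illustration, not values taken from this log:

```go
package main

import (
	"encoding/json"
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	sideEffects := admissionregistrationv1.SideEffectClassNone
	failurePolicy := admissionregistrationv1.Fail
	cfg := &admissionregistrationv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "mutate-custom-resource.example.com"}, // hypothetical name
		Webhooks: []admissionregistrationv1.MutatingWebhook{{
			Name: "mutate-custom-resource.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				// Points at the webhook Service deployed above; the exact
				// service/namespace pairing here is an assumption.
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-8548",
					Name:      "e2e-test-webhook",
				},
				CABundle: []byte("<PEM-encoded CA bundle>"), // placeholder
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{"webhook.example.com"},
					APIVersions: []string{"v1"},
					Resources:   []string{"e2e-test-webhook-9133-crds"},
				},
			}},
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
			FailurePolicy:           &failurePolicy,
		}},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out)) // inspect the object; creating it requires a client, omitted here
}
```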
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 25 23:38:36.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 25 23:38:38.213: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 25 23:38:40.224: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726046718, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726046718, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726046718, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726046718, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 23:38:42.227: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726046718, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726046718, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726046718, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726046718, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 25 23:38:45.294: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 25 23:38:45.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 25 23:38:46.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6943" for this suite.
STEP: Destroying namespace "webhook-6943-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:9.712 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":288,"completed":2,"skipped":52,"failed":0}
SS
------------------------------
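The "should be denied" steps above succeed because the registered webhook answers every matching AdmissionReview with allowed=false. A minimal sketch of such a handler in Go; the path, port, and message are hypothetical, and a real admission webhook must serve TLS with a certificate the registered CA bundle can verify:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"

	admissionv1 "k8s.io/api/admission/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// deny answers any AdmissionReview with Allowed=false, which makes the
// apiserver reject the create/update/delete that triggered the review.
func deny(w http.ResponseWriter, r *http.Request) {
	var review admissionv1.AdmissionReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil || review.Request == nil {
		http.Error(w, "malformed admission review", http.StatusBadRequest)
		return
	}
	review.Response = &admissionv1.AdmissionResponse{
		UID:     review.Request.UID, // the response must echo the request UID
		Allowed: false,
		Result:  &metav1.Status{Message: "custom resource contains disallowed data"}, // hypothetical message
	}
	if err := json.NewEncoder(w).Encode(&review); err != nil {
		log.Println(err)
	}
}

func main() {
	http.HandleFunc("/always-deny", deny) // hypothetical path
	// Plain HTTP only for brevity; a real webhook uses ListenAndServeTLS.
	log.Fatal(http.ListenAndServe(":8444", nil))
}
```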
[sig-network] Services
  should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 25 23:38:46.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-2677
STEP: creating service affinity-clusterip in namespace services-2677
STEP: creating replication controller affinity-clusterip in namespace services-2677
I0525 23:38:46.929869 7 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-2677, replica count: 3
I0525 23:38:49.980273 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0525 23:38:52.980736 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 25 23:38:52.988: INFO: Creating new exec pod
May 25 23:38:58.004: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2677 execpod-affinitysn7lt -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
May 25 23:39:00.743: INFO: stderr: "I0525 23:39:00.593957 30 log.go:172] (0xc00003ad10) (0xc00060ee60) Create stream\nI0525 23:39:00.594023 30 log.go:172] (0xc00003ad10) (0xc00060ee60) Stream added, broadcasting: 1\nI0525 23:39:00.604248 30 log.go:172] (0xc00003ad10) Reply frame received for 1\nI0525 23:39:00.604312 30 log.go:172] (0xc00003ad10) (0xc0005e6be0) Create stream\nI0525 23:39:00.604327 30 log.go:172] (0xc00003ad10) (0xc0005e6be0) Stream added, broadcasting: 3\nI0525 23:39:00.605623 30 log.go:172] (0xc00003ad10) Reply frame received for 3\nI0525 23:39:00.605662 30 log.go:172] (0xc00003ad10) (0xc0005e7b80) Create stream\nI0525 23:39:00.605674 30 log.go:172] (0xc00003ad10) (0xc0005e7b80) Stream added, broadcasting: 5\nI0525 23:39:00.606726 30 log.go:172] (0xc00003ad10) Reply frame received for 5\nI0525 23:39:00.728655 30 log.go:172] (0xc00003ad10) Data frame received for 5\nI0525 23:39:00.728714 30 log.go:172] (0xc0005e7b80) (5) Data frame handling\nI0525 23:39:00.728749 30 log.go:172] (0xc0005e7b80) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip 80\nI0525 23:39:00.731924 30 log.go:172] (0xc00003ad10) Data frame received for 5\nI0525 23:39:00.731951 30 log.go:172] (0xc0005e7b80) (5) Data frame handling\nI0525 23:39:00.732138 30 log.go:172] (0xc0005e7b80) (5) Data frame sent\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0525 23:39:00.732154 30 log.go:172] (0xc00003ad10) Data frame received for 5\nI0525 23:39:00.732193 30 log.go:172] (0xc0005e7b80) (5) Data frame handling\nI0525 23:39:00.732651 30 log.go:172] (0xc00003ad10) Data frame received for 3\nI0525 23:39:00.732663 30 log.go:172] (0xc0005e6be0) (3) Data frame handling\nI0525 23:39:00.734888 30 log.go:172] (0xc00003ad10) Data frame received for 1\nI0525 23:39:00.734905 30 log.go:172] (0xc00060ee60) (1) Data frame handling\nI0525 23:39:00.734913 30 log.go:172] (0xc00060ee60) (1) Data frame sent\nI0525 23:39:00.734921 30 log.go:172] (0xc00003ad10) (0xc00060ee60) Stream removed, broadcasting: 1\nI0525 23:39:00.734931 30 log.go:172] (0xc00003ad10) Go away received\nI0525 23:39:00.735464 30 log.go:172] (0xc00003ad10) (0xc00060ee60) Stream removed, broadcasting: 1\nI0525 23:39:00.735492 30 log.go:172] (0xc00003ad10) (0xc0005e6be0) Stream removed, broadcasting: 3\nI0525 23:39:00.735506 30 log.go:172] (0xc00003ad10) (0xc0005e7b80) Stream removed, broadcasting: 5\n"
May 25 23:39:00.743: INFO: stdout: ""
May 25 23:39:00.744: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2677 execpod-affinitysn7lt -- /bin/sh -x -c nc -zv -t -w 2 10.104.244.57 80'
May 25 23:39:00.980: INFO: stderr: "I0525 23:39:00.894692 62 log.go:172] (0xc000a83550) (0xc0009d6820) Create stream\nI0525
23:39:00.894767 62 log.go:172] (0xc000a83550) (0xc0009d6820) Stream added, broadcasting: 1\nI0525 23:39:00.899455 62 log.go:172] (0xc000a83550) Reply frame received for 1\nI0525 23:39:00.899491 62 log.go:172] (0xc000a83550) (0xc0003ee5a0) Create stream\nI0525 23:39:00.899509 62 log.go:172] (0xc000a83550) (0xc0003ee5a0) Stream added, broadcasting: 3\nI0525 23:39:00.900356 62 log.go:172] (0xc000a83550) Reply frame received for 3\nI0525 23:39:00.900388 62 log.go:172] (0xc000a83550) (0xc0009d6000) Create stream\nI0525 23:39:00.900394 62 log.go:172] (0xc000a83550) (0xc0009d6000) Stream added, broadcasting: 5\nI0525 23:39:00.901274 62 log.go:172] (0xc000a83550) Reply frame received for 5\nI0525 23:39:00.968759 62 log.go:172] (0xc000a83550) Data frame received for 3\nI0525 23:39:00.968800 62 log.go:172] (0xc0003ee5a0) (3) Data frame handling\nI0525 23:39:00.970173 62 log.go:172] (0xc000a83550) Data frame received for 5\nI0525 23:39:00.970202 62 log.go:172] (0xc0009d6000) (5) Data frame handling\nI0525 23:39:00.970216 62 log.go:172] (0xc0009d6000) (5) Data frame sent\nI0525 23:39:00.970227 62 log.go:172] (0xc000a83550) Data frame received for 5\nI0525 23:39:00.970237 62 log.go:172] (0xc0009d6000) (5) Data frame handling\n+ nc -zv -t -w 2 10.104.244.57 80\nConnection to 10.104.244.57 80 port [tcp/http] succeeded!\nI0525 23:39:00.974293 62 log.go:172] (0xc000a83550) Data frame received for 1\nI0525 23:39:00.974365 62 log.go:172] (0xc0009d6820) (1) Data frame handling\nI0525 23:39:00.974415 62 log.go:172] (0xc0009d6820) (1) Data frame sent\nI0525 23:39:00.974458 62 log.go:172] (0xc000a83550) (0xc0009d6820) Stream removed, broadcasting: 1\nI0525 23:39:00.974513 62 log.go:172] (0xc000a83550) Go away received\nI0525 23:39:00.975146 62 log.go:172] (0xc000a83550) (0xc0009d6820) Stream removed, broadcasting: 1\nI0525 23:39:00.975179 62 log.go:172] (0xc000a83550) (0xc0003ee5a0) Stream removed, broadcasting: 3\nI0525 23:39:00.975193 62 log.go:172] (0xc000a83550) (0xc0009d6000) Stream removed, broadcasting: 5\n" May 25 23:39:00.980: INFO: stdout: "" May 25 23:39:00.981: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2677 execpod-affinitysn7lt -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.104.244.57:80/ ; done' May 25 23:39:01.379: INFO: stderr: "I0525 23:39:01.107133 81 log.go:172] (0xc000418790) (0xc0006e4500) Create stream\nI0525 23:39:01.107201 81 log.go:172] (0xc000418790) (0xc0006e4500) Stream added, broadcasting: 1\nI0525 23:39:01.110617 81 log.go:172] (0xc000418790) Reply frame received for 1\nI0525 23:39:01.110681 81 log.go:172] (0xc000418790) (0xc000642460) Create stream\nI0525 23:39:01.110701 81 log.go:172] (0xc000418790) (0xc000642460) Stream added, broadcasting: 3\nI0525 23:39:01.111750 81 log.go:172] (0xc000418790) Reply frame received for 3\nI0525 23:39:01.111787 81 log.go:172] (0xc000418790) (0xc0006e5400) Create stream\nI0525 23:39:01.111801 81 log.go:172] (0xc000418790) (0xc0006e5400) Stream added, broadcasting: 5\nI0525 23:39:01.112716 81 log.go:172] (0xc000418790) Reply frame received for 5\nI0525 23:39:01.192648 81 log.go:172] (0xc000418790) Data frame received for 5\nI0525 23:39:01.192686 81 log.go:172] (0xc0006e5400) (5) Data frame handling\nI0525 23:39:01.192700 81 log.go:172] (0xc0006e5400) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.244.57:80/\nI0525 23:39:01.192720 81 log.go:172] (0xc000418790) Data frame received 
for 3\nI0525 23:39:01.192730 81 log.go:172] (0xc000642460) (3) Data frame handling\nI0525 23:39:01.192740 81 log.go:172] (0xc000642460) (3) Data frame sent\nI0525 23:39:01.289799 81 log.go:172] (0xc000418790) Data frame received for 3\nI0525 23:39:01.289969 81 log.go:172] (0xc000642460) (3) Data frame handling\nI0525 23:39:01.290021 81 log.go:172] (0xc000642460) (3) Data frame sent\nI0525 23:39:01.290043 81 log.go:172] (0xc000418790) Data frame received for 3\nI0525 23:39:01.290065 81 log.go:172] (0xc000642460) (3) Data frame handling\nI0525 23:39:01.290112 81 log.go:172] (0xc000418790) Data frame received for 5\nI0525 23:39:01.290153 81 log.go:172] (0xc0006e5400) (5) Data frame handling\nI0525 23:39:01.290171 81 log.go:172] (0xc0006e5400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.244.57:80/\nI0525 23:39:01.290192 81 log.go:172] (0xc000642460) (3) Data frame sent\nI0525 23:39:01.297495 81 log.go:172] (0xc000418790) Data frame received for 3\nI0525 23:39:01.297537 81 log.go:172] (0xc000642460) (3) Data frame handling\nI0525 23:39:01.297570 81 log.go:172] (0xc000642460) (3) Data frame sent\nI0525 23:39:01.298188 81 log.go:172] (0xc000418790) Data frame received for 3\nI0525 23:39:01.298233 81 log.go:172] (0xc000642460) (3) Data frame handling\nI0525 23:39:01.298248 81 log.go:172] (0xc000642460) (3) Data frame sent\nI0525 23:39:01.298268 81 log.go:172] (0xc000418790) Data frame received for 5\nI0525 23:39:01.298280 81 log.go:172] (0xc0006e5400) (5) Data frame handling\nI0525 23:39:01.298299 81 log.go:172] (0xc0006e5400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.244.57:80/\nI0525 23:39:01.301650 81 log.go:172] (0xc000418790) Data frame received for 3\nI0525 23:39:01.301669 81 log.go:172] (0xc000642460) (3) Data frame handling\nI0525 23:39:01.301687 81 log.go:172] (0xc000642460) (3) Data frame sent\nI0525 23:39:01.302087 81 log.go:172] (0xc000418790) Data frame received for 3\nI0525 23:39:01.302116 81 log.go:172] (0xc000642460) (3) Data frame handling\nI0525 23:39:01.302126 81 log.go:172] (0xc000642460) (3) Data frame sent\nI0525 23:39:01.302205 81 log.go:172] (0xc000418790) Data frame received for 5\nI0525 23:39:01.302230 81 log.go:172] (0xc0006e5400) (5) Data frame handling\nI0525 23:39:01.302260 81 log.go:172] (0xc0006e5400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.244.57:80/\nI0525 23:39:01.305382 81 log.go:172] (0xc000418790) Data frame received for 3\nI0525 23:39:01.305412 81 log.go:172] (0xc000642460) (3) Data frame handling\nI0525 23:39:01.305438 81 log.go:172] (0xc000642460) (3) Data frame sent\nI0525 23:39:01.306109 81 log.go:172] (0xc000418790) Data frame received for 3\nI0525 23:39:01.306122 81 log.go:172] (0xc000642460) (3) Data frame handling\nI0525 23:39:01.306130 81 log.go:172] (0xc000642460) (3) Data frame sent\nI0525 23:39:01.306139 81 log.go:172] (0xc000418790) Data frame received for 5\nI0525 23:39:01.306144 81 log.go:172] (0xc0006e5400) (5) Data frame handling\nI0525 23:39:01.306148 81 log.go:172] (0xc0006e5400) (5) Data frame sent\nI0525 23:39:01.306154 81 log.go:172] (0xc000418790) Data frame received for 5\nI0525 23:39:01.306159 81 log.go:172] (0xc0006e5400) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.244.57:80/\nI0525 23:39:01.306168 81 log.go:172] (0xc0006e5400) (5) Data frame sent\nI0525 23:39:01.313380 81 log.go:172] (0xc000418790) Data frame received for 3\nI0525 23:39:01.313407 81 log.go:172] (0xc000642460) (3) Data frame 
handling\nI0525 23:39:01.313438 81 log.go:172] (0xc000642460) (3) Data frame sent\nI0525 23:39:01.313762 81 log.go:172] (0xc000418790) Data frame received for 5\nI0525 23:39:01.313779 81 log.go:172] (0xc0006e5400) (5) Data frame handling\nI0525 23:39:01.313789 81 log.go:172] (0xc0006e5400) (5) Data frame sent\nI0525 23:39:01.313795 81 log.go:172] (0xc000418790) Data frame received for 5\nI0525 23:39:01.313801 81 log.go:172] (0xc0006e5400) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.244.57:80/\nI0525 23:39:01.313815 81 log.go:172] (0xc0006e5400) (5) Data frame sent\nI0525 23:39:01.313823 81 log.go:172] (0xc000418790) Data frame received for 3\nI0525 23:39:01.313829 81 log.go:172] (0xc000642460) (3) Data frame handling\nI0525 23:39:01.313835 81 log.go:172] (0xc000642460) (3) Data frame sent\nI0525 23:39:01.320361 81 log.go:172] (0xc000418790) Data frame received for 3\nI0525 23:39:01.320391 81 log.go:172] (0xc000642460) (3) Data frame handling\nI0525 23:39:01.320430 81 log.go:172] (0xc000642460) (3) Data frame sent\nI0525 23:39:01.321016 81 log.go:172] (0xc000418790) Data frame received for 3\nI0525 23:39:01.321031 81 log.go:172] (0xc000642460) (3) Data frame handling\nI0525 23:39:01.321039 81 log.go:172] (0xc000642460) (3) Data frame sent\nI0525 23:39:01.321065 81 log.go:172] (0xc000418790) Data frame received for 5\nI0525 23:39:01.321098 81 log.go:172] (0xc0006e5400) (5) Data frame handling\nI0525 23:39:01.321332 81 log.go:172] (0xc0006e5400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.244.57:80/\nI0525 23:39:01.326605 81 log.go:172] (0xc000418790) Data frame received for 3\nI0525 23:39:01.326625 81 log.go:172] (0xc000642460) (3) Data frame handling\nI0525 23:39:01.326645 81 log.go:172] (0xc000642460) (3) Data frame sent\nI0525 23:39:01.327593 81 log.go:172] (0xc000418790) Data frame received for 3\nI0525 23:39:01.327626 81 log.go:172] (0xc000642460) (3) Data frame handling\nI0525 23:39:01.327638 81 log.go:172] (0xc000642460) (3) Data frame sent\nI0525 23:39:01.327682 81 log.go:172] (0xc000418790) Data frame received for 5\nI0525 23:39:01.327695 81 log.go:172] (0xc0006e5400) (5) Data frame handling\nI0525 23:39:01.327706 81 log.go:172] (0xc0006e5400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.244.57:80/\nI0525 23:39:01.333000 81 log.go:172] (0xc000418790) Data frame received for 3\nI0525 23:39:01.333021 81 log.go:172] (0xc000642460) (3) Data frame handling\nI0525 23:39:01.333044 81 log.go:172] (0xc000642460) (3) Data frame sent\nI0525 23:39:01.333561 81 log.go:172] (0xc000418790) Data frame received for 3\nI0525 23:39:01.333582 81 log.go:172] (0xc000418790) Data frame received for 5\nI0525 23:39:01.333601 81 log.go:172] (0xc0006e5400) (5) Data frame handling\nI0525 23:39:01.333609 81 log.go:172] (0xc0006e5400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.244.57:80/\nI0525 23:39:01.333621 81 log.go:172] (0xc000642460) (3) Data frame handling\nI0525 23:39:01.333630 81 log.go:172] (0xc000642460) (3) Data frame sent\nI0525 23:39:01.336587 81 log.go:172] (0xc000418790) Data frame received for 3\nI0525 23:39:01.336597 81 log.go:172] (0xc000642460) (3) Data frame handling\nI0525 23:39:01.336605 81 log.go:172] (0xc000642460) (3) Data frame sent\nI0525 23:39:01.337569 81 log.go:172] (0xc000418790) Data frame received for 5\nI0525 23:39:01.337594 81 log.go:172] (0xc0006e5400) (5) Data frame handling\nI0525 23:39:01.337612 81 log.go:172] (0xc0006e5400) (5) Data frame 
sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.244.57:80/\nI0525 23:39:01.337691 81 log.go:172] (0xc000418790) Data frame received for 3\nI0525 23:39:01.337703 81 log.go:172] (0xc000642460) (3) Data frame handling\nI0525 23:39:01.337715 81 log.go:172] (0xc000642460) (3) Data frame sent\nI0525 23:39:01.340548 81 log.go:172] (0xc000418790) Data frame received for 3\nI0525 23:39:01.340566 81 log.go:172] (0xc000642460) (3) Data frame handling\nI0525 23:39:01.340583 81 log.go:172] (0xc000642460) (3) Data frame sent\nI0525 23:39:01.340881 81 log.go:172] (0xc000418790) Data frame received for 5\nI0525 23:39:01.340907 81 log.go:172] (0xc0006e5400) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.244.57:80/\nI0525 23:39:01.340918 81 log.go:172] (0xc000418790) Data frame received for 3\nI0525 23:39:01.340931 81 log.go:172] (0xc000642460) (3) Data frame handling\nI0525 23:39:01.340940 81 log.go:172] (0xc000642460) (3) Data frame sent\nI0525 23:39:01.340951 81 log.go:172] (0xc0006e5400) (5) Data frame sent\nI0525 23:39:01.343878 81 log.go:172] (0xc000418790) Data frame received for 3\nI0525 23:39:01.343902 81 log.go:172] (0xc000642460) (3) Data frame handling\nI0525 23:39:01.343928 81 log.go:172] (0xc000642460) (3) Data frame sent\nI0525 23:39:01.344212 81 log.go:172] (0xc000418790) Data frame received for 5\nI0525 23:39:01.344223 81 log.go:172] (0xc0006e5400) (5) Data frame handling\nI0525 23:39:01.344228 81 log.go:172] (0xc0006e5400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.244.57:80/\nI0525 23:39:01.344276 81 log.go:172] (0xc000418790) Data frame received for 3\nI0525 23:39:01.344285 81 log.go:172] (0xc000642460) (3) Data frame handling\nI0525 23:39:01.344290 81 log.go:172] (0xc000642460) (3) Data frame sent\nI0525 23:39:01.349103 81 log.go:172] (0xc000418790) Data frame received for 3\nI0525 23:39:01.349270 81 log.go:172] (0xc000642460) (3) Data frame handling\nI0525 23:39:01.349290 81 log.go:172] (0xc000642460) (3) Data frame sent\nI0525 23:39:01.349764 81 log.go:172] (0xc000418790) Data frame received for 3\nI0525 23:39:01.349787 81 log.go:172] (0xc000642460) (3) Data frame handling\nI0525 23:39:01.349795 81 log.go:172] (0xc000642460) (3) Data frame sent\nI0525 23:39:01.349810 81 log.go:172] (0xc000418790) Data frame received for 5\nI0525 23:39:01.349836 81 log.go:172] (0xc0006e5400) (5) Data frame handling\nI0525 23:39:01.349864 81 log.go:172] (0xc0006e5400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.244.57:80/\nI0525 23:39:01.353244 81 log.go:172] (0xc000418790) Data frame received for 3\nI0525 23:39:01.353261 81 log.go:172] (0xc000642460) (3) Data frame handling\nI0525 23:39:01.353271 81 log.go:172] (0xc000642460) (3) Data frame sent\nI0525 23:39:01.353671 81 log.go:172] (0xc000418790) Data frame received for 5\nI0525 23:39:01.353690 81 log.go:172] (0xc0006e5400) (5) Data frame handling\nI0525 23:39:01.353700 81 log.go:172] (0xc0006e5400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.244.57:80/\nI0525 23:39:01.353852 81 log.go:172] (0xc000418790) Data frame received for 3\nI0525 23:39:01.353862 81 log.go:172] (0xc000642460) (3) Data frame handling\nI0525 23:39:01.353867 81 log.go:172] (0xc000642460) (3) Data frame sent\nI0525 23:39:01.360367 81 log.go:172] (0xc000418790) Data frame received for 3\nI0525 23:39:01.360389 81 log.go:172] (0xc000642460) (3) Data frame handling\nI0525 23:39:01.360405 81 log.go:172] (0xc000642460) (3) Data frame sent\nI0525 
23:39:01.360890 81 log.go:172] (0xc000418790) Data frame received for 3\nI0525 23:39:01.360910 81 log.go:172] (0xc000642460) (3) Data frame handling\nI0525 23:39:01.360919 81 log.go:172] (0xc000642460) (3) Data frame sent\nI0525 23:39:01.360987 81 log.go:172] (0xc000418790) Data frame received for 5\nI0525 23:39:01.361002 81 log.go:172] (0xc0006e5400) (5) Data frame handling\nI0525 23:39:01.361010 81 log.go:172] (0xc0006e5400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.244.57:80/\nI0525 23:39:01.366141 81 log.go:172] (0xc000418790) Data frame received for 3\nI0525 23:39:01.366180 81 log.go:172] (0xc000642460) (3) Data frame handling\nI0525 23:39:01.366226 81 log.go:172] (0xc000642460) (3) Data frame sent\nI0525 23:39:01.366778 81 log.go:172] (0xc000418790) Data frame received for 5\nI0525 23:39:01.366799 81 log.go:172] (0xc0006e5400) (5) Data frame handling\nI0525 23:39:01.366814 81 log.go:172] (0xc0006e5400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.244.57:80/\nI0525 23:39:01.367069 81 log.go:172] (0xc000418790) Data frame received for 3\nI0525 23:39:01.367092 81 log.go:172] (0xc000642460) (3) Data frame handling\nI0525 23:39:01.367114 81 log.go:172] (0xc000642460) (3) Data frame sent\nI0525 23:39:01.371968 81 log.go:172] (0xc000418790) Data frame received for 3\nI0525 23:39:01.371986 81 log.go:172] (0xc000642460) (3) Data frame handling\nI0525 23:39:01.371999 81 log.go:172] (0xc000642460) (3) Data frame sent\nI0525 23:39:01.372540 81 log.go:172] (0xc000418790) Data frame received for 5\nI0525 23:39:01.372562 81 log.go:172] (0xc0006e5400) (5) Data frame handling\nI0525 23:39:01.372724 81 log.go:172] (0xc000418790) Data frame received for 3\nI0525 23:39:01.372744 81 log.go:172] (0xc000642460) (3) Data frame handling\nI0525 23:39:01.374304 81 log.go:172] (0xc000418790) Data frame received for 1\nI0525 23:39:01.374334 81 log.go:172] (0xc0006e4500) (1) Data frame handling\nI0525 23:39:01.374348 81 log.go:172] (0xc0006e4500) (1) Data frame sent\nI0525 23:39:01.374363 81 log.go:172] (0xc000418790) (0xc0006e4500) Stream removed, broadcasting: 1\nI0525 23:39:01.374386 81 log.go:172] (0xc000418790) Go away received\nI0525 23:39:01.374735 81 log.go:172] (0xc000418790) (0xc0006e4500) Stream removed, broadcasting: 1\nI0525 23:39:01.374752 81 log.go:172] (0xc000418790) (0xc000642460) Stream removed, broadcasting: 3\nI0525 23:39:01.374759 81 log.go:172] (0xc000418790) (0xc0006e5400) Stream removed, broadcasting: 5\n"
May 25 23:39:01.379: INFO: stdout: "\naffinity-clusterip-b25mb\naffinity-clusterip-b25mb\naffinity-clusterip-b25mb\naffinity-clusterip-b25mb\naffinity-clusterip-b25mb\naffinity-clusterip-b25mb\naffinity-clusterip-b25mb\naffinity-clusterip-b25mb\naffinity-clusterip-b25mb\naffinity-clusterip-b25mb\naffinity-clusterip-b25mb\naffinity-clusterip-b25mb\naffinity-clusterip-b25mb\naffinity-clusterip-b25mb\naffinity-clusterip-b25mb\naffinity-clusterip-b25mb"
May 25 23:39:01.379: INFO: Received response from host: 
May 25 23:39:01.379: INFO: Received response from host: affinity-clusterip-b25mb
May 25 23:39:01.379: INFO: Received response from host: affinity-clusterip-b25mb
May 25 23:39:01.379: INFO: Received response from host: affinity-clusterip-b25mb
May 25 23:39:01.379: INFO: Received response from host: affinity-clusterip-b25mb
May 25 23:39:01.379: INFO: Received response from host: affinity-clusterip-b25mb
May 25 23:39:01.379: INFO: Received response from host: affinity-clusterip-b25mb
May 25 23:39:01.379: INFO: Received response from host: affinity-clusterip-b25mb
May 25 23:39:01.379: INFO: Received response from host: affinity-clusterip-b25mb
May 25 23:39:01.379: INFO: Received response from host: affinity-clusterip-b25mb
May 25 23:39:01.379: INFO: Received response from host: affinity-clusterip-b25mb
May 25 23:39:01.379: INFO: Received response from host: affinity-clusterip-b25mb
May 25 23:39:01.379: INFO: Received response from host: affinity-clusterip-b25mb
May 25 23:39:01.379: INFO: Received response from host: affinity-clusterip-b25mb
May 25 23:39:01.379: INFO: Received response from host: affinity-clusterip-b25mb
May 25 23:39:01.379: INFO: Received response from host: affinity-clusterip-b25mb
May 25 23:39:01.379: INFO: Received response from host: affinity-clusterip-b25mb
May 25 23:39:01.379: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip in namespace services-2677, will wait for the garbage collector to delete the pods
May 25 23:39:01.520: INFO: Deleting ReplicationController affinity-clusterip took: 6.803398ms
May 25 23:39:02.020: INFO: Terminating ReplicationController affinity-clusterip pods took: 500.252615ms
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 25 23:39:15.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2677" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
• [SLOW TEST:28.697 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":3,"skipped":54,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
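The sixteen identical "affinity-clusterip-b25mb" responses are the point of the test: with ClientIP session affinity, every request from the exec pod lands on the same backend even though three replicas exist. A minimal Go sketch of such a Service; the selector label and container port are assumptions, not values from the log:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip", Namespace: "services-2677"},
		Spec: corev1.ServiceSpec{
			Selector:        map[string]string{"name": "affinity-clusterip"}, // assumed pod label
			SessionAffinity: corev1.ServiceAffinityClientIP,                  // pin each client IP to one backend
			Ports: []corev1.ServicePort{{
				Port:       80,                   // the port probed with nc/curl above
				TargetPort: intstr.FromInt(9376), // hypothetical container port
			}},
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}
```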
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 25 23:39:15.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-212b186a-f748-4c3e-b81c-fe3b42157eee
STEP: Creating a pod to test consume secrets
May 25 23:39:15.487: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1c4ce8e9-c4bf-4ed3-8590-64df4a2ce3ac" in namespace "projected-8750" to be "Succeeded or Failed"
May 25 23:39:15.528: INFO: Pod "pod-projected-secrets-1c4ce8e9-c4bf-4ed3-8590-64df4a2ce3ac": Phase="Pending", Reason="", readiness=false. Elapsed: 41.815445ms
May 25 23:39:17.532: INFO: Pod "pod-projected-secrets-1c4ce8e9-c4bf-4ed3-8590-64df4a2ce3ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045170789s
May 25 23:39:19.536: INFO: Pod "pod-projected-secrets-1c4ce8e9-c4bf-4ed3-8590-64df4a2ce3ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049478334s
STEP: Saw pod success
May 25 23:39:19.536: INFO: Pod "pod-projected-secrets-1c4ce8e9-c4bf-4ed3-8590-64df4a2ce3ac" satisfied condition "Succeeded or Failed"
May 25 23:39:19.540: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-1c4ce8e9-c4bf-4ed3-8590-64df4a2ce3ac container projected-secret-volume-test: 
STEP: delete the pod
May 25 23:39:19.656: INFO: Waiting for pod pod-projected-secrets-1c4ce8e9-c4bf-4ed3-8590-64df4a2ce3ac to disappear
May 25 23:39:19.761: INFO: Pod pod-projected-secrets-1c4ce8e9-c4bf-4ed3-8590-64df4a2ce3ac no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 25 23:39:19.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8750" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":4,"skipped":74,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
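"Mappings and Item Mode set" refers to a projected volume that remaps a secret key to a new path with an explicit file mode. A minimal Go sketch of the pod's volume layout; the key, path, mode, image, and command are assumptions, and the secret name is shortened from the UID-suffixed name in the log:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // hypothetical per-item file mode
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets", Namespace: "projected-8750"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", "stat -c '%a' /etc/projected-secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
								// Map one key to a new path with the explicit mode.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &mode}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```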
[sig-storage] ConfigMap
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 25 23:39:19.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-ed91e983-2284-4729-ab7a-13e3517cf289
STEP: Creating a pod to test consume configMaps
May 25 23:39:19.966: INFO: Waiting up to 5m0s for pod "pod-configmaps-bb901856-5a79-4a41-ba67-5c2a4204cd06" in namespace "configmap-3069" to be "Succeeded or Failed"
May 25 23:39:19.969: INFO: Pod "pod-configmaps-bb901856-5a79-4a41-ba67-5c2a4204cd06": Phase="Pending", Reason="", readiness=false. Elapsed: 3.418392ms
May 25 23:39:21.974: INFO: Pod "pod-configmaps-bb901856-5a79-4a41-ba67-5c2a4204cd06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008558316s
May 25 23:39:23.979: INFO: Pod "pod-configmaps-bb901856-5a79-4a41-ba67-5c2a4204cd06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012759354s
STEP: Saw pod success
May 25 23:39:23.979: INFO: Pod "pod-configmaps-bb901856-5a79-4a41-ba67-5c2a4204cd06" satisfied condition "Succeeded or Failed"
May 25 23:39:23.982: INFO: Trying to get logs from node latest-worker pod pod-configmaps-bb901856-5a79-4a41-ba67-5c2a4204cd06 container configmap-volume-test: 
STEP: delete the pod
May 25 23:39:24.052: INFO: Waiting for pod pod-configmaps-bb901856-5a79-4a41-ba67-5c2a4204cd06 to disappear
May 25 23:39:24.059: INFO: Pod pod-configmaps-bb901856-5a79-4a41-ba67-5c2a4204cd06 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 25 23:39:24.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3069" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":5,"skipped":140,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
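"Consumable as non-root" means the pod mounts the configMap while running under a non-root UID and the file must still be readable. A minimal Go sketch; the UID, key, image, and command are assumptions:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRoot := int64(1000) // hypothetical non-root UID
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps", Namespace: "configmap-3069"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",                                        // placeholder image
				Command: []string{"cat", "/etc/configmap-volume/data-1"},  // hypothetical key
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"}, // shortened name
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```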
[sig-storage] EmptyDir volumes
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 25 23:39:24.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on tmpfs
May 25 23:39:24.189: INFO: Waiting up to 5m0s for pod "pod-dbb135a6-746c-4338-988b-0f0ae190ece4" in namespace "emptydir-4875" to be "Succeeded or Failed"
May 25 23:39:24.214: INFO: Pod "pod-dbb135a6-746c-4338-988b-0f0ae190ece4": Phase="Pending", Reason="", readiness=false. Elapsed: 25.211044ms
May 25 23:39:26.219: INFO: Pod "pod-dbb135a6-746c-4338-988b-0f0ae190ece4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029741849s
May 25 23:39:28.223: INFO: Pod "pod-dbb135a6-746c-4338-988b-0f0ae190ece4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03432235s
STEP: Saw pod success
May 25 23:39:28.224: INFO: Pod "pod-dbb135a6-746c-4338-988b-0f0ae190ece4" satisfied condition "Succeeded or Failed"
May 25 23:39:28.227: INFO: Trying to get logs from node latest-worker2 pod pod-dbb135a6-746c-4338-988b-0f0ae190ece4 container test-container: 
STEP: delete the pod
May 25 23:39:28.248: INFO: Waiting for pod pod-dbb135a6-746c-4338-988b-0f0ae190ece4 to disappear
May 25 23:39:28.252: INFO: Pod pod-dbb135a6-746c-4338-988b-0f0ae190ece4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 25 23:39:28.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4875" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":6,"skipped":160,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
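"Volume on tmpfs" is an emptyDir whose medium is set to Memory; the test then checks the mount's filesystem type and mode. A minimal Go sketch; the mount path, check command, and image are assumptions:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs", Namespace: "emptydir-4875"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // placeholder image
				// Show that the volume is tmpfs and report its permissions.
				Command:      []string{"sh", "-c", "mount | grep /test-volume && stat -c '%a' /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium: Memory is what makes the emptyDir a tmpfs mount.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```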
[sig-network] Services
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 25 23:39:28.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service multi-endpoint-test in namespace services-9531
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9531 to expose endpoints map[]
May 25 23:39:28.428: INFO: Get endpoints failed (22.255798ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
May 25 23:39:29.432: INFO: successfully validated that service multi-endpoint-test in namespace services-9531 exposes endpoints map[] (1.026211377s elapsed)
STEP: Creating pod pod1 in namespace services-9531
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9531 to expose endpoints map[pod1:[100]]
May 25 23:39:32.558: INFO: successfully validated that service multi-endpoint-test in namespace services-9531 exposes endpoints map[pod1:[100]] (3.083962342s elapsed)
STEP: Creating pod pod2 in namespace services-9531
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9531 to expose endpoints map[pod1:[100] pod2:[101]]
May 25 23:39:35.695: INFO: successfully validated that service multi-endpoint-test in namespace services-9531 exposes endpoints map[pod1:[100] pod2:[101]] (3.132040021s elapsed)
STEP: Deleting pod pod1 in namespace services-9531
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9531 to expose endpoints map[pod2:[101]]
May 25 23:39:36.776: INFO: successfully validated that service multi-endpoint-test in namespace services-9531 exposes endpoints map[pod2:[101]] (1.07672131s elapsed)
STEP: Deleting pod pod2 in namespace services-9531
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9531 to expose endpoints map[]
May 25 23:39:37.882: INFO: successfully validated that service multi-endpoint-test in namespace services-9531 exposes endpoints map[] (1.102094604s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 25 23:39:37.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9531" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
• [SLOW TEST:9.710 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":288,"completed":7,"skipped":224,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
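The endpoints maps above (map[pod1:[100] pod2:[101]]) come from a Service with two named ports whose target ports resolve on different pods. A minimal Go sketch; the port names and selector are assumptions, while target ports 100 and 101 match the log:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test", Namespace: "services-9531"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"test": "multi-endpoint-test"}, // assumed pod label
			Ports: []corev1.ServicePort{
				{Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100)},
				{Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101)},
			},
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}
```

A pod serving only container port 100 then shows up only behind portname1, which is why deleting pod1 shrinks the map to map[pod2:[101]].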
[sig-node] Downward API
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 25 23:39:37.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
May 25 23:39:38.441: INFO: Waiting up to 5m0s for pod "downward-api-b20204bb-3f6f-4947-8a62-c725045b108b" in namespace "downward-api-1117" to be "Succeeded or Failed"
May 25 23:39:38.476: INFO: Pod "downward-api-b20204bb-3f6f-4947-8a62-c725045b108b": Phase="Pending", Reason="", readiness=false. Elapsed: 34.128543ms
May 25 23:39:40.546: INFO: Pod "downward-api-b20204bb-3f6f-4947-8a62-c725045b108b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104787725s
May 25 23:39:42.551: INFO: Pod "downward-api-b20204bb-3f6f-4947-8a62-c725045b108b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.109662901s
STEP: Saw pod success
May 25 23:39:42.551: INFO: Pod "downward-api-b20204bb-3f6f-4947-8a62-c725045b108b" satisfied condition "Succeeded or Failed"
May 25 23:39:42.554: INFO: Trying to get logs from node latest-worker pod downward-api-b20204bb-3f6f-4947-8a62-c725045b108b container dapi-container: 
STEP: delete the pod
May 25 23:39:42.596: INFO: Waiting for pod downward-api-b20204bb-3f6f-4947-8a62-c725045b108b to disappear
May 25 23:39:42.635: INFO: Pod downward-api-b20204bb-3f6f-4947-8a62-c725045b108b no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 25 23:39:42.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1117" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":288,"completed":8,"skipped":257,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
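The downward API test injects the pod's own UID into the container environment through a fieldRef. A minimal Go sketch; the env var name, image, and command are assumptions:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-pod", Namespace: "downward-api-1117"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", "echo POD_UID=$POD_UID"},
				Env: []corev1.EnvVar{{
					Name: "POD_UID",
					ValueFrom: &corev1.EnvVarSource{
						// metadata.uid is resolved by the kubelet when the pod starts.
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```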
[k8s.io] Variable Expansion
  should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 25 23:39:42.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 25 23:41:42.776: INFO: Deleting pod "var-expansion-94453e2f-5b8c-4f8b-86a7-a60cd56db608" in namespace "var-expansion-4783"
May 25 23:41:42.781: INFO: Wait up to 5m0s for pod "var-expansion-94453e2f-5b8c-4f8b-86a7-a60cd56db608" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 25 23:41:48.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4783" for this suite.
• [SLOW TEST:126.238 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":288,"completed":9,"skipped":282,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
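This negative test exists because subPathExpr only supports $(VAR) expansion backed by env vars; shell syntax such as backticks never expands, so the pod above never starts and the test only has to delete it. A minimal Go sketch of the valid form; the names, image, and paths are assumptions:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-subpath", Namespace: "var-expansion-4783"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "dapi-container",
				Image: "busybox", // placeholder image
				Env: []corev1.EnvVar{{
					Name:      "POD_NAME",
					ValueFrom: &corev1.EnvVarSource{FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"}},
				}},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "workdir",
					MountPath: "/logs",
					// $(POD_NAME) is the only expansion allowed here; a value like
					// "`hostname`" never expands and the pod fails to start, which
					// is the failure the test above asserts.
					SubPathExpr: "$(POD_NAME)",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name:         "workdir",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```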
[sig-cli] Kubectl client Kubectl expose
  should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 25 23:41:48.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[It] should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating Agnhost RC
May 25 23:41:48.980: INFO: namespace kubectl-3123
May 25 23:41:48.980: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3123'
May 25 23:41:49.369: INFO: stderr: ""
May 25 23:41:49.369: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
May 25 23:41:50.372: INFO: Selector matched 1 pods for map[app:agnhost]
May 25 23:41:50.372: INFO: Found 0 / 1
May 25 23:41:51.389: INFO: Selector matched 1 pods for map[app:agnhost]
May 25 23:41:51.389: INFO: Found 0 / 1
May 25 23:41:52.374: INFO: Selector matched 1 pods for map[app:agnhost]
May 25 23:41:52.374: INFO: Found 0 / 1
May 25 23:41:53.375: INFO: Selector matched 1 pods for map[app:agnhost]
May 25 23:41:53.375: INFO: Found 1 / 1
May 25 23:41:53.375: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
May 25 23:41:53.378: INFO: Selector matched 1 pods for map[app:agnhost]
May 25 23:41:53.378: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
May 25 23:41:53.378: INFO: wait on agnhost-master startup in kubectl-3123
May 25 23:41:53.378: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs agnhost-master-kvbph agnhost-master --namespace=kubectl-3123'
May 25 23:41:53.499: INFO: stderr: ""
May 25 23:41:53.499: INFO: stdout: "Paused\n"
STEP: exposing RC
May 25 23:41:53.499: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-3123'
May 25 23:41:53.692: INFO: stderr: ""
May 25 23:41:53.692: INFO: stdout: "service/rm2 exposed\n"
May 25 23:41:53.752: INFO: Service rm2 in namespace kubectl-3123 found.
STEP: exposing service
May 25 23:41:55.759: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-3123'
May 25 23:41:55.971: INFO: stderr: ""
May 25 23:41:55.971: INFO: stdout: "service/rm3 exposed\n"
May 25 23:41:55.983: INFO: Service rm3 in namespace kubectl-3123 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 25 23:41:57.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3123" for this suite.
• [SLOW TEST:9.115 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1224
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":288,"completed":10,"skipped":378,"failed":0}
SSSSSSSSSSSSS
------------------------------
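`kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379` is shorthand for creating a Service whose selector is copied from the controller's pod labels. A minimal Go sketch of the equivalent object; the selector matches the map[app:agnhost] label the log reports:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "rm2", Namespace: "kubectl-3123"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "agnhost"}, // copied from the RC's pod labels
			Ports:    []corev1.ServicePort{{Port: 1234, TargetPort: intstr.FromInt(6379)}},
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
	// `kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379`
	// then derives rm3 the same way, reusing rm2's selector.
}
```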
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 25 23:41:57.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 25 23:41:58.931: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 25 23:42:00.959: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726046918, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726046918, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726046919, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726046918, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 23:42:03.020: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726046918, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726046918, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726046919, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726046918, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 25 23:42:05.996: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 25 23:42:06.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-91" for this suite.
STEP: Destroying namespace "webhook-91-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:8.320 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":288,"completed":11,"skipped":391,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
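The "Patching a validating webhook configuration's rules" step can be reproduced with a JSON patch against the configuration object. A minimal client-go sketch; the configuration name and kubeconfig path are assumptions, and the patch restores CREATE to the first rule's operations, mirroring the step above:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// JSON patch that sets the first rule's operations back to CREATE, so the
	// webhook once again intercepts configmap creation.
	patch := []byte(`[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE"]}]`)
	_, err = client.AdmissionregistrationV1().ValidatingWebhookConfigurations().Patch(
		context.TODO(),
		"deny-configmap-with-disallowed-data", // hypothetical configuration name
		types.JSONPatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("validating webhook rules patched")
}
```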
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 23:42:05.996: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:42:06.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-91" for this suite. STEP: Destroying namespace "webhook-91-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.320 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":288,"completed":11,"skipped":391,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:42:06.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition May 25 23:42:06.449: INFO: Waiting up to 5m0s for pod "var-expansion-516ad87c-fa14-4f10-9070-9b20ebfdec39" in namespace "var-expansion-2597" to be "Succeeded or Failed" May 25 23:42:06.478: INFO: Pod "var-expansion-516ad87c-fa14-4f10-9070-9b20ebfdec39": Phase="Pending", Reason="", readiness=false. Elapsed: 28.612771ms May 25 23:42:08.482: INFO: Pod "var-expansion-516ad87c-fa14-4f10-9070-9b20ebfdec39": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.03297389s May 25 23:42:10.487: INFO: Pod "var-expansion-516ad87c-fa14-4f10-9070-9b20ebfdec39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037322665s STEP: Saw pod success May 25 23:42:10.487: INFO: Pod "var-expansion-516ad87c-fa14-4f10-9070-9b20ebfdec39" satisfied condition "Succeeded or Failed" May 25 23:42:10.490: INFO: Trying to get logs from node latest-worker2 pod var-expansion-516ad87c-fa14-4f10-9070-9b20ebfdec39 container dapi-container: STEP: delete the pod May 25 23:42:10.536: INFO: Waiting for pod var-expansion-516ad87c-fa14-4f10-9070-9b20ebfdec39 to disappear May 25 23:42:10.561: INFO: Pod var-expansion-516ad87c-fa14-4f10-9070-9b20ebfdec39 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:42:10.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2597" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":288,"completed":12,"skipped":438,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:42:10.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller May 25 23:42:10.636: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9419' May 25 23:42:10.946: INFO: stderr: "" May 25 23:42:10.946: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 25 23:42:10.946: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9419' May 25 23:42:11.086: INFO: stderr: "" May 25 23:42:11.086: INFO: stdout: "update-demo-nautilus-dnqsm update-demo-nautilus-k6mrh " May 25 23:42:11.086: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dnqsm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9419' May 25 23:42:11.209: INFO: stderr: "" May 25 23:42:11.209: INFO: stdout: "" May 25 23:42:11.209: INFO: update-demo-nautilus-dnqsm is created but not running May 25 23:42:16.210: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9419' May 25 23:42:16.319: INFO: stderr: "" May 25 23:42:16.319: INFO: stdout: "update-demo-nautilus-dnqsm update-demo-nautilus-k6mrh " May 25 23:42:16.319: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dnqsm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9419' May 25 23:42:16.416: INFO: stderr: "" May 25 23:42:16.417: INFO: stdout: "true" May 25 23:42:16.417: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dnqsm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9419' May 25 23:42:16.512: INFO: stderr: "" May 25 23:42:16.512: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 25 23:42:16.512: INFO: validating pod update-demo-nautilus-dnqsm May 25 23:42:16.522: INFO: got data: { "image": "nautilus.jpg" } May 25 23:42:16.523: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 25 23:42:16.523: INFO: update-demo-nautilus-dnqsm is verified up and running May 25 23:42:16.523: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k6mrh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9419' May 25 23:42:16.615: INFO: stderr: "" May 25 23:42:16.615: INFO: stdout: "true" May 25 23:42:16.615: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k6mrh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9419' May 25 23:42:16.713: INFO: stderr: "" May 25 23:42:16.713: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 25 23:42:16.713: INFO: validating pod update-demo-nautilus-k6mrh May 25 23:42:16.728: INFO: got data: { "image": "nautilus.jpg" } May 25 23:42:16.728: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 25 23:42:16.728: INFO: update-demo-nautilus-k6mrh is verified up and running STEP: using delete to clean up resources May 25 23:42:16.728: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9419' May 25 23:42:16.863: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 25 23:42:16.863: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 25 23:42:16.863: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9419' May 25 23:42:16.987: INFO: stderr: "No resources found in kubectl-9419 namespace.\n" May 25 23:42:16.987: INFO: stdout: "" May 25 23:42:16.987: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9419 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 25 23:42:17.096: INFO: stderr: "" May 25 23:42:17.096: INFO: stdout: "update-demo-nautilus-dnqsm\nupdate-demo-nautilus-k6mrh\n" May 25 23:42:17.596: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9419' May 25 23:42:17.770: INFO: stderr: "No resources found in kubectl-9419 namespace.\n" May 25 23:42:17.770: INFO: stdout: "" May 25 23:42:17.770: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9419 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 25 23:42:17.883: INFO: stderr: "" May 25 23:42:17.883: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:42:17.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9419" for this suite. 
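The create/verify/stop cycle above can be reproduced with the same commands the test shells out to. A rough sketch, assuming a nautilus RC manifest named update-demo-nautilus.yaml (the go-templates are copied from the log):

    kubectl create -f update-demo-nautilus.yaml --namespace=kubectl-9419
    # Poll pod names with the same template the test uses.
    kubectl get pods -l name=update-demo --namespace=kubectl-9419 \
      -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
    # Force-delete, skipping graceful termination, exactly as the test does.
    kubectl delete rc update-demo-nautilus --grace-period=0 --force --namespace=kubectl-9419
    # Pods linger briefly with a deletionTimestamp set; the test retries until this prints nothing.
    kubectl get pods -l name=update-demo --namespace=kubectl-9419 \
      -o go-template='{{range .items}}{{if not .metadata.deletionTimestamp}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}'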
• [SLOW TEST:7.304 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":288,"completed":13,"skipped":454,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:42:17.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-139d61c4-4a9f-4908-8c05-4d6886bb7143 STEP: Creating a pod to test consume secrets May 25 23:42:18.312: INFO: Waiting up to 5m0s for pod "pod-secrets-838b18ea-c34e-4385-9c12-f7cfd9f780b7" in namespace "secrets-1429" to be "Succeeded or Failed" May 25 23:42:18.472: INFO: Pod "pod-secrets-838b18ea-c34e-4385-9c12-f7cfd9f780b7": Phase="Pending", Reason="", readiness=false. Elapsed: 160.368496ms May 25 23:42:20.476: INFO: Pod "pod-secrets-838b18ea-c34e-4385-9c12-f7cfd9f780b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164298718s May 25 23:42:22.491: INFO: Pod "pod-secrets-838b18ea-c34e-4385-9c12-f7cfd9f780b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.179116268s STEP: Saw pod success May 25 23:42:22.491: INFO: Pod "pod-secrets-838b18ea-c34e-4385-9c12-f7cfd9f780b7" satisfied condition "Succeeded or Failed" May 25 23:42:22.494: INFO: Trying to get logs from node latest-worker pod pod-secrets-838b18ea-c34e-4385-9c12-f7cfd9f780b7 container secret-volume-test: STEP: delete the pod May 25 23:42:22.561: INFO: Waiting for pod pod-secrets-838b18ea-c34e-4385-9c12-f7cfd9f780b7 to disappear May 25 23:42:22.580: INFO: Pod pod-secrets-838b18ea-c34e-4385-9c12-f7cfd9f780b7 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:42:22.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1429" for this suite. 
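Consuming one secret from multiple volumes in a single pod needs nothing more than two volume entries pointing at the same secretName. A minimal sketch with illustrative names (the image, key, and mount paths are assumptions, not taken from the test):

    kubectl create secret generic secret-test --from-literal=data-1=value-1
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-secrets-demo
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: busybox:1.29
        # Read the same secret key through both mounts.
        command: ['sh', '-c', 'cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1']
        volumeMounts:
        - name: secret-volume-1
          mountPath: /etc/secret-volume-1
          readOnly: true
        - name: secret-volume-2
          mountPath: /etc/secret-volume-2
          readOnly: true
      volumes:
      - name: secret-volume-1
        secret:
          secretName: secret-test
      - name: secret-volume-2
        secret:
          secretName: secret-test
    EOF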
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":14,"skipped":474,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:42:22.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-42fde28f-3174-4abd-88f7-231421d9a56a STEP: Creating a pod to test consume secrets May 25 23:42:22.649: INFO: Waiting up to 5m0s for pod "pod-secrets-521aada5-91d3-433c-9ec3-eb3dafc4641f" in namespace "secrets-3250" to be "Succeeded or Failed" May 25 23:42:22.717: INFO: Pod "pod-secrets-521aada5-91d3-433c-9ec3-eb3dafc4641f": Phase="Pending", Reason="", readiness=false. Elapsed: 67.733562ms May 25 23:42:24.722: INFO: Pod "pod-secrets-521aada5-91d3-433c-9ec3-eb3dafc4641f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072915877s May 25 23:42:26.726: INFO: Pod "pod-secrets-521aada5-91d3-433c-9ec3-eb3dafc4641f": Phase="Running", Reason="", readiness=true. Elapsed: 4.077376715s May 25 23:42:28.731: INFO: Pod "pod-secrets-521aada5-91d3-433c-9ec3-eb3dafc4641f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.08200343s STEP: Saw pod success May 25 23:42:28.731: INFO: Pod "pod-secrets-521aada5-91d3-433c-9ec3-eb3dafc4641f" satisfied condition "Succeeded or Failed" May 25 23:42:28.734: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-521aada5-91d3-433c-9ec3-eb3dafc4641f container secret-env-test: STEP: delete the pod May 25 23:42:28.855: INFO: Waiting for pod pod-secrets-521aada5-91d3-433c-9ec3-eb3dafc4641f to disappear May 25 23:42:28.865: INFO: Pod pod-secrets-521aada5-91d3-433c-9ec3-eb3dafc4641f no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:42:28.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3250" for this suite. 
• [SLOW TEST:6.284 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":288,"completed":15,"skipped":500,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:42:28.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:42:35.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5000" for this suite. STEP: Destroying namespace "nsdeletetest-6389" for this suite. May 25 23:42:35.305: INFO: Namespace nsdeletetest-6389 was already deleted STEP: Destroying namespace "nsdeletetest-1014" for this suite. 
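Namespace deletion cascades to every namespaced object, which is what the delete-recreate-verify dance above checks. A sketch of the same flow (the service name and port are illustrative):

    kubectl create namespace nsdeletetest
    kubectl create service clusterip test-service --tcp=80:80 --namespace=nsdeletetest
    # Deleting the namespace removes the service with it.
    kubectl delete namespace nsdeletetest --wait=true
    kubectl create namespace nsdeletetest
    kubectl get services --namespace=nsdeletetest   # expect: No resources found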
• [SLOW TEST:6.438 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":288,"completed":16,"skipped":520,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:42:35.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 25 23:42:35.402: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:42:41.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3656" for this suite. 
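With restartPolicy: Never, a failing init container is terminal: the pod moves to phase Failed and the app containers never start, which is the behavior asserted above. A minimal sketch (image and commands are assumptions):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: init-fail-demo
    spec:
      restartPolicy: Never
      initContainers:
      - name: init-fail
        image: busybox:1.29
        command: ['sh', '-c', 'exit 1']   # always fails
      containers:
      - name: app
        image: busybox:1.29
        command: ['sh', '-c', 'echo this should never run']
    EOF
    # The failed init container fails the whole pod; the app container is never started.
    kubectl get pod init-fail-demo -o jsonpath='{.status.phase}'   # expect: Failed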
• [SLOW TEST:6.601 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":288,"completed":17,"skipped":534,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:42:41.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1121 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1121;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1121 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1121;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1121.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1121.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1121.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1121.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1121.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1121.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1121.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1121.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1121.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1121.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1121.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1121.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1121.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 164.175.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.175.164_udp@PTR;check="$$(dig +tcp +noall +answer +search 164.175.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.175.164_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1121 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1121;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1121 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1121;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1121.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1121.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1121.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1121.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1121.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1121.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1121.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1121.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1121.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1121.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1121.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1121.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1121.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 164.175.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.175.164_udp@PTR;check="$$(dig +tcp +noall +answer +search 164.175.106.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.106.175.164_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 25 23:42:48.728: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:48.731: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:48.733: INFO: Unable to read wheezy_udp@dns-test-service.dns-1121 from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:48.735: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1121 from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:48.738: INFO: Unable to read wheezy_udp@dns-test-service.dns-1121.svc from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:48.740: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1121.svc from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:48.742: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1121.svc from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:48.745: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1121.svc from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:48.773: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:48.776: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:48.778: INFO: Unable to read jessie_udp@dns-test-service.dns-1121 from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:48.780: INFO: Unable to read jessie_tcp@dns-test-service.dns-1121 from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:48.783: INFO: Unable to read jessie_udp@dns-test-service.dns-1121.svc from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:48.786: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-1121.svc from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:48.788: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1121.svc from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:48.791: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1121.svc from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:48.930: INFO: Lookups using dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1121 wheezy_tcp@dns-test-service.dns-1121 wheezy_udp@dns-test-service.dns-1121.svc wheezy_tcp@dns-test-service.dns-1121.svc wheezy_udp@_http._tcp.dns-test-service.dns-1121.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1121.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1121 jessie_tcp@dns-test-service.dns-1121 jessie_udp@dns-test-service.dns-1121.svc jessie_tcp@dns-test-service.dns-1121.svc jessie_udp@_http._tcp.dns-test-service.dns-1121.svc jessie_tcp@_http._tcp.dns-test-service.dns-1121.svc] May 25 23:42:53.936: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:53.941: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:53.945: INFO: Unable to read wheezy_udp@dns-test-service.dns-1121 from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:53.948: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1121 from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:53.951: INFO: Unable to read wheezy_udp@dns-test-service.dns-1121.svc from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:53.954: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1121.svc from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:53.983: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:53.986: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:53.989: INFO: Unable to read jessie_udp@dns-test-service.dns-1121 from pod 
dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:53.992: INFO: Unable to read jessie_tcp@dns-test-service.dns-1121 from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:53.995: INFO: Unable to read jessie_udp@dns-test-service.dns-1121.svc from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:53.998: INFO: Unable to read jessie_tcp@dns-test-service.dns-1121.svc from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:54.024: INFO: Lookups using dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1121 wheezy_tcp@dns-test-service.dns-1121 wheezy_udp@dns-test-service.dns-1121.svc wheezy_tcp@dns-test-service.dns-1121.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1121 jessie_tcp@dns-test-service.dns-1121 jessie_udp@dns-test-service.dns-1121.svc jessie_tcp@dns-test-service.dns-1121.svc] May 25 23:42:58.935: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:58.938: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:58.942: INFO: Unable to read wheezy_udp@dns-test-service.dns-1121 from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:58.944: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1121 from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:58.947: INFO: Unable to read wheezy_udp@dns-test-service.dns-1121.svc from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:58.950: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1121.svc from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:58.993: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:58.996: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:58.999: INFO: Unable to read jessie_udp@dns-test-service.dns-1121 from pod 
dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:59.002: INFO: Unable to read jessie_tcp@dns-test-service.dns-1121 from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:59.005: INFO: Unable to read jessie_udp@dns-test-service.dns-1121.svc from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:59.008: INFO: Unable to read jessie_tcp@dns-test-service.dns-1121.svc from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:42:59.080: INFO: Lookups using dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1121 wheezy_tcp@dns-test-service.dns-1121 wheezy_udp@dns-test-service.dns-1121.svc wheezy_tcp@dns-test-service.dns-1121.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1121 jessie_tcp@dns-test-service.dns-1121 jessie_udp@dns-test-service.dns-1121.svc jessie_tcp@dns-test-service.dns-1121.svc] May 25 23:43:03.937: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:43:03.948: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:43:03.953: INFO: Unable to read wheezy_udp@dns-test-service.dns-1121 from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:43:03.955: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1121 from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:43:03.957: INFO: Unable to read wheezy_udp@dns-test-service.dns-1121.svc from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:43:03.959: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1121.svc from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:43:03.985: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:43:03.988: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:43:03.990: INFO: Unable to read jessie_udp@dns-test-service.dns-1121 from pod 
dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:43:03.992: INFO: Unable to read jessie_tcp@dns-test-service.dns-1121 from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:43:03.994: INFO: Unable to read jessie_udp@dns-test-service.dns-1121.svc from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:43:03.996: INFO: Unable to read jessie_tcp@dns-test-service.dns-1121.svc from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:43:04.011: INFO: Lookups using dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1121 wheezy_tcp@dns-test-service.dns-1121 wheezy_udp@dns-test-service.dns-1121.svc wheezy_tcp@dns-test-service.dns-1121.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1121 jessie_tcp@dns-test-service.dns-1121 jessie_udp@dns-test-service.dns-1121.svc jessie_tcp@dns-test-service.dns-1121.svc] May 25 23:43:08.935: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:43:08.939: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:43:08.942: INFO: Unable to read wheezy_udp@dns-test-service.dns-1121 from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:43:08.946: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1121 from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:43:08.949: INFO: Unable to read wheezy_udp@dns-test-service.dns-1121.svc from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:43:08.952: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1121.svc from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:43:08.978: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:43:08.981: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:43:08.984: INFO: Unable to read jessie_udp@dns-test-service.dns-1121 from pod 
dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:43:08.986: INFO: Unable to read jessie_tcp@dns-test-service.dns-1121 from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:43:08.989: INFO: Unable to read jessie_udp@dns-test-service.dns-1121.svc from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:43:08.992: INFO: Unable to read jessie_tcp@dns-test-service.dns-1121.svc from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:43:09.013: INFO: Lookups using dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1121 wheezy_tcp@dns-test-service.dns-1121 wheezy_udp@dns-test-service.dns-1121.svc wheezy_tcp@dns-test-service.dns-1121.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1121 jessie_tcp@dns-test-service.dns-1121 jessie_udp@dns-test-service.dns-1121.svc jessie_tcp@dns-test-service.dns-1121.svc] May 25 23:43:13.936: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:43:13.940: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:43:13.944: INFO: Unable to read wheezy_udp@dns-test-service.dns-1121 from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:43:13.947: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1121 from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:43:13.949: INFO: Unable to read wheezy_udp@dns-test-service.dns-1121.svc from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:43:13.952: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1121.svc from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:43:13.982: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:43:13.986: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:43:13.989: INFO: Unable to read jessie_udp@dns-test-service.dns-1121 from pod 
dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:43:13.992: INFO: Unable to read jessie_tcp@dns-test-service.dns-1121 from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:43:13.995: INFO: Unable to read jessie_udp@dns-test-service.dns-1121.svc from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:43:13.998: INFO: Unable to read jessie_tcp@dns-test-service.dns-1121.svc from pod dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9: the server could not find the requested resource (get pods dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9) May 25 23:43:14.023: INFO: Lookups using dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1121 wheezy_tcp@dns-test-service.dns-1121 wheezy_udp@dns-test-service.dns-1121.svc wheezy_tcp@dns-test-service.dns-1121.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1121 jessie_tcp@dns-test-service.dns-1121 jessie_udp@dns-test-service.dns-1121.svc jessie_tcp@dns-test-service.dns-1121.svc] May 25 23:43:19.025: INFO: DNS probes using dns-1121/dns-test-7d18b342-0c90-4eed-abea-3634f0a83aa9 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:43:19.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1121" for this suite. 
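The probe loops above lean on the pod's DNS search path: a partial name such as dns-test-service.dns-1121.svc resolves because /etc/resolv.conf inside the pod lists search domains ending in svc.cluster.local and cluster.local. One probe can be reproduced by hand (the dnsutils image name is an assumption; any image that ships dig works):

    kubectl run dns-probe --image=gcr.io/kubernetes-e2e-test-images/dnsutils:1.3 \
      --restart=Never -- sleep 3600
    # +search makes dig append the pod's resolv.conf search domains, so the
    # partially qualified name resolves over both UDP and TCP.
    kubectl exec dns-probe -- dig +notcp +noall +answer +search dns-test-service.dns-1121.svc A
    kubectl exec dns-probe -- dig +tcp +noall +answer +search dns-test-service.dns-1121.svc A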
• [SLOW TEST:38.018 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":288,"completed":18,"skipped":555,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:43:19.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 23:43:19.979: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 25 23:43:20.011: INFO: Pod name sample-pod: Found 0 pods out of 1 May 25 23:43:25.015: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 25 23:43:25.015: INFO: Creating deployment "test-rolling-update-deployment" May 25 23:43:25.063: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 25 23:43:25.071: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 25 23:43:27.079: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 25 23:43:27.083: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726047005, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726047005, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726047005, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726047005, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-df7bb669b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 23:43:29.488: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726047005, 
loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726047005, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726047008, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726047005, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-df7bb669b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 23:43:31.087: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 25 23:43:31.105: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-7074 /apis/apps/v1/namespaces/deployment-7074/deployments/test-rolling-update-deployment 4316f4b8-5b7d-4a20-ae6b-8e15e64633c1 7673223 1 2020-05-25 23:43:25 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-05-25 23:43:25 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-25 23:43:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00204e978 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-25 23:43:25 +0000 UTC,LastTransitionTime:2020-05-25 23:43:25 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-df7bb669b" has successfully progressed.,LastUpdateTime:2020-05-25 23:43:29 +0000 UTC,LastTransitionTime:2020-05-25 23:43:25 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 25 23:43:31.108: INFO: New ReplicaSet "test-rolling-update-deployment-df7bb669b" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-df7bb669b deployment-7074 /apis/apps/v1/namespaces/deployment-7074/replicasets/test-rolling-update-deployment-df7bb669b ed0348dc-9d55-42d8-9086-0edceca7e503 7673212 1 2020-05-25 23:43:25 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 4316f4b8-5b7d-4a20-ae6b-8e15e64633c1 0xc00204eed0 0xc00204eed1}] [] [{kube-controller-manager Update apps/v1 2020-05-25 23:43:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4316f4b8-5b7d-4a20-ae6b-8e15e64633c1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: df7bb669b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00204ef48 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 25 23:43:31.108: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 25 23:43:31.108: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-7074 /apis/apps/v1/namespaces/deployment-7074/replicasets/test-rolling-update-controller 0c2a17f8-3341-4de0-bf0a-ccc710337d9b 7673221 2 2020-05-25 23:43:19 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 4316f4b8-5b7d-4a20-ae6b-8e15e64633c1 0xc00204edbf 0xc00204edd0}] [] [{e2e.test Update apps/v1 2020-05-25 23:43:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-25 23:43:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4316f4b8-5b7d-4a20-ae6b-8e15e64633c1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00204ee68 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 25 23:43:31.111: INFO: Pod "test-rolling-update-deployment-df7bb669b-9lkgp" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-df7bb669b-9lkgp test-rolling-update-deployment-df7bb669b- deployment-7074 /api/v1/namespaces/deployment-7074/pods/test-rolling-update-deployment-df7bb669b-9lkgp fc2f6d64-3cb0-4fbe-9cea-506f2f901648 7673211 0 2020-05-25 23:43:25 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [{apps/v1 ReplicaSet 
test-rolling-update-deployment-df7bb669b ed0348dc-9d55-42d8-9086-0edceca7e503 0xc00204f410 0xc00204f411}] [] [{kube-controller-manager Update v1 2020-05-25 23:43:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed0348dc-9d55-42d8-9086-0edceca7e503\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 23:43:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.50\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jcj8r,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jcj8r,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jcj8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountT
oken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:43:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:43:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:43:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:43:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.50,StartTime:2020-05-25 23:43:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 23:43:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://526d6a7659459ee138b9f76176fe2a62dc7ceba2888e3cb59325207e1ea4b7d8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.50,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:43:31.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7074" for this suite. 
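For reference, the rolling update exercised above can be reproduced by hand with a Deployment assembled from the spec dumped in the log (same name:sample-pod labels, agnhost image, and 25%/25% rolling-update strategy); this is a minimal sketch, not the exact manifest the suite generates:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: agnhost
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
EOF
kubectl rollout status deployment/test-rolling-update-deployment

Because the pre-created "test-rolling-update-controller" replica set carries a matching name:sample-pod label, the Deployment adopts it as its old ReplicaSet and scales it to zero during the rollout, which is exactly the ownership chain visible in the object dumps above.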
• [SLOW TEST:11.189 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":19,"skipped":614,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:43:31.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy May 25 23:43:31.207: INFO: Asynchronously running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix549663969/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:43:31.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9101" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":288,"completed":20,"skipped":620,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:43:31.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller May 25 23:43:31.375: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7187' May 25 23:43:32.160: INFO: stderr: "" May 25 23:43:32.160: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
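A note on the --unix-socket proxy check that passed above: it boils down to serving the API proxy on a local Unix socket and fetching /api/ through it. Roughly, with an illustrative socket path:

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
sleep 1   # give the proxy a moment to create the socket
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/

The test only asserts that the /api/ discovery document comes back over the socket, which is why it completes in well under a second (23:43:31.207 to 23:43:31.278 above).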
May 25 23:43:32.160: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7187' May 25 23:43:32.308: INFO: stderr: "" May 25 23:43:32.308: INFO: stdout: "update-demo-nautilus-pn6tc update-demo-nautilus-znr48 " May 25 23:43:32.308: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pn6tc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7187' May 25 23:43:32.403: INFO: stderr: "" May 25 23:43:32.403: INFO: stdout: "" May 25 23:43:32.403: INFO: update-demo-nautilus-pn6tc is created but not running May 25 23:43:37.403: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7187' May 25 23:43:37.491: INFO: stderr: "" May 25 23:43:37.491: INFO: stdout: "update-demo-nautilus-pn6tc update-demo-nautilus-znr48 " May 25 23:43:37.491: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pn6tc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7187' May 25 23:43:37.587: INFO: stderr: "" May 25 23:43:37.587: INFO: stdout: "true" May 25 23:43:37.587: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pn6tc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7187' May 25 23:43:37.691: INFO: stderr: "" May 25 23:43:37.691: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 25 23:43:37.691: INFO: validating pod update-demo-nautilus-pn6tc May 25 23:43:37.695: INFO: got data: { "image": "nautilus.jpg" } May 25 23:43:37.695: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 25 23:43:37.695: INFO: update-demo-nautilus-pn6tc is verified up and running May 25 23:43:37.695: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-znr48 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7187' May 25 23:43:37.807: INFO: stderr: "" May 25 23:43:37.807: INFO: stdout: "true" May 25 23:43:37.807: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-znr48 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7187' May 25 23:43:37.929: INFO: stderr: "" May 25 23:43:37.929: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 25 23:43:37.929: INFO: validating pod update-demo-nautilus-znr48 May 25 23:43:37.939: INFO: got data: { "image": "nautilus.jpg" } May 25 23:43:37.939: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 25 23:43:37.939: INFO: update-demo-nautilus-znr48 is verified up and running STEP: scaling down the replication controller May 25 23:43:37.941: INFO: scanned /root for discovery docs: May 25 23:43:37.941: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7187' May 25 23:43:39.368: INFO: stderr: "" May 25 23:43:39.368: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 25 23:43:39.368: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7187' May 25 23:43:39.491: INFO: stderr: "" May 25 23:43:39.491: INFO: stdout: "update-demo-nautilus-pn6tc update-demo-nautilus-znr48 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 25 23:43:44.492: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7187' May 25 23:43:44.591: INFO: stderr: "" May 25 23:43:44.591: INFO: stdout: "update-demo-nautilus-pn6tc update-demo-nautilus-znr48 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 25 23:43:49.592: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7187' May 25 23:43:49.685: INFO: stderr: "" May 25 23:43:49.685: INFO: stdout: "update-demo-nautilus-znr48 " May 25 23:43:49.685: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-znr48 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7187' May 25 23:43:49.788: INFO: stderr: "" May 25 23:43:49.788: INFO: stdout: "true" May 25 23:43:49.788: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-znr48 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7187' May 25 23:43:49.890: INFO: stderr: "" May 25 23:43:49.890: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 25 23:43:49.890: INFO: validating pod update-demo-nautilus-znr48 May 25 23:43:49.893: INFO: got data: { "image": "nautilus.jpg" } May 25 23:43:49.893: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 25 23:43:49.893: INFO: update-demo-nautilus-znr48 is verified up and running STEP: scaling up the replication controller May 25 23:43:49.895: INFO: scanned /root for discovery docs: May 25 23:43:49.895: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7187' May 25 23:43:51.116: INFO: stderr: "" May 25 23:43:51.116: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 25 23:43:51.116: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7187' May 25 23:43:51.247: INFO: stderr: "" May 25 23:43:51.247: INFO: stdout: "update-demo-nautilus-7mqsh update-demo-nautilus-znr48 " May 25 23:43:51.247: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7mqsh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7187' May 25 23:43:51.347: INFO: stderr: "" May 25 23:43:51.347: INFO: stdout: "" May 25 23:43:51.347: INFO: update-demo-nautilus-7mqsh is created but not running May 25 23:43:56.347: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7187' May 25 23:43:56.476: INFO: stderr: "" May 25 23:43:56.476: INFO: stdout: "update-demo-nautilus-7mqsh update-demo-nautilus-znr48 " May 25 23:43:56.476: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7mqsh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7187' May 25 23:43:56.563: INFO: stderr: "" May 25 23:43:56.563: INFO: stdout: "true" May 25 23:43:56.563: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7mqsh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7187' May 25 23:43:56.658: INFO: stderr: "" May 25 23:43:56.658: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 25 23:43:56.658: INFO: validating pod update-demo-nautilus-7mqsh May 25 23:43:56.662: INFO: got data: { "image": "nautilus.jpg" } May 25 23:43:56.662: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 25 23:43:56.662: INFO: update-demo-nautilus-7mqsh is verified up and running May 25 23:43:56.662: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-znr48 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7187' May 25 23:43:56.753: INFO: stderr: "" May 25 23:43:56.753: INFO: stdout: "true" May 25 23:43:56.753: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-znr48 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7187' May 25 23:43:56.848: INFO: stderr: "" May 25 23:43:56.848: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 25 23:43:56.848: INFO: validating pod update-demo-nautilus-znr48 May 25 23:43:56.851: INFO: got data: { "image": "nautilus.jpg" } May 25 23:43:56.851: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 25 23:43:56.851: INFO: update-demo-nautilus-znr48 is verified up and running STEP: using delete to clean up resources May 25 23:43:56.851: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7187' May 25 23:43:56.956: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 25 23:43:56.956: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 25 23:43:56.956: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7187' May 25 23:43:57.058: INFO: stderr: "No resources found in kubectl-7187 namespace.\n" May 25 23:43:57.058: INFO: stdout: "" May 25 23:43:57.058: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7187 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 25 23:43:57.167: INFO: stderr: "" May 25 23:43:57.167: INFO: stdout: "update-demo-nautilus-7mqsh\nupdate-demo-nautilus-znr48\n" May 25 23:43:57.668: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7187' May 25 23:43:57.773: INFO: stderr: "No resources found in kubectl-7187 namespace.\n" May 25 23:43:57.773: INFO: stdout: "" May 25 23:43:57.773: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7187 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 25 23:43:57.864: INFO: stderr: "" May 25 23:43:57.864: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:43:57.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7187" for this suite. 
• [SLOW TEST:26.555 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":288,"completed":21,"skipped":628,"failed":0} [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:43:57.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-a8b21e02-209b-4d6e-bbd2-a6416bd57811 STEP: Creating a pod to test consume configMaps May 25 23:43:58.322: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-80ab3e72-4d8b-4abc-80b5-259dec20d396" in namespace "projected-1754" to be "Succeeded or Failed" May 25 23:43:58.361: INFO: Pod "pod-projected-configmaps-80ab3e72-4d8b-4abc-80b5-259dec20d396": Phase="Pending", Reason="", readiness=false. Elapsed: 38.823194ms May 25 23:44:00.364: INFO: Pod "pod-projected-configmaps-80ab3e72-4d8b-4abc-80b5-259dec20d396": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041979943s May 25 23:44:02.375: INFO: Pod "pod-projected-configmaps-80ab3e72-4d8b-4abc-80b5-259dec20d396": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052174056s STEP: Saw pod success May 25 23:44:02.375: INFO: Pod "pod-projected-configmaps-80ab3e72-4d8b-4abc-80b5-259dec20d396" satisfied condition "Succeeded or Failed" May 25 23:44:02.378: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-80ab3e72-4d8b-4abc-80b5-259dec20d396 container projected-configmap-volume-test: STEP: delete the pod May 25 23:44:02.602: INFO: Waiting for pod pod-projected-configmaps-80ab3e72-4d8b-4abc-80b5-259dec20d396 to disappear May 25 23:44:02.608: INFO: Pod pod-projected-configmaps-80ab3e72-4d8b-4abc-80b5-259dec20d396 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:44:02.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1754" for this suite. 
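The "volume with mappings" variant above mounts a ConfigMap through a projected volume and remaps its key to a different file name via items. A minimal sketch with illustrative names and key (the suite generates its own):

kubectl create configmap projected-configmap-test --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test
          items:
          - key: data-1
            path: path/to/data-1
EOF

The pod runs to completion and the suite reads its logs, which is the "Succeeded or Failed" wait followed by the log fetch seen above.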
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":22,"skipped":628,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:44:02.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 25 23:44:07.866: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:44:07.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-8116" for this suite. • [SLOW TEST:5.407 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":288,"completed":23,"skipped":662,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:44:08.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 23:46:08.287: INFO: Deleting pod "var-expansion-b55acc09-f529-4e7d-a922-fe2b484b2423" in namespace "var-expansion-3318" May 25 23:46:08.292: INFO: Wait up to 5m0s for pod "var-expansion-b55acc09-f529-4e7d-a922-fe2b484b2423" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:46:10.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3318" for this suite. 
• [SLOW TEST:122.302 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":288,"completed":24,"skipped":670,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:46:10.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 23:46:11.258: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 23:46:13.369: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726047171, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726047171, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726047171, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726047171, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 23:46:15.423: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726047171, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726047171, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726047171, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726047171, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 23:46:18.435: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:46:18.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6424" for this suite. STEP: Destroying namespace "webhook-6424-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.246 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":288,"completed":25,"skipped":670,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:46:18.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:46:34.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4109" for this suite. • [SLOW TEST:16.309 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":288,"completed":26,"skipped":673,"failed":0} SSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:46:34.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:46:40.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7603" for this suite. 
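The concurrent-watch check above can be approximated from outside the suite by opening several watch streams against the same resource and comparing event order; through kubectl proxy, for example:

kubectl proxy --port=8001 &
sleep 1   # give the proxy a moment to bind
curl 'http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=1&resourceVersion=0' &
curl 'http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=1&resourceVersion=0'

The e2e test is stricter than this sketch: it starts one watch per resourceVersion observed from a background writer and asserts every stream reports the versions in the same order, whereas resourceVersion=0 here is only a convenient starting point.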
• [SLOW TEST:5.507 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":288,"completed":27,"skipped":677,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:46:40.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 23:46:40.651: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-692e4744-88a8-4305-8527-7d91fad47c93" in namespace "security-context-test-1112" to be "Succeeded or Failed" May 25 23:46:40.654: INFO: Pod "busybox-privileged-false-692e4744-88a8-4305-8527-7d91fad47c93": Phase="Pending", Reason="", readiness=false. Elapsed: 3.598347ms May 25 23:46:42.679: INFO: Pod "busybox-privileged-false-692e4744-88a8-4305-8527-7d91fad47c93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028408751s May 25 23:46:44.683: INFO: Pod "busybox-privileged-false-692e4744-88a8-4305-8527-7d91fad47c93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032740999s May 25 23:46:44.683: INFO: Pod "busybox-privileged-false-692e4744-88a8-4305-8527-7d91fad47c93" satisfied condition "Succeeded or Failed" May 25 23:46:44.702: INFO: Got logs for pod "busybox-privileged-false-692e4744-88a8-4305-8527-7d91fad47c93": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:46:44.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1112" for this suite. 
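The unprivileged-container check above hinges on a network-admin operation failing without privileged mode; the logged "ip: RTNETLINK answers: Operation not permitted" is consistent with something like the following sketch (the exact command is an assumption; the securityContext is the point):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-privileged-false-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "ip link add dummy0 type dummy || true"]
    securityContext:
      privileged: false
EOF
kubectl logs busybox-privileged-false-demo

With privileged: false the container lacks CAP_NET_ADMIN, so the link creation is denied, but the shell still exits 0, matching the pod's "Succeeded" phase in the log.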
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":28,"skipped":687,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:46:44.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-03ed8655-70bf-4e55-ae5d-d0797704eab5 STEP: Creating a pod to test consume secrets May 25 23:46:45.007: INFO: Waiting up to 5m0s for pod "pod-secrets-d560b068-4f17-4cf9-8f16-48fb95f4a171" in namespace "secrets-5059" to be "Succeeded or Failed" May 25 23:46:45.012: INFO: Pod "pod-secrets-d560b068-4f17-4cf9-8f16-48fb95f4a171": Phase="Pending", Reason="", readiness=false. Elapsed: 5.553236ms May 25 23:46:47.017: INFO: Pod "pod-secrets-d560b068-4f17-4cf9-8f16-48fb95f4a171": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00968082s May 25 23:46:49.021: INFO: Pod "pod-secrets-d560b068-4f17-4cf9-8f16-48fb95f4a171": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013697599s STEP: Saw pod success May 25 23:46:49.021: INFO: Pod "pod-secrets-d560b068-4f17-4cf9-8f16-48fb95f4a171" satisfied condition "Succeeded or Failed" May 25 23:46:49.023: INFO: Trying to get logs from node latest-worker pod pod-secrets-d560b068-4f17-4cf9-8f16-48fb95f4a171 container secret-volume-test: STEP: delete the pod May 25 23:46:49.081: INFO: Waiting for pod pod-secrets-d560b068-4f17-4cf9-8f16-48fb95f4a171 to disappear May 25 23:46:49.148: INFO: Pod pod-secrets-d560b068-4f17-4cf9-8f16-48fb95f4a171 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:46:49.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5059" for this suite. STEP: Destroying namespace "secret-namespace-3107" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":288,"completed":29,"skipped":699,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:46:49.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange May 25 23:46:49.289: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values May 25 23:46:49.330: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 25 23:46:49.330: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange May 25 23:46:49.343: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 25 23:46:49.343: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange May 25 23:46:49.434: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] May 25 23:46:49.434: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted May 25 23:46:57.230: INFO: 
limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:46:57.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-5380" for this suite. • [SLOW TEST:8.091 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":288,"completed":30,"skipped":765,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:46:57.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-d74917ca-a4d0-4f48-9deb-6c78508b7135 STEP: Creating a pod to test consume secrets May 25 23:46:57.430: INFO: Waiting up to 5m0s for pod "pod-secrets-5218171f-7607-4193-91c5-872b07162788" in namespace "secrets-3612" to be "Succeeded or Failed" May 25 23:46:57.433: INFO: Pod "pod-secrets-5218171f-7607-4193-91c5-872b07162788": Phase="Pending", Reason="", readiness=false. Elapsed: 3.846509ms May 25 23:46:59.436: INFO: Pod "pod-secrets-5218171f-7607-4193-91c5-872b07162788": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006850426s May 25 23:47:01.440: INFO: Pod "pod-secrets-5218171f-7607-4193-91c5-872b07162788": Phase="Running", Reason="", readiness=true. Elapsed: 4.010498803s May 25 23:47:03.460: INFO: Pod "pod-secrets-5218171f-7607-4193-91c5-872b07162788": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030671238s STEP: Saw pod success May 25 23:47:03.460: INFO: Pod "pod-secrets-5218171f-7607-4193-91c5-872b07162788" satisfied condition "Succeeded or Failed" May 25 23:47:03.463: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-5218171f-7607-4193-91c5-872b07162788 container secret-volume-test: STEP: delete the pod May 25 23:47:03.624: INFO: Waiting for pod pod-secrets-5218171f-7607-4193-91c5-872b07162788 to disappear May 25 23:47:03.661: INFO: Pod pod-secrets-5218171f-7607-4193-91c5-872b07162788 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:47:03.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3612" for this suite. 
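Both Secrets volume tests above follow the same pattern: mount a Secret as a volume into a short-lived pod, let it print the file, and verify the logs. Minimally, with illustrative names:

kubectl create secret generic secret-test --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
EOF

The cross-namespace variant earlier additionally created a same-named Secret in a second namespace (the "secret-namespace-3107" destroyed above) to prove the mount resolves the Secret from the pod's own namespace only.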
• [SLOW TEST:6.421 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":31,"skipped":769,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:47:03.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-95hk STEP: Creating a pod to test atomic-volume-subpath May 25 23:47:04.221: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-95hk" in namespace "subpath-9811" to be "Succeeded or Failed" May 25 23:47:04.496: INFO: Pod "pod-subpath-test-secret-95hk": Phase="Pending", Reason="", readiness=false. Elapsed: 274.916637ms May 25 23:47:06.562: INFO: Pod "pod-subpath-test-secret-95hk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.340998866s May 25 23:47:08.567: INFO: Pod "pod-subpath-test-secret-95hk": Phase="Running", Reason="", readiness=true. Elapsed: 4.345464454s May 25 23:47:10.571: INFO: Pod "pod-subpath-test-secret-95hk": Phase="Running", Reason="", readiness=true. Elapsed: 6.349961427s May 25 23:47:12.576: INFO: Pod "pod-subpath-test-secret-95hk": Phase="Running", Reason="", readiness=true. Elapsed: 8.355029873s May 25 23:47:14.581: INFO: Pod "pod-subpath-test-secret-95hk": Phase="Running", Reason="", readiness=true. Elapsed: 10.359918564s May 25 23:47:16.586: INFO: Pod "pod-subpath-test-secret-95hk": Phase="Running", Reason="", readiness=true. Elapsed: 12.364475562s May 25 23:47:18.591: INFO: Pod "pod-subpath-test-secret-95hk": Phase="Running", Reason="", readiness=true. Elapsed: 14.369371839s May 25 23:47:20.595: INFO: Pod "pod-subpath-test-secret-95hk": Phase="Running", Reason="", readiness=true. Elapsed: 16.373576901s May 25 23:47:22.600: INFO: Pod "pod-subpath-test-secret-95hk": Phase="Running", Reason="", readiness=true. Elapsed: 18.378544953s May 25 23:47:24.627: INFO: Pod "pod-subpath-test-secret-95hk": Phase="Running", Reason="", readiness=true. Elapsed: 20.406084578s May 25 23:47:26.633: INFO: Pod "pod-subpath-test-secret-95hk": Phase="Running", Reason="", readiness=true. Elapsed: 22.411142126s May 25 23:47:28.638: INFO: Pod "pod-subpath-test-secret-95hk": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.416095543s STEP: Saw pod success May 25 23:47:28.638: INFO: Pod "pod-subpath-test-secret-95hk" satisfied condition "Succeeded or Failed" May 25 23:47:28.640: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-secret-95hk container test-container-subpath-secret-95hk: STEP: delete the pod May 25 23:47:28.687: INFO: Waiting for pod pod-subpath-test-secret-95hk to disappear May 25 23:47:28.703: INFO: Pod pod-subpath-test-secret-95hk no longer exists STEP: Deleting pod pod-subpath-test-secret-95hk May 25 23:47:28.703: INFO: Deleting pod "pod-subpath-test-secret-95hk" in namespace "subpath-9811" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:47:28.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9811" for this suite. • [SLOW TEST:25.111 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":288,"completed":32,"skipped":792,"failed":0} S ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:47:28.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:47:28.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3749" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":288,"completed":33,"skipped":793,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:47:28.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-20f84b69-2d5a-4c31-86a3-3f58fd128b19 in namespace container-probe-6417 May 25 23:47:33.049: INFO: Started pod liveness-20f84b69-2d5a-4c31-86a3-3f58fd128b19 in namespace container-probe-6417 STEP: checking the pod's current state and verifying that restartCount is present May 25 23:47:33.052: INFO: Initial restart count of pod liveness-20f84b69-2d5a-4c31-86a3-3f58fd128b19 is 0 May 25 23:47:51.093: INFO: Restart count of pod container-probe-6417/liveness-20f84b69-2d5a-4c31-86a3-3f58fd128b19 is now 1 (18.041395834s elapsed) May 25 23:48:11.136: INFO: Restart count of pod container-probe-6417/liveness-20f84b69-2d5a-4c31-86a3-3f58fd128b19 is now 2 (38.084674884s elapsed) May 25 23:48:31.181: INFO: Restart count of pod container-probe-6417/liveness-20f84b69-2d5a-4c31-86a3-3f58fd128b19 is now 3 (58.129004526s elapsed) May 25 23:48:51.224: INFO: Restart count of pod container-probe-6417/liveness-20f84b69-2d5a-4c31-86a3-3f58fd128b19 is now 4 (1m18.172809917s elapsed) May 25 23:50:01.396: INFO: Restart count of pod container-probe-6417/liveness-20f84b69-2d5a-4c31-86a3-3f58fd128b19 is now 5 (2m28.344486875s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:50:01.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6417" for this suite. 
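
A pod that reproduces the monotonically increasing restart count observed above would carry a liveness probe that eventually starts failing, so the kubelet kills and restarts the container on each failure. A hedged sketch follows; the agnhost image, its arguments, the port, and the thresholds are assumptions, and corev1.Handler is the v1.18-era field name (newer API versions call it ProbeHandler).

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// livenessPod returns a pod whose HTTP liveness probe is expected to begin
// failing after a while, driving restartCount up as the test above checks.
func livenessPod(name string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.12", // illustrative; any server that stops answering /healthz works
				Args:  []string{"liveness"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
}

The kubelet applies an exponential back-off between restarts, which is why the gap before the fifth restart in the log above is noticeably longer than the earlier, roughly 20-second intervals.
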
• [SLOW TEST:152.484 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":288,"completed":34,"skipped":813,"failed":0} SSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:50:01.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 25 23:50:01.859: INFO: Waiting up to 5m0s for pod "downward-api-ecf8b6e5-7397-4586-beed-aae526e8ca11" in namespace "downward-api-2861" to be "Succeeded or Failed" May 25 23:50:01.891: INFO: Pod "downward-api-ecf8b6e5-7397-4586-beed-aae526e8ca11": Phase="Pending", Reason="", readiness=false. Elapsed: 31.898058ms May 25 23:50:03.896: INFO: Pod "downward-api-ecf8b6e5-7397-4586-beed-aae526e8ca11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036590396s May 25 23:50:05.901: INFO: Pod "downward-api-ecf8b6e5-7397-4586-beed-aae526e8ca11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041588651s STEP: Saw pod success May 25 23:50:05.901: INFO: Pod "downward-api-ecf8b6e5-7397-4586-beed-aae526e8ca11" satisfied condition "Succeeded or Failed" May 25 23:50:05.905: INFO: Trying to get logs from node latest-worker pod downward-api-ecf8b6e5-7397-4586-beed-aae526e8ca11 container dapi-container: STEP: delete the pod May 25 23:50:06.188: INFO: Waiting for pod downward-api-ecf8b6e5-7397-4586-beed-aae526e8ca11 to disappear May 25 23:50:06.239: INFO: Pod downward-api-ecf8b6e5-7397-4586-beed-aae526e8ca11 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:50:06.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2861" for this suite. 
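
The host-IP injection verified above uses the downward API's fieldRef against status.hostIP. A minimal sketch of the relevant pod spec (the pod name, container name, and image are illustrative, not taken from the test):

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIPod injects the node's IP into the container environment via
// the downward API; a test like the one above then asserts the variable
// matches the pod's status.hostIP.
func downwardAPIPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-downward-api"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "HOST_IP",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
					},
				}},
			}},
		},
	}
}

The same fieldRef mechanism covers metadata.name, metadata.namespace, spec.nodeName, status.podIP, and similar fields.
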
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":288,"completed":35,"skipped":821,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:50:06.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium May 25 23:50:06.338: INFO: Waiting up to 5m0s for pod "pod-eb1fe9e2-fe1c-4b91-822e-9ebf95ea8948" in namespace "emptydir-4444" to be "Succeeded or Failed" May 25 23:50:06.372: INFO: Pod "pod-eb1fe9e2-fe1c-4b91-822e-9ebf95ea8948": Phase="Pending", Reason="", readiness=false. Elapsed: 34.289696ms May 25 23:50:08.376: INFO: Pod "pod-eb1fe9e2-fe1c-4b91-822e-9ebf95ea8948": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038521221s May 25 23:50:10.381: INFO: Pod "pod-eb1fe9e2-fe1c-4b91-822e-9ebf95ea8948": Phase="Running", Reason="", readiness=true. Elapsed: 4.043240205s May 25 23:50:12.386: INFO: Pod "pod-eb1fe9e2-fe1c-4b91-822e-9ebf95ea8948": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.047879104s STEP: Saw pod success May 25 23:50:12.386: INFO: Pod "pod-eb1fe9e2-fe1c-4b91-822e-9ebf95ea8948" satisfied condition "Succeeded or Failed" May 25 23:50:12.389: INFO: Trying to get logs from node latest-worker pod pod-eb1fe9e2-fe1c-4b91-822e-9ebf95ea8948 container test-container: STEP: delete the pod May 25 23:50:12.456: INFO: Waiting for pod pod-eb1fe9e2-fe1c-4b91-822e-9ebf95ea8948 to disappear May 25 23:50:12.475: INFO: Pod pod-eb1fe9e2-fe1c-4b91-822e-9ebf95ea8948 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:50:12.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4444" for this suite. 
• [SLOW TEST:6.204 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":36,"skipped":825,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:50:12.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-5008 STEP: creating a selector STEP: Creating the service pods in kubernetes May 25 23:50:12.539: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 25 23:50:12.615: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 25 23:50:14.619: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 25 23:50:16.619: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 23:50:18.619: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 23:50:20.619: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 23:50:22.619: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 23:50:24.619: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 23:50:26.619: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 23:50:28.619: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 23:50:30.619: INFO: The status of Pod netserver-0 is Running (Ready = true) May 25 23:50:30.626: INFO: The status of Pod netserver-1 is Running (Ready = false) May 25 23:50:32.630: INFO: The status of Pod netserver-1 is Running (Ready = false) May 25 23:50:34.630: INFO: The status of Pod netserver-1 is Running (Ready = false) May 25 23:50:36.630: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 25 23:50:40.662: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.63:8080/dial?request=hostname&protocol=http&host=10.244.1.62&port=8080&tries=1'] Namespace:pod-network-test-5008 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 23:50:40.662: INFO: >>> kubeConfig: /root/.kube/config I0525 23:50:40.711906 7 log.go:172] (0xc002b8c370) (0xc001bae3c0) Create stream I0525 23:50:40.711944 7 log.go:172] (0xc002b8c370) (0xc001bae3c0) Stream added, broadcasting: 1 I0525 23:50:40.715258 7 log.go:172] (0xc002b8c370) Reply frame received for 1 I0525 23:50:40.715327 
7 log.go:172] (0xc002b8c370) (0xc001bae500) Create stream I0525 23:50:40.715358 7 log.go:172] (0xc002b8c370) (0xc001bae500) Stream added, broadcasting: 3 I0525 23:50:40.716358 7 log.go:172] (0xc002b8c370) Reply frame received for 3 I0525 23:50:40.716389 7 log.go:172] (0xc002b8c370) (0xc0024fa000) Create stream I0525 23:50:40.716400 7 log.go:172] (0xc002b8c370) (0xc0024fa000) Stream added, broadcasting: 5 I0525 23:50:40.717486 7 log.go:172] (0xc002b8c370) Reply frame received for 5 I0525 23:50:40.828110 7 log.go:172] (0xc002b8c370) Data frame received for 3 I0525 23:50:40.828144 7 log.go:172] (0xc001bae500) (3) Data frame handling I0525 23:50:40.828162 7 log.go:172] (0xc001bae500) (3) Data frame sent I0525 23:50:40.828785 7 log.go:172] (0xc002b8c370) Data frame received for 5 I0525 23:50:40.828835 7 log.go:172] (0xc0024fa000) (5) Data frame handling I0525 23:50:40.828880 7 log.go:172] (0xc002b8c370) Data frame received for 3 I0525 23:50:40.828903 7 log.go:172] (0xc001bae500) (3) Data frame handling I0525 23:50:40.831247 7 log.go:172] (0xc002b8c370) Data frame received for 1 I0525 23:50:40.831270 7 log.go:172] (0xc001bae3c0) (1) Data frame handling I0525 23:50:40.831288 7 log.go:172] (0xc001bae3c0) (1) Data frame sent I0525 23:50:40.831313 7 log.go:172] (0xc002b8c370) (0xc001bae3c0) Stream removed, broadcasting: 1 I0525 23:50:40.831330 7 log.go:172] (0xc002b8c370) Go away received I0525 23:50:40.831748 7 log.go:172] (0xc002b8c370) (0xc001bae3c0) Stream removed, broadcasting: 1 I0525 23:50:40.831781 7 log.go:172] (0xc002b8c370) (0xc001bae500) Stream removed, broadcasting: 3 I0525 23:50:40.831803 7 log.go:172] (0xc002b8c370) (0xc0024fa000) Stream removed, broadcasting: 5 May 25 23:50:40.831: INFO: Waiting for responses: map[] May 25 23:50:40.835: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.63:8080/dial?request=hostname&protocol=http&host=10.244.2.61&port=8080&tries=1'] Namespace:pod-network-test-5008 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 23:50:40.835: INFO: >>> kubeConfig: /root/.kube/config I0525 23:50:40.867833 7 log.go:172] (0xc002fe22c0) (0xc002482640) Create stream I0525 23:50:40.867859 7 log.go:172] (0xc002fe22c0) (0xc002482640) Stream added, broadcasting: 1 I0525 23:50:40.871157 7 log.go:172] (0xc002fe22c0) Reply frame received for 1 I0525 23:50:40.871200 7 log.go:172] (0xc002fe22c0) (0xc001baea00) Create stream I0525 23:50:40.871217 7 log.go:172] (0xc002fe22c0) (0xc001baea00) Stream added, broadcasting: 3 I0525 23:50:40.872370 7 log.go:172] (0xc002fe22c0) Reply frame received for 3 I0525 23:50:40.872427 7 log.go:172] (0xc002fe22c0) (0xc0024826e0) Create stream I0525 23:50:40.872455 7 log.go:172] (0xc002fe22c0) (0xc0024826e0) Stream added, broadcasting: 5 I0525 23:50:40.873784 7 log.go:172] (0xc002fe22c0) Reply frame received for 5 I0525 23:50:40.939085 7 log.go:172] (0xc002fe22c0) Data frame received for 3 I0525 23:50:40.939111 7 log.go:172] (0xc001baea00) (3) Data frame handling I0525 23:50:40.939127 7 log.go:172] (0xc001baea00) (3) Data frame sent I0525 23:50:40.939590 7 log.go:172] (0xc002fe22c0) Data frame received for 5 I0525 23:50:40.939637 7 log.go:172] (0xc0024826e0) (5) Data frame handling I0525 23:50:40.939764 7 log.go:172] (0xc002fe22c0) Data frame received for 3 I0525 23:50:40.939788 7 log.go:172] (0xc001baea00) (3) Data frame handling I0525 23:50:40.941854 7 log.go:172] (0xc002fe22c0) Data frame received for 1 I0525 23:50:40.941904 7 log.go:172] 
(0xc002482640) (1) Data frame handling I0525 23:50:40.941918 7 log.go:172] (0xc002482640) (1) Data frame sent I0525 23:50:40.941929 7 log.go:172] (0xc002fe22c0) (0xc002482640) Stream removed, broadcasting: 1 I0525 23:50:40.941939 7 log.go:172] (0xc002fe22c0) Go away received I0525 23:50:40.942145 7 log.go:172] (0xc002fe22c0) (0xc002482640) Stream removed, broadcasting: 1 I0525 23:50:40.942161 7 log.go:172] (0xc002fe22c0) (0xc001baea00) Stream removed, broadcasting: 3 I0525 23:50:40.942167 7 log.go:172] (0xc002fe22c0) (0xc0024826e0) Stream removed, broadcasting: 5 May 25 23:50:40.942: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:50:40.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5008" for this suite. • [SLOW TEST:28.467 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":288,"completed":37,"skipped":842,"failed":0} SS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:50:40.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 25 23:50:51.137: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9008 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 23:50:51.137: INFO: >>> kubeConfig: /root/.kube/config I0525 23:50:51.178119 7 log.go:172] (0xc002b8cb00) (0xc002a02320) Create stream I0525 23:50:51.178155 7 log.go:172] (0xc002b8cb00) (0xc002a02320) Stream added, broadcasting: 1 I0525 23:50:51.181878 7 log.go:172] (0xc002b8cb00) Reply frame received for 1 I0525 23:50:51.181957 7 log.go:172] (0xc002b8cb00) (0xc0024fa320) Create stream I0525 23:50:51.181983 7 log.go:172] (0xc002b8cb00) (0xc0024fa320) Stream added, broadcasting: 3 I0525 23:50:51.183071 7 log.go:172] (0xc002b8cb00) Reply frame received for 3 I0525 23:50:51.183129 7 log.go:172] (0xc002b8cb00) (0xc0020a8000) Create stream I0525 23:50:51.183148 7 log.go:172] (0xc002b8cb00) 
(0xc0020a8000) Stream added, broadcasting: 5 I0525 23:50:51.184110 7 log.go:172] (0xc002b8cb00) Reply frame received for 5 I0525 23:50:51.261471 7 log.go:172] (0xc002b8cb00) Data frame received for 5 I0525 23:50:51.261496 7 log.go:172] (0xc0020a8000) (5) Data frame handling I0525 23:50:51.261517 7 log.go:172] (0xc002b8cb00) Data frame received for 3 I0525 23:50:51.261524 7 log.go:172] (0xc0024fa320) (3) Data frame handling I0525 23:50:51.261532 7 log.go:172] (0xc0024fa320) (3) Data frame sent I0525 23:50:51.261540 7 log.go:172] (0xc002b8cb00) Data frame received for 3 I0525 23:50:51.261547 7 log.go:172] (0xc0024fa320) (3) Data frame handling I0525 23:50:51.262723 7 log.go:172] (0xc002b8cb00) Data frame received for 1 I0525 23:50:51.262743 7 log.go:172] (0xc002a02320) (1) Data frame handling I0525 23:50:51.262760 7 log.go:172] (0xc002a02320) (1) Data frame sent I0525 23:50:51.262777 7 log.go:172] (0xc002b8cb00) (0xc002a02320) Stream removed, broadcasting: 1 I0525 23:50:51.262791 7 log.go:172] (0xc002b8cb00) Go away received I0525 23:50:51.262898 7 log.go:172] (0xc002b8cb00) (0xc002a02320) Stream removed, broadcasting: 1 I0525 23:50:51.262920 7 log.go:172] (0xc002b8cb00) (0xc0024fa320) Stream removed, broadcasting: 3 I0525 23:50:51.262926 7 log.go:172] (0xc002b8cb00) (0xc0020a8000) Stream removed, broadcasting: 5 May 25 23:50:51.262: INFO: Exec stderr: "" May 25 23:50:51.262: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9008 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 23:50:51.262: INFO: >>> kubeConfig: /root/.kube/config I0525 23:50:51.285624 7 log.go:172] (0xc002fe28f0) (0xc002483b80) Create stream I0525 23:50:51.285649 7 log.go:172] (0xc002fe28f0) (0xc002483b80) Stream added, broadcasting: 1 I0525 23:50:51.288347 7 log.go:172] (0xc002fe28f0) Reply frame received for 1 I0525 23:50:51.288374 7 log.go:172] (0xc002fe28f0) (0xc001b0b400) Create stream I0525 23:50:51.288387 7 log.go:172] (0xc002fe28f0) (0xc001b0b400) Stream added, broadcasting: 3 I0525 23:50:51.289469 7 log.go:172] (0xc002fe28f0) Reply frame received for 3 I0525 23:50:51.289510 7 log.go:172] (0xc002fe28f0) (0xc001b0b4a0) Create stream I0525 23:50:51.289525 7 log.go:172] (0xc002fe28f0) (0xc001b0b4a0) Stream added, broadcasting: 5 I0525 23:50:51.290527 7 log.go:172] (0xc002fe28f0) Reply frame received for 5 I0525 23:50:51.351634 7 log.go:172] (0xc002fe28f0) Data frame received for 3 I0525 23:50:51.351674 7 log.go:172] (0xc001b0b400) (3) Data frame handling I0525 23:50:51.351695 7 log.go:172] (0xc001b0b400) (3) Data frame sent I0525 23:50:51.351711 7 log.go:172] (0xc002fe28f0) Data frame received for 3 I0525 23:50:51.351723 7 log.go:172] (0xc001b0b400) (3) Data frame handling I0525 23:50:51.351774 7 log.go:172] (0xc002fe28f0) Data frame received for 5 I0525 23:50:51.351818 7 log.go:172] (0xc001b0b4a0) (5) Data frame handling I0525 23:50:51.352807 7 log.go:172] (0xc002fe28f0) Data frame received for 1 I0525 23:50:51.352824 7 log.go:172] (0xc002483b80) (1) Data frame handling I0525 23:50:51.352837 7 log.go:172] (0xc002483b80) (1) Data frame sent I0525 23:50:51.352972 7 log.go:172] (0xc002fe28f0) (0xc002483b80) Stream removed, broadcasting: 1 I0525 23:50:51.353024 7 log.go:172] (0xc002fe28f0) (0xc002483b80) Stream removed, broadcasting: 1 I0525 23:50:51.353034 7 log.go:172] (0xc002fe28f0) (0xc001b0b400) Stream removed, broadcasting: 3 I0525 23:50:51.353039 7 log.go:172] (0xc002fe28f0) (0xc001b0b4a0) Stream removed, 
broadcasting: 5 May 25 23:50:51.353: INFO: Exec stderr: "" May 25 23:50:51.353: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9008 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 23:50:51.353: INFO: >>> kubeConfig: /root/.kube/config I0525 23:50:51.353421 7 log.go:172] (0xc002fe28f0) Go away received I0525 23:50:51.387263 7 log.go:172] (0xc002a711e0) (0xc0024fa820) Create stream I0525 23:50:51.387292 7 log.go:172] (0xc002a711e0) (0xc0024fa820) Stream added, broadcasting: 1 I0525 23:50:51.390656 7 log.go:172] (0xc002a711e0) Reply frame received for 1 I0525 23:50:51.390705 7 log.go:172] (0xc002a711e0) (0xc0020a80a0) Create stream I0525 23:50:51.390725 7 log.go:172] (0xc002a711e0) (0xc0020a80a0) Stream added, broadcasting: 3 I0525 23:50:51.391660 7 log.go:172] (0xc002a711e0) Reply frame received for 3 I0525 23:50:51.391704 7 log.go:172] (0xc002a711e0) (0xc001b0b540) Create stream I0525 23:50:51.391721 7 log.go:172] (0xc002a711e0) (0xc001b0b540) Stream added, broadcasting: 5 I0525 23:50:51.392605 7 log.go:172] (0xc002a711e0) Reply frame received for 5 I0525 23:50:51.462869 7 log.go:172] (0xc002a711e0) Data frame received for 5 I0525 23:50:51.462896 7 log.go:172] (0xc001b0b540) (5) Data frame handling I0525 23:50:51.462913 7 log.go:172] (0xc002a711e0) Data frame received for 3 I0525 23:50:51.462921 7 log.go:172] (0xc0020a80a0) (3) Data frame handling I0525 23:50:51.462928 7 log.go:172] (0xc0020a80a0) (3) Data frame sent I0525 23:50:51.463320 7 log.go:172] (0xc002a711e0) Data frame received for 3 I0525 23:50:51.463342 7 log.go:172] (0xc0020a80a0) (3) Data frame handling I0525 23:50:51.464732 7 log.go:172] (0xc002a711e0) Data frame received for 1 I0525 23:50:51.464746 7 log.go:172] (0xc0024fa820) (1) Data frame handling I0525 23:50:51.464759 7 log.go:172] (0xc0024fa820) (1) Data frame sent I0525 23:50:51.464782 7 log.go:172] (0xc002a711e0) (0xc0024fa820) Stream removed, broadcasting: 1 I0525 23:50:51.464818 7 log.go:172] (0xc002a711e0) Go away received I0525 23:50:51.464874 7 log.go:172] (0xc002a711e0) (0xc0024fa820) Stream removed, broadcasting: 1 I0525 23:50:51.464893 7 log.go:172] (0xc002a711e0) (0xc0020a80a0) Stream removed, broadcasting: 3 I0525 23:50:51.464905 7 log.go:172] (0xc002a711e0) (0xc001b0b540) Stream removed, broadcasting: 5 May 25 23:50:51.464: INFO: Exec stderr: "" May 25 23:50:51.464: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9008 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 23:50:51.464: INFO: >>> kubeConfig: /root/.kube/config I0525 23:50:51.492538 7 log.go:172] (0xc00300ea50) (0xc001b0bb80) Create stream I0525 23:50:51.492684 7 log.go:172] (0xc00300ea50) (0xc001b0bb80) Stream added, broadcasting: 1 I0525 23:50:51.496373 7 log.go:172] (0xc00300ea50) Reply frame received for 1 I0525 23:50:51.496412 7 log.go:172] (0xc00300ea50) (0xc0024fa960) Create stream I0525 23:50:51.496425 7 log.go:172] (0xc00300ea50) (0xc0024fa960) Stream added, broadcasting: 3 I0525 23:50:51.498232 7 log.go:172] (0xc00300ea50) Reply frame received for 3 I0525 23:50:51.498294 7 log.go:172] (0xc00300ea50) (0xc0020a8140) Create stream I0525 23:50:51.498308 7 log.go:172] (0xc00300ea50) (0xc0020a8140) Stream added, broadcasting: 5 I0525 23:50:51.499091 7 log.go:172] (0xc00300ea50) Reply frame received for 5 I0525 23:50:51.578292 7 log.go:172] (0xc00300ea50) Data frame received for 3 
I0525 23:50:51.578322 7 log.go:172] (0xc0024fa960) (3) Data frame handling I0525 23:50:51.578340 7 log.go:172] (0xc0024fa960) (3) Data frame sent I0525 23:50:51.578351 7 log.go:172] (0xc00300ea50) Data frame received for 3 I0525 23:50:51.578360 7 log.go:172] (0xc0024fa960) (3) Data frame handling I0525 23:50:51.578377 7 log.go:172] (0xc00300ea50) Data frame received for 5 I0525 23:50:51.578387 7 log.go:172] (0xc0020a8140) (5) Data frame handling I0525 23:50:51.579583 7 log.go:172] (0xc00300ea50) Data frame received for 1 I0525 23:50:51.579598 7 log.go:172] (0xc001b0bb80) (1) Data frame handling I0525 23:50:51.579614 7 log.go:172] (0xc001b0bb80) (1) Data frame sent I0525 23:50:51.579634 7 log.go:172] (0xc00300ea50) (0xc001b0bb80) Stream removed, broadcasting: 1 I0525 23:50:51.579712 7 log.go:172] (0xc00300ea50) Go away received I0525 23:50:51.579739 7 log.go:172] (0xc00300ea50) (0xc001b0bb80) Stream removed, broadcasting: 1 I0525 23:50:51.579752 7 log.go:172] (0xc00300ea50) (0xc0024fa960) Stream removed, broadcasting: 3 I0525 23:50:51.579762 7 log.go:172] (0xc00300ea50) (0xc0020a8140) Stream removed, broadcasting: 5 May 25 23:50:51.579: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 25 23:50:51.579: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9008 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 23:50:51.579: INFO: >>> kubeConfig: /root/.kube/config I0525 23:50:51.607937 7 log.go:172] (0xc002a71810) (0xc0024fabe0) Create stream I0525 23:50:51.607958 7 log.go:172] (0xc002a71810) (0xc0024fabe0) Stream added, broadcasting: 1 I0525 23:50:51.610775 7 log.go:172] (0xc002a71810) Reply frame received for 1 I0525 23:50:51.610819 7 log.go:172] (0xc002a71810) (0xc002a02500) Create stream I0525 23:50:51.610834 7 log.go:172] (0xc002a71810) (0xc002a02500) Stream added, broadcasting: 3 I0525 23:50:51.611725 7 log.go:172] (0xc002a71810) Reply frame received for 3 I0525 23:50:51.611768 7 log.go:172] (0xc002a71810) (0xc0020a81e0) Create stream I0525 23:50:51.611790 7 log.go:172] (0xc002a71810) (0xc0020a81e0) Stream added, broadcasting: 5 I0525 23:50:51.612701 7 log.go:172] (0xc002a71810) Reply frame received for 5 I0525 23:50:51.673462 7 log.go:172] (0xc002a71810) Data frame received for 5 I0525 23:50:51.673484 7 log.go:172] (0xc0020a81e0) (5) Data frame handling I0525 23:50:51.673539 7 log.go:172] (0xc002a71810) Data frame received for 3 I0525 23:50:51.673583 7 log.go:172] (0xc002a02500) (3) Data frame handling I0525 23:50:51.673657 7 log.go:172] (0xc002a02500) (3) Data frame sent I0525 23:50:51.673695 7 log.go:172] (0xc002a71810) Data frame received for 3 I0525 23:50:51.673708 7 log.go:172] (0xc002a02500) (3) Data frame handling I0525 23:50:51.675325 7 log.go:172] (0xc002a71810) Data frame received for 1 I0525 23:50:51.675374 7 log.go:172] (0xc0024fabe0) (1) Data frame handling I0525 23:50:51.675405 7 log.go:172] (0xc0024fabe0) (1) Data frame sent I0525 23:50:51.675437 7 log.go:172] (0xc002a71810) (0xc0024fabe0) Stream removed, broadcasting: 1 I0525 23:50:51.675490 7 log.go:172] (0xc002a71810) Go away received I0525 23:50:51.675617 7 log.go:172] (0xc002a71810) (0xc0024fabe0) Stream removed, broadcasting: 1 I0525 23:50:51.675665 7 log.go:172] (0xc002a71810) (0xc002a02500) Stream removed, broadcasting: 3 I0525 23:50:51.675692 7 log.go:172] (0xc002a71810) (0xc0020a81e0) Stream removed, broadcasting: 5 May 25 23:50:51.675: 
INFO: Exec stderr: "" May 25 23:50:51.675: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9008 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 23:50:51.675: INFO: >>> kubeConfig: /root/.kube/config I0525 23:50:51.707320 7 log.go:172] (0xc002a71e40) (0xc0024fafa0) Create stream I0525 23:50:51.707359 7 log.go:172] (0xc002a71e40) (0xc0024fafa0) Stream added, broadcasting: 1 I0525 23:50:51.710569 7 log.go:172] (0xc002a71e40) Reply frame received for 1 I0525 23:50:51.710619 7 log.go:172] (0xc002a71e40) (0xc002a025a0) Create stream I0525 23:50:51.710636 7 log.go:172] (0xc002a71e40) (0xc002a025a0) Stream added, broadcasting: 3 I0525 23:50:51.711588 7 log.go:172] (0xc002a71e40) Reply frame received for 3 I0525 23:50:51.711626 7 log.go:172] (0xc002a71e40) (0xc0020a8280) Create stream I0525 23:50:51.711641 7 log.go:172] (0xc002a71e40) (0xc0020a8280) Stream added, broadcasting: 5 I0525 23:50:51.712568 7 log.go:172] (0xc002a71e40) Reply frame received for 5 I0525 23:50:51.778176 7 log.go:172] (0xc002a71e40) Data frame received for 5 I0525 23:50:51.778384 7 log.go:172] (0xc0020a8280) (5) Data frame handling I0525 23:50:51.778424 7 log.go:172] (0xc002a71e40) Data frame received for 3 I0525 23:50:51.778468 7 log.go:172] (0xc002a025a0) (3) Data frame handling I0525 23:50:51.778595 7 log.go:172] (0xc002a025a0) (3) Data frame sent I0525 23:50:51.778610 7 log.go:172] (0xc002a71e40) Data frame received for 3 I0525 23:50:51.778623 7 log.go:172] (0xc002a025a0) (3) Data frame handling I0525 23:50:51.779855 7 log.go:172] (0xc002a71e40) Data frame received for 1 I0525 23:50:51.779878 7 log.go:172] (0xc0024fafa0) (1) Data frame handling I0525 23:50:51.779899 7 log.go:172] (0xc0024fafa0) (1) Data frame sent I0525 23:50:51.779923 7 log.go:172] (0xc002a71e40) (0xc0024fafa0) Stream removed, broadcasting: 1 I0525 23:50:51.779939 7 log.go:172] (0xc002a71e40) Go away received I0525 23:50:51.780164 7 log.go:172] (0xc002a71e40) (0xc0024fafa0) Stream removed, broadcasting: 1 I0525 23:50:51.780185 7 log.go:172] (0xc002a71e40) (0xc002a025a0) Stream removed, broadcasting: 3 I0525 23:50:51.780195 7 log.go:172] (0xc002a71e40) (0xc0020a8280) Stream removed, broadcasting: 5 May 25 23:50:51.780: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 25 23:50:51.780: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9008 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 23:50:51.780: INFO: >>> kubeConfig: /root/.kube/config I0525 23:50:51.814473 7 log.go:172] (0xc002918000) (0xc0024fb180) Create stream I0525 23:50:51.814517 7 log.go:172] (0xc002918000) (0xc0024fb180) Stream added, broadcasting: 1 I0525 23:50:51.817104 7 log.go:172] (0xc002918000) Reply frame received for 1 I0525 23:50:51.817343 7 log.go:172] (0xc002918000) (0xc001b0bcc0) Create stream I0525 23:50:51.817361 7 log.go:172] (0xc002918000) (0xc001b0bcc0) Stream added, broadcasting: 3 I0525 23:50:51.818573 7 log.go:172] (0xc002918000) Reply frame received for 3 I0525 23:50:51.818612 7 log.go:172] (0xc002918000) (0xc0020a83c0) Create stream I0525 23:50:51.818627 7 log.go:172] (0xc002918000) (0xc0020a83c0) Stream added, broadcasting: 5 I0525 23:50:51.819591 7 log.go:172] (0xc002918000) Reply frame received for 5 I0525 23:50:51.881495 7 log.go:172] (0xc002918000) Data frame received 
for 3 I0525 23:50:51.881524 7 log.go:172] (0xc001b0bcc0) (3) Data frame handling I0525 23:50:51.881531 7 log.go:172] (0xc001b0bcc0) (3) Data frame sent I0525 23:50:51.881537 7 log.go:172] (0xc002918000) Data frame received for 3 I0525 23:50:51.881541 7 log.go:172] (0xc001b0bcc0) (3) Data frame handling I0525 23:50:51.881578 7 log.go:172] (0xc002918000) Data frame received for 5 I0525 23:50:51.881596 7 log.go:172] (0xc0020a83c0) (5) Data frame handling I0525 23:50:51.883257 7 log.go:172] (0xc002918000) Data frame received for 1 I0525 23:50:51.883281 7 log.go:172] (0xc0024fb180) (1) Data frame handling I0525 23:50:51.883298 7 log.go:172] (0xc0024fb180) (1) Data frame sent I0525 23:50:51.883359 7 log.go:172] (0xc002918000) (0xc0024fb180) Stream removed, broadcasting: 1 I0525 23:50:51.883377 7 log.go:172] (0xc002918000) Go away received I0525 23:50:51.883504 7 log.go:172] (0xc002918000) (0xc0024fb180) Stream removed, broadcasting: 1 I0525 23:50:51.883546 7 log.go:172] (0xc002918000) (0xc001b0bcc0) Stream removed, broadcasting: 3 I0525 23:50:51.883567 7 log.go:172] (0xc002918000) (0xc0020a83c0) Stream removed, broadcasting: 5 May 25 23:50:51.883: INFO: Exec stderr: "" May 25 23:50:51.883: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9008 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 23:50:51.883: INFO: >>> kubeConfig: /root/.kube/config I0525 23:50:51.931548 7 log.go:172] (0xc002fe2f20) (0xc002483e00) Create stream I0525 23:50:51.931579 7 log.go:172] (0xc002fe2f20) (0xc002483e00) Stream added, broadcasting: 1 I0525 23:50:51.933970 7 log.go:172] (0xc002fe2f20) Reply frame received for 1 I0525 23:50:51.934023 7 log.go:172] (0xc002fe2f20) (0xc002a02640) Create stream I0525 23:50:51.934043 7 log.go:172] (0xc002fe2f20) (0xc002a02640) Stream added, broadcasting: 3 I0525 23:50:51.935134 7 log.go:172] (0xc002fe2f20) Reply frame received for 3 I0525 23:50:51.935161 7 log.go:172] (0xc002fe2f20) (0xc0024fb2c0) Create stream I0525 23:50:51.935182 7 log.go:172] (0xc002fe2f20) (0xc0024fb2c0) Stream added, broadcasting: 5 I0525 23:50:51.936240 7 log.go:172] (0xc002fe2f20) Reply frame received for 5 I0525 23:50:52.004843 7 log.go:172] (0xc002fe2f20) Data frame received for 3 I0525 23:50:52.004893 7 log.go:172] (0xc002a02640) (3) Data frame handling I0525 23:50:52.004909 7 log.go:172] (0xc002a02640) (3) Data frame sent I0525 23:50:52.004920 7 log.go:172] (0xc002fe2f20) Data frame received for 3 I0525 23:50:52.004928 7 log.go:172] (0xc002a02640) (3) Data frame handling I0525 23:50:52.004961 7 log.go:172] (0xc002fe2f20) Data frame received for 5 I0525 23:50:52.004972 7 log.go:172] (0xc0024fb2c0) (5) Data frame handling I0525 23:50:52.006301 7 log.go:172] (0xc002fe2f20) Data frame received for 1 I0525 23:50:52.006322 7 log.go:172] (0xc002483e00) (1) Data frame handling I0525 23:50:52.006362 7 log.go:172] (0xc002483e00) (1) Data frame sent I0525 23:50:52.006382 7 log.go:172] (0xc002fe2f20) (0xc002483e00) Stream removed, broadcasting: 1 I0525 23:50:52.006402 7 log.go:172] (0xc002fe2f20) Go away received I0525 23:50:52.006536 7 log.go:172] (0xc002fe2f20) (0xc002483e00) Stream removed, broadcasting: 1 I0525 23:50:52.006559 7 log.go:172] (0xc002fe2f20) (0xc002a02640) Stream removed, broadcasting: 3 I0525 23:50:52.006576 7 log.go:172] (0xc002fe2f20) (0xc0024fb2c0) Stream removed, broadcasting: 5 May 25 23:50:52.006: INFO: Exec stderr: "" May 25 23:50:52.006: INFO: ExecWithOptions 
{Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9008 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 23:50:52.006: INFO: >>> kubeConfig: /root/.kube/config I0525 23:50:52.034027 7 log.go:172] (0xc00300f080) (0xc002ae81e0) Create stream I0525 23:50:52.034053 7 log.go:172] (0xc00300f080) (0xc002ae81e0) Stream added, broadcasting: 1 I0525 23:50:52.036643 7 log.go:172] (0xc00300f080) Reply frame received for 1 I0525 23:50:52.036700 7 log.go:172] (0xc00300f080) (0xc002ae8280) Create stream I0525 23:50:52.036718 7 log.go:172] (0xc00300f080) (0xc002ae8280) Stream added, broadcasting: 3 I0525 23:50:52.037966 7 log.go:172] (0xc00300f080) Reply frame received for 3 I0525 23:50:52.038009 7 log.go:172] (0xc00300f080) (0xc002a026e0) Create stream I0525 23:50:52.038027 7 log.go:172] (0xc00300f080) (0xc002a026e0) Stream added, broadcasting: 5 I0525 23:50:52.039097 7 log.go:172] (0xc00300f080) Reply frame received for 5 I0525 23:50:52.110391 7 log.go:172] (0xc00300f080) Data frame received for 5 I0525 23:50:52.110439 7 log.go:172] (0xc002a026e0) (5) Data frame handling I0525 23:50:52.110480 7 log.go:172] (0xc00300f080) Data frame received for 3 I0525 23:50:52.110502 7 log.go:172] (0xc002ae8280) (3) Data frame handling I0525 23:50:52.110536 7 log.go:172] (0xc002ae8280) (3) Data frame sent I0525 23:50:52.110559 7 log.go:172] (0xc00300f080) Data frame received for 3 I0525 23:50:52.110572 7 log.go:172] (0xc002ae8280) (3) Data frame handling I0525 23:50:52.112083 7 log.go:172] (0xc00300f080) Data frame received for 1 I0525 23:50:52.112098 7 log.go:172] (0xc002ae81e0) (1) Data frame handling I0525 23:50:52.112125 7 log.go:172] (0xc002ae81e0) (1) Data frame sent I0525 23:50:52.112139 7 log.go:172] (0xc00300f080) (0xc002ae81e0) Stream removed, broadcasting: 1 I0525 23:50:52.112217 7 log.go:172] (0xc00300f080) (0xc002ae81e0) Stream removed, broadcasting: 1 I0525 23:50:52.112251 7 log.go:172] (0xc00300f080) (0xc002ae8280) Stream removed, broadcasting: 3 I0525 23:50:52.112269 7 log.go:172] (0xc00300f080) (0xc002a026e0) Stream removed, broadcasting: 5 May 25 23:50:52.112: INFO: Exec stderr: "" May 25 23:50:52.112: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9008 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 23:50:52.112: INFO: >>> kubeConfig: /root/.kube/config I0525 23:50:52.112346 7 log.go:172] (0xc00300f080) Go away received I0525 23:50:52.162496 7 log.go:172] (0xc002b8d130) (0xc002a028c0) Create stream I0525 23:50:52.162528 7 log.go:172] (0xc002b8d130) (0xc002a028c0) Stream added, broadcasting: 1 I0525 23:50:52.165363 7 log.go:172] (0xc002b8d130) Reply frame received for 1 I0525 23:50:52.165396 7 log.go:172] (0xc002b8d130) (0xc002ae8320) Create stream I0525 23:50:52.165404 7 log.go:172] (0xc002b8d130) (0xc002ae8320) Stream added, broadcasting: 3 I0525 23:50:52.166292 7 log.go:172] (0xc002b8d130) Reply frame received for 3 I0525 23:50:52.166346 7 log.go:172] (0xc002b8d130) (0xc002a02a00) Create stream I0525 23:50:52.166370 7 log.go:172] (0xc002b8d130) (0xc002a02a00) Stream added, broadcasting: 5 I0525 23:50:52.167286 7 log.go:172] (0xc002b8d130) Reply frame received for 5 I0525 23:50:52.231088 7 log.go:172] (0xc002b8d130) Data frame received for 5 I0525 23:50:52.231189 7 log.go:172] (0xc002a02a00) (5) Data frame handling I0525 23:50:52.231227 7 log.go:172] (0xc002b8d130) Data frame received 
for 3 I0525 23:50:52.231243 7 log.go:172] (0xc002ae8320) (3) Data frame handling I0525 23:50:52.231254 7 log.go:172] (0xc002ae8320) (3) Data frame sent I0525 23:50:52.231344 7 log.go:172] (0xc002b8d130) Data frame received for 3 I0525 23:50:52.231391 7 log.go:172] (0xc002ae8320) (3) Data frame handling I0525 23:50:52.232907 7 log.go:172] (0xc002b8d130) Data frame received for 1 I0525 23:50:52.232930 7 log.go:172] (0xc002a028c0) (1) Data frame handling I0525 23:50:52.232945 7 log.go:172] (0xc002a028c0) (1) Data frame sent I0525 23:50:52.232963 7 log.go:172] (0xc002b8d130) (0xc002a028c0) Stream removed, broadcasting: 1 I0525 23:50:52.233049 7 log.go:172] (0xc002b8d130) (0xc002a028c0) Stream removed, broadcasting: 1 I0525 23:50:52.233063 7 log.go:172] (0xc002b8d130) (0xc002ae8320) Stream removed, broadcasting: 3 I0525 23:50:52.233074 7 log.go:172] (0xc002b8d130) (0xc002a02a00) Stream removed, broadcasting: 5 May 25 23:50:52.233: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:50:52.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0525 23:50:52.233345 7 log.go:172] (0xc002b8d130) Go away received STEP: Destroying namespace "e2e-kubelet-etc-hosts-9008" for this suite. • [SLOW TEST:11.293 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":38,"skipped":844,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:50:52.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:50:56.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6342" for this suite. 
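
The read-only busybox check above can be approximated by setting securityContext.readOnlyRootFilesystem on the container and attempting a write, which should fail. A minimal sketch (pod name, image, and command are illustrative):

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// readOnlyRootPod attempts a write against a read-only root filesystem;
// the write is expected to fail, which is what a test like the one above
// verifies.
func readOnlyRootPod() *corev1.Pod {
	readOnly := true
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:            "busybox",
				Image:           "busybox",
				Command:         []string{"sh", "-c", "touch /file || echo write failed as expected"},
				SecurityContext: &corev1.SecurityContext{ReadOnlyRootFilesystem: &readOnly},
			}},
		},
	}
}

Writes to explicitly mounted volumes (an emptyDir, for instance) still succeed under this setting; only the container's root filesystem becomes immutable.
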
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":39,"skipped":871,"failed":0} SSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:50:56.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 23:50:56.500: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 25 23:50:58.659: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:50:59.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-383" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":288,"completed":40,"skipped":879,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:50:59.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:51:17.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7354" for this suite. • [SLOW TEST:17.539 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":288,"completed":41,"skipped":892,"failed":0} SSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:51:17.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 25 23:51:17.464: INFO: Pod name pod-release: Found 0 pods out of 1 May 25 23:51:22.468: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:51:22.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5834" for this suite. 
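
The release step above hinges on relabeling a pod so it stops matching the ReplicationController's selector; the controller then drops its controller ownerReference (releasing the pod) and creates a replacement to restore the replica count. Roughly, with client-go (the pod name and label value are assumptions, and the signature is the v0.18-era one):

package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// releasePodFromRC relabels a pod so it no longer matches its
// ReplicationController's selector, causing the controller to orphan it.
func releasePodFromRC(ctx context.Context, cs kubernetes.Interface, ns, podName string) error {
	patch := []byte(`{"metadata":{"labels":{"name":"released"}}}`) // illustrative label value
	_, err := cs.CoreV1().Pods(ns).Patch(ctx, podName, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}
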
• [SLOW TEST:5.308 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":288,"completed":42,"skipped":897,"failed":0} [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:51:22.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 23:51:22.768: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 25 23:51:22.839: INFO: Number of nodes with available pods: 0 May 25 23:51:22.839: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. May 25 23:51:22.906: INFO: Number of nodes with available pods: 0 May 25 23:51:22.906: INFO: Node latest-worker is running more than one daemon pod May 25 23:51:23.910: INFO: Number of nodes with available pods: 0 May 25 23:51:23.910: INFO: Node latest-worker is running more than one daemon pod May 25 23:51:24.908: INFO: Number of nodes with available pods: 0 May 25 23:51:24.908: INFO: Node latest-worker is running more than one daemon pod May 25 23:51:25.932: INFO: Number of nodes with available pods: 0 May 25 23:51:25.932: INFO: Node latest-worker is running more than one daemon pod May 25 23:51:26.910: INFO: Number of nodes with available pods: 1 May 25 23:51:26.910: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 25 23:51:27.013: INFO: Number of nodes with available pods: 1 May 25 23:51:27.013: INFO: Number of running nodes: 0, number of available pods: 1 May 25 23:51:28.016: INFO: Number of nodes with available pods: 0 May 25 23:51:28.016: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 25 23:51:28.052: INFO: Number of nodes with available pods: 0 May 25 23:51:28.052: INFO: Node latest-worker is running more than one daemon pod May 25 23:51:29.055: INFO: Number of nodes with available pods: 0 May 25 23:51:29.055: INFO: Node latest-worker is running more than one daemon pod May 25 23:51:30.133: INFO: Number of nodes with available pods: 0 May 25 23:51:30.133: INFO: Node latest-worker is running more than one daemon pod May 25 23:51:31.055: INFO: Number of nodes with available pods: 0 May 25 23:51:31.055: INFO: Node latest-worker is running more than one daemon pod May 25 23:51:32.064: INFO: Number of nodes with available 
pods: 0 May 25 23:51:32.064: INFO: Node latest-worker is running more than one daemon pod May 25 23:51:33.056: INFO: Number of nodes with available pods: 0 May 25 23:51:33.056: INFO: Node latest-worker is running more than one daemon pod May 25 23:51:34.055: INFO: Number of nodes with available pods: 0 May 25 23:51:34.056: INFO: Node latest-worker is running more than one daemon pod May 25 23:51:35.139: INFO: Number of nodes with available pods: 0 May 25 23:51:35.139: INFO: Node latest-worker is running more than one daemon pod May 25 23:51:36.079: INFO: Number of nodes with available pods: 0 May 25 23:51:36.079: INFO: Node latest-worker is running more than one daemon pod May 25 23:51:37.302: INFO: Number of nodes with available pods: 0 May 25 23:51:37.302: INFO: Node latest-worker is running more than one daemon pod May 25 23:51:38.057: INFO: Number of nodes with available pods: 0 May 25 23:51:38.057: INFO: Node latest-worker is running more than one daemon pod May 25 23:51:39.068: INFO: Number of nodes with available pods: 1 May 25 23:51:39.068: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2451, will wait for the garbage collector to delete the pods May 25 23:51:39.134: INFO: Deleting DaemonSet.extensions daemon-set took: 6.68142ms May 25 23:51:39.434: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.265958ms May 25 23:51:43.237: INFO: Number of nodes with available pods: 0 May 25 23:51:43.237: INFO: Number of running nodes: 0, number of available pods: 0 May 25 23:51:43.257: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2451/daemonsets","resourceVersion":"7675703"},"items":null} May 25 23:51:43.260: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2451/pods","resourceVersion":"7675703"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:51:43.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2451" for this suite. 
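The blue/green shuffle in the run above reduces to two patches: relabel the node, then retarget the DaemonSet's nodeSelector and flip its update strategy. A sketch of both, assuming the same kubeconfig; the node and namespace names come from the log, but the label key ("color") and combined-patch shape are illustrative assumptions, not the suite's exact internals:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// Step 1: relabel the node "green"; a daemon pod placed there via
	// nodeSelector {color: blue} is no longer schedulable and goes away.
	nodePatch := []byte(`{"metadata":{"labels":{"color":"green"}}}`)
	if _, err := cs.CoreV1().Nodes().Patch(ctx, "latest-worker",
		types.StrategicMergePatchType, nodePatch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// Step 2: retarget the DaemonSet at green nodes and switch its
	// update strategy to RollingUpdate, mirroring the test's last phase.
	dsPatch := []byte(`{"spec":{"template":{"spec":{"nodeSelector":{"color":"green"}}},` +
		`"updateStrategy":{"type":"RollingUpdate"}}}`)
	if _, err := cs.AppsV1().DaemonSets("daemonsets-2451").Patch(ctx, "daemon-set",
		types.StrategicMergePatchType, dsPatch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}
```

The repeated "Node latest-worker is running more than one daemon pod" lines above are the poll loop waiting for the old pod to terminate and the new one to become available after each of these transitions.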
• [SLOW TEST:20.672 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":288,"completed":43,"skipped":897,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:51:43.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:52:01.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-489" for this suite. • [SLOW TEST:18.152 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":288,"completed":44,"skipped":943,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:52:01.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 23:52:01.914: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:52:08.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3641" for this suite. 
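Listing CustomResourceDefinitions, as the spec above does, goes through the apiextensions clientset rather than the core one, since CRDs live in the apiextensions.k8s.io API group. A minimal sketch under the same kubeconfig assumption:

```go
package main

import (
	"context"
	"fmt"

	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	// CRDs need their own typed clientset, separate from kubernetes.Clientset.
	crdClient, err := apiextensionsclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	crds, err := crdClient.ApiextensionsV1().CustomResourceDefinitions().
		List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, crd := range crds.Items {
		fmt.Println(crd.Name, crd.Spec.Group)
	}
}
```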
• [SLOW TEST:6.776 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":288,"completed":45,"skipped":962,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:52:08.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 23:52:08.756: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 23:52:10.766: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726047528, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726047528, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726047528, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726047528, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 23:52:12.770: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726047528, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726047528, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726047528, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726047528, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 23:52:15.821: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:52:15.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7958" for this suite. STEP: Destroying namespace "webhook-7958-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.819 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":288,"completed":46,"skipped":981,"failed":0} [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:52:16.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:52:16.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5233" for this suite. STEP: Destroying namespace "nspatchtest-5d4010bb-8045-43a1-a1c4-16b748937fe3-328" for this suite. 
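The namespace patch exercised above is a one-line strategic merge followed by a read-back. A sketch mirroring the "patching the Namespace" and "ensuring it has the label" steps; the namespace name and label key/value are illustrative, since the suite generates its own (e.g. the nspatchtest-… name being torn down here):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Patch a label onto the namespace; the returned object already
	// reflects the merge, so the assertion can run on it directly.
	patch := []byte(`{"metadata":{"labels":{"testLabel":"testValue"}}}`)
	ns, err := cs.CoreV1().Namespaces().Patch(context.TODO(),
		"nspatchtest-example", types.StrategicMergePatchType, patch,
		metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("label after patch:", ns.Labels["testLabel"])
}
```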
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":288,"completed":47,"skipped":981,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:52:16.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1512.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1512.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1512.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1512.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 25 23:52:22.828: INFO: DNS probes using dns-test-2a24d110-e4cf-4699-8405-71480c5ab7b8 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1512.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1512.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1512.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1512.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 25 23:52:31.176: INFO: File wheezy_udp@dns-test-service-3.dns-1512.svc.cluster.local from pod dns-1512/dns-test-77605132-1830-4dca-b9d0-a05931167c24 contains 'foo.example.com. ' instead of 'bar.example.com.' May 25 23:52:31.180: INFO: File jessie_udp@dns-test-service-3.dns-1512.svc.cluster.local from pod dns-1512/dns-test-77605132-1830-4dca-b9d0-a05931167c24 contains 'foo.example.com. ' instead of 'bar.example.com.' May 25 23:52:31.180: INFO: Lookups using dns-1512/dns-test-77605132-1830-4dca-b9d0-a05931167c24 failed for: [wheezy_udp@dns-test-service-3.dns-1512.svc.cluster.local jessie_udp@dns-test-service-3.dns-1512.svc.cluster.local] May 25 23:52:36.185: INFO: File wheezy_udp@dns-test-service-3.dns-1512.svc.cluster.local from pod dns-1512/dns-test-77605132-1830-4dca-b9d0-a05931167c24 contains 'foo.example.com. ' instead of 'bar.example.com.' May 25 23:52:36.189: INFO: File jessie_udp@dns-test-service-3.dns-1512.svc.cluster.local from pod dns-1512/dns-test-77605132-1830-4dca-b9d0-a05931167c24 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 25 23:52:36.189: INFO: Lookups using dns-1512/dns-test-77605132-1830-4dca-b9d0-a05931167c24 failed for: [wheezy_udp@dns-test-service-3.dns-1512.svc.cluster.local jessie_udp@dns-test-service-3.dns-1512.svc.cluster.local] May 25 23:52:41.185: INFO: File wheezy_udp@dns-test-service-3.dns-1512.svc.cluster.local from pod dns-1512/dns-test-77605132-1830-4dca-b9d0-a05931167c24 contains 'foo.example.com. ' instead of 'bar.example.com.' May 25 23:52:41.189: INFO: File jessie_udp@dns-test-service-3.dns-1512.svc.cluster.local from pod dns-1512/dns-test-77605132-1830-4dca-b9d0-a05931167c24 contains 'foo.example.com. ' instead of 'bar.example.com.' May 25 23:52:41.189: INFO: Lookups using dns-1512/dns-test-77605132-1830-4dca-b9d0-a05931167c24 failed for: [wheezy_udp@dns-test-service-3.dns-1512.svc.cluster.local jessie_udp@dns-test-service-3.dns-1512.svc.cluster.local] May 25 23:52:46.185: INFO: File wheezy_udp@dns-test-service-3.dns-1512.svc.cluster.local from pod dns-1512/dns-test-77605132-1830-4dca-b9d0-a05931167c24 contains 'foo.example.com. ' instead of 'bar.example.com.' May 25 23:52:46.189: INFO: File jessie_udp@dns-test-service-3.dns-1512.svc.cluster.local from pod dns-1512/dns-test-77605132-1830-4dca-b9d0-a05931167c24 contains 'foo.example.com. ' instead of 'bar.example.com.' May 25 23:52:46.189: INFO: Lookups using dns-1512/dns-test-77605132-1830-4dca-b9d0-a05931167c24 failed for: [wheezy_udp@dns-test-service-3.dns-1512.svc.cluster.local jessie_udp@dns-test-service-3.dns-1512.svc.cluster.local] May 25 23:52:51.187: INFO: File wheezy_udp@dns-test-service-3.dns-1512.svc.cluster.local from pod dns-1512/dns-test-77605132-1830-4dca-b9d0-a05931167c24 contains 'foo.example.com. ' instead of 'bar.example.com.' May 25 23:52:51.191: INFO: File jessie_udp@dns-test-service-3.dns-1512.svc.cluster.local from pod dns-1512/dns-test-77605132-1830-4dca-b9d0-a05931167c24 contains 'foo.example.com. ' instead of 'bar.example.com.' May 25 23:52:51.191: INFO: Lookups using dns-1512/dns-test-77605132-1830-4dca-b9d0-a05931167c24 failed for: [wheezy_udp@dns-test-service-3.dns-1512.svc.cluster.local jessie_udp@dns-test-service-3.dns-1512.svc.cluster.local] May 25 23:52:56.189: INFO: DNS probes using dns-test-77605132-1830-4dca-b9d0-a05931167c24 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1512.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-1512.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1512.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-1512.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 25 23:53:03.142: INFO: DNS probes using dns-test-9705a11b-60c4-4546-874c-8530861ff6bf succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:53:03.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1512" for this suite. 
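The three probe rounds above correspond to three states of one Service: an ExternalName pointing at foo.example.com, the same Service repointed at bar.example.com, then converted to type ClusterIP. The repeated "contains 'foo.example.com.' instead of 'bar.example.com.'" failures are expected while cluster DNS still serves the stale CNAME target; the probes succeed once the update propagates. A sketch of the first two transitions, assuming the same kubeconfig; the namespace and Service name mirror the log:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	svcs := cs.CoreV1().Services("dns-1512")

	// An ExternalName Service is just a CNAME published by cluster DNS;
	// it needs no selector or ports.
	svc, err := svcs.Create(ctx, &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-3"},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "foo.example.com",
		},
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Repoint the CNAME target; dig answers flip from foo to bar once
	// the DNS server observes the updated Service.
	svc.Spec.ExternalName = "bar.example.com"
	if _, err := svcs.Update(ctx, svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```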
• [SLOW TEST:47.010 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":288,"completed":48,"skipped":1011,"failed":0} [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:53:03.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token May 25 23:53:04.280: INFO: created pod pod-service-account-defaultsa May 25 23:53:04.280: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 25 23:53:04.317: INFO: created pod pod-service-account-mountsa May 25 23:53:04.317: INFO: pod pod-service-account-mountsa service account token volume mount: true May 25 23:53:04.475: INFO: created pod pod-service-account-nomountsa May 25 23:53:04.475: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 25 23:53:04.520: INFO: created pod pod-service-account-defaultsa-mountspec May 25 23:53:04.520: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 25 23:53:04.667: INFO: created pod pod-service-account-mountsa-mountspec May 25 23:53:04.667: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 25 23:53:04.715: INFO: created pod pod-service-account-nomountsa-mountspec May 25 23:53:04.715: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 25 23:53:04.754: INFO: created pod pod-service-account-defaultsa-nomountspec May 25 23:53:04.754: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 25 23:53:04.810: INFO: created pod pod-service-account-mountsa-nomountspec May 25 23:53:04.811: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 25 23:53:04.839: INFO: created pod pod-service-account-nomountsa-nomountspec May 25 23:53:04.839: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:53:04.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3645" for this suite. 
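The nine pods created above enumerate the automount matrix: the pod-level spec.automountServiceAccountToken overrides the ServiceAccount-level field, and the default is to mount the token, which is why nomountsa-mountspec reports "mount: true" while mountsa-nomountspec reports "mount: false". A sketch of the pod-level opt-out; the pod name, namespace, and image are illustrative assumptions:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Pod-level opt-out: no service-account token volume is mounted,
	// regardless of what the ServiceAccount itself specifies.
	no := false
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-nomountspec"},
		Spec: corev1.PodSpec{
			AutomountServiceAccountToken: &no,
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "k8s.gcr.io/pause:3.2", // illustrative image
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(
		context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```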
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":288,"completed":49,"skipped":1011,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:53:04.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 23:53:05.128: INFO: Creating deployment "webserver-deployment" May 25 23:53:05.159: INFO: Waiting for observed generation 1 May 25 23:53:07.357: INFO: Waiting for all required pods to come up May 25 23:53:07.396: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running May 25 23:53:27.743: INFO: Waiting for deployment "webserver-deployment" to complete May 25 23:53:27.808: INFO: Updating deployment "webserver-deployment" with a non-existent image May 25 23:53:27.814: INFO: Updating deployment webserver-deployment May 25 23:53:27.814: INFO: Waiting for observed generation 2 May 25 23:53:31.190: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 25 23:53:32.116: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 25 23:53:32.652: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 25 23:53:33.589: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 25 23:53:33.589: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 25 23:53:33.612: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 25 23:53:33.774: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas May 25 23:53:33.774: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 May 25 23:53:33.839: INFO: Updating deployment webserver-deployment May 25 23:53:33.839: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas May 25 23:53:34.229: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 25 23:53:36.980: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 25 23:53:37.338: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-6734 /apis/apps/v1/namespaces/deployment-6734/deployments/webserver-deployment 34ca43a8-f1ff-4e0b-ba7f-031354639fda 7676756 3 2020-05-25 23:53:05 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-25 23:53:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-25 23:53:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0028196b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-25 23:53:34 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-05-25 23:53:34 +0000 UTC,LastTransitionTime:2020-05-25 23:53:05 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 25 23:53:37.494: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4 deployment-6734 /apis/apps/v1/namespaces/deployment-6734/replicasets/webserver-deployment-6676bcd6d4 f9987ed9-f81d-40f7-bfa0-e62f98d9f7fa 7676755 3 2020-05-25 23:53:27 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] 
[{apps/v1 Deployment webserver-deployment 34ca43a8-f1ff-4e0b-ba7f-031354639fda 0xc002819b57 0xc002819b58}] [] [{kube-controller-manager Update apps/v1 2020-05-25 23:53:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"34ca43a8-f1ff-4e0b-ba7f-031354639fda\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002819bd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 25 23:53:37.494: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 25 23:53:37.494: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797 deployment-6734 /apis/apps/v1/namespaces/deployment-6734/replicasets/webserver-deployment-84855cf797 48445b28-6a24-45d9-80b1-1419a109cf0d 7676750 3 2020-05-25 23:53:05 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 34ca43a8-f1ff-4e0b-ba7f-031354639fda 0xc002819c37 0xc002819c38}] [] [{kube-controller-manager Update apps/v1 2020-05-25 23:53:34 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"34ca43a8-f1ff-4e0b-ba7f-031354639fda\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002819ca8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 25 23:53:37.499: INFO: Pod "webserver-deployment-6676bcd6d4-2hbmk" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-2hbmk webserver-deployment-6676bcd6d4- deployment-6734 /api/v1/namespaces/deployment-6734/pods/webserver-deployment-6676bcd6d4-2hbmk 506d953a-70e1-4f8a-988d-329466d122d4 7676765 0 2020-05-25 23:53:34 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f9987ed9-f81d-40f7-bfa0-e62f98d9f7fa 0xc0029f81f7 0xc0029f81f8}] [] [{kube-controller-manager Update v1 2020-05-25 23:53:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f9987ed9-f81d-40f7-bfa0-e62f98d9f7fa\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 23:53:34 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pclgp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pclgp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pclgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-25 23:53:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 23:53:37.552: INFO: Pod "webserver-deployment-6676bcd6d4-4d2w5" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-4d2w5 webserver-deployment-6676bcd6d4- deployment-6734 /api/v1/namespaces/deployment-6734/pods/webserver-deployment-6676bcd6d4-4d2w5 af77ce17-6441-4c4e-a7d7-919d5421a3c9 7676697 0 2020-05-25 23:53:34 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f9987ed9-f81d-40f7-bfa0-e62f98d9f7fa 0xc0029f83b7 0xc0029f83b8}] [] [{kube-controller-manager Update v1 2020-05-25 23:53:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f9987ed9-f81d-40f7-bfa0-e62f98d9f7fa\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pclgp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pclgp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pclgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 23:53:37.553: INFO: Pod "webserver-deployment-6676bcd6d4-9bkgg" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-9bkgg webserver-deployment-6676bcd6d4- deployment-6734 /api/v1/namespaces/deployment-6734/pods/webserver-deployment-6676bcd6d4-9bkgg f5507968-3849-4152-b650-14b258c79392 7676636 0 2020-05-25 23:53:28 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f9987ed9-f81d-40f7-bfa0-e62f98d9f7fa 0xc0029f84f7 0xc0029f84f8}] [] [{kube-controller-manager Update v1 2020-05-25 23:53:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f9987ed9-f81d-40f7-bfa0-e62f98d9f7fa\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 23:53:30 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pclgp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pclgp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pclgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:28 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-25 23:53:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 23:53:37.553: INFO: Pod "webserver-deployment-6676bcd6d4-ddc4c" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-ddc4c webserver-deployment-6676bcd6d4- deployment-6734 /api/v1/namespaces/deployment-6734/pods/webserver-deployment-6676bcd6d4-ddc4c dd23def8-d2cf-4860-a111-c40afc32c69a 7676734 0 2020-05-25 23:53:34 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f9987ed9-f81d-40f7-bfa0-e62f98d9f7fa 0xc0029f86a7 0xc0029f86a8}] [] [{kube-controller-manager Update v1 2020-05-25 23:53:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f9987ed9-f81d-40f7-bfa0-e62f98d9f7fa\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pclgp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pclgp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pclgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 23:53:37.553: INFO: Pod "webserver-deployment-6676bcd6d4-dfrtq" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-dfrtq webserver-deployment-6676bcd6d4- deployment-6734 /api/v1/namespaces/deployment-6734/pods/webserver-deployment-6676bcd6d4-dfrtq 832f2a46-1dab-4a1d-89d5-65402691cf40 7676785 0 2020-05-25 23:53:28 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f9987ed9-f81d-40f7-bfa0-e62f98d9f7fa 0xc0029f8af7 0xc0029f8af8}] [] [{kube-controller-manager Update v1 2020-05-25 23:53:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f9987ed9-f81d-40f7-bfa0-e62f98d9f7fa\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 23:53:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.86\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pclgp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pclgp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pclgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-25 23:53:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.86,StartTime:2020-05-25 23:53:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.86,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 23:53:37.553: INFO: Pod "webserver-deployment-6676bcd6d4-fxh7z" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-fxh7z webserver-deployment-6676bcd6d4- deployment-6734 /api/v1/namespaces/deployment-6734/pods/webserver-deployment-6676bcd6d4-fxh7z d226f2ec-5dc8-40b1-a7e7-1dfcbae7f071 7676723 0 2020-05-25 23:53:34 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f9987ed9-f81d-40f7-bfa0-e62f98d9f7fa 0xc0029f9a77 0xc0029f9a78}] [] [{kube-controller-manager Update v1 2020-05-25 23:53:34 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f9987ed9-f81d-40f7-bfa0-e62f98d9f7fa\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pclgp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pclgp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pclgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 25 23:53:37.554: INFO: Pod "webserver-deployment-6676bcd6d4-gw7wd" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-gw7wd webserver-deployment-6676bcd6d4- deployment-6734 /api/v1/namespaces/deployment-6734/pods/webserver-deployment-6676bcd6d4-gw7wd 655ab6b7-f0f5-41fb-9f76-1e240a35d43c 7676768 0 2020-05-25 23:53:27 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f9987ed9-f81d-40f7-bfa0-e62f98d9f7fa 0xc0029f9f27 0xc0029f9f28}] [] [{kube-controller-manager Update v1 2020-05-25 23:53:27 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f9987ed9-f81d-40f7-bfa0-e62f98d9f7fa\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 23:53:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.78\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pclgp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pclgp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pclgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.78,StartTime:2020-05-25 23:53:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.78,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 25 23:53:37.554: INFO: Pod "webserver-deployment-6676bcd6d4-h6p6j" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-h6p6j webserver-deployment-6676bcd6d4- deployment-6734 /api/v1/namespaces/deployment-6734/pods/webserver-deployment-6676bcd6d4-h6p6j 01f88746-1aac-4093-8b3f-6b1864ff78f8 7676635 0 2020-05-25 23:53:28 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f9987ed9-f81d-40f7-bfa0-e62f98d9f7fa 0xc002ad6197 0xc002ad6198}] [] [{kube-controller-manager Update v1 2020-05-25 23:53:28 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f9987ed9-f81d-40f7-bfa0-e62f98d9f7fa\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 23:53:30 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pclgp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pclgp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pclgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:
[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-25 23:53:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 23:53:37.554: INFO: Pod "webserver-deployment-6676bcd6d4-hr6fq" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-hr6fq webserver-deployment-6676bcd6d4- deployment-6734 /api/v1/namespaces/deployment-6734/pods/webserver-deployment-6676bcd6d4-hr6fq 1584cfaa-b802-4410-95c6-2cc3604ee2ff 7676724 0 2020-05-25 23:53:34 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f9987ed9-f81d-40f7-bfa0-e62f98d9f7fa 0xc002ad6347 0xc002ad6348}] [] [{kube-controller-manager Update v1 2020-05-25 23:53:34 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f9987ed9-f81d-40f7-bfa0-e62f98d9f7fa\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pclgp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pclgp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pclgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 25 23:53:37.554: INFO: Pod "webserver-deployment-6676bcd6d4-nmjnx" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-nmjnx webserver-deployment-6676bcd6d4- deployment-6734 /api/v1/namespaces/deployment-6734/pods/webserver-deployment-6676bcd6d4-nmjnx 9f6e1980-6df2-453a-923c-dc1c46c786f2 7676757 0 2020-05-25 23:53:34 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f9987ed9-f81d-40f7-bfa0-e62f98d9f7fa 0xc002ad6487 0xc002ad6488}] [] [{kube-controller-manager Update v1 2020-05-25 23:53:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f9987ed9-f81d-40f7-bfa0-e62f98d9f7fa\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 23:53:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pclgp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pclgp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pclgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-25 23:53:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 25 23:53:37.554: INFO: Pod "webserver-deployment-6676bcd6d4-r9442" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-r9442 webserver-deployment-6676bcd6d4- deployment-6734 /api/v1/namespaces/deployment-6734/pods/webserver-deployment-6676bcd6d4-r9442 4776a4fc-5e34-4529-90a8-2152d9b0aad3 7676729 0 2020-05-25 23:53:34 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f9987ed9-f81d-40f7-bfa0-e62f98d9f7fa 0xc002ad6637 0xc002ad6638}] [] [{kube-controller-manager Update v1 2020-05-25 23:53:34 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f9987ed9-f81d-40f7-bfa0-e62f98d9f7fa\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pclgp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pclgp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pclgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 25 23:53:37.554: INFO: Pod "webserver-deployment-6676bcd6d4-tjv9d" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-tjv9d webserver-deployment-6676bcd6d4- deployment-6734 /api/v1/namespaces/deployment-6734/pods/webserver-deployment-6676bcd6d4-tjv9d 6c6b6e76-00f1-42b8-98f3-004ff7712c75 7676725 0 2020-05-25 23:53:34 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f9987ed9-f81d-40f7-bfa0-e62f98d9f7fa 0xc002ad6847 0xc002ad6848}] [] [{kube-controller-manager Update v1 2020-05-25 23:53:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f9987ed9-f81d-40f7-bfa0-e62f98d9f7fa\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pclgp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pclgp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pclgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 25 23:53:37.554: INFO: Pod "webserver-deployment-6676bcd6d4-z6w4f" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-z6w4f webserver-deployment-6676bcd6d4- deployment-6734 /api/v1/namespaces/deployment-6734/pods/webserver-deployment-6676bcd6d4-z6w4f f61b1cdb-a3db-486b-a1f3-6d7905252c8a 7676655 0 2020-05-25 23:53:29 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 f9987ed9-f81d-40f7-bfa0-e62f98d9f7fa 0xc002ad6e07 0xc002ad6e08}] [] [{kube-controller-manager Update v1 2020-05-25 23:53:29 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f9987ed9-f81d-40f7-bfa0-e62f98d9f7fa\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 23:53:33 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pclgp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pclgp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pclgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:30 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-25 23:53:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 23:53:37.555: INFO: Pod "webserver-deployment-84855cf797-28n29" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-28n29 webserver-deployment-84855cf797- deployment-6734 /api/v1/namespaces/deployment-6734/pods/webserver-deployment-84855cf797-28n29 eab129e2-ce46-4ee5-9f33-02c3beec45f2 7676799 0 2020-05-25 23:53:34 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 48445b28-6a24-45d9-80b1-1419a109cf0d 0xc002ad74a7 0xc002ad74a8}] [] [{kube-controller-manager Update v1 2020-05-25 23:53:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48445b28-6a24-45d9-80b1-1419a109cf0d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 23:53:36 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pclgp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pclgp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pclgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-25 23:53:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 23:53:37.555: INFO: Pod "webserver-deployment-84855cf797-4st2w" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-4st2w webserver-deployment-84855cf797- deployment-6734 /api/v1/namespaces/deployment-6734/pods/webserver-deployment-84855cf797-4st2w b576e6d6-188e-45d2-a05a-3a5f641b86ab 7676567 0 2020-05-25 23:53:05 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 48445b28-6a24-45d9-80b1-1419a109cf0d 0xc002ad7637 0xc002ad7638}] [] [{kube-controller-manager Update v1 2020-05-25 23:53:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48445b28-6a24-45d9-80b1-1419a109cf0d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 23:53:26 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.83\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pclgp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pclgp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pclgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 
23:53:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.83,StartTime:2020-05-25 23:53:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 23:53:24 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d124ad3a604e776e89aa0e243db549356efaed35930b68f9a83e7ef13cef992a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.83,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 23:53:37.555: INFO: Pod "webserver-deployment-84855cf797-5fl5j" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-5fl5j webserver-deployment-84855cf797- deployment-6734 /api/v1/namespaces/deployment-6734/pods/webserver-deployment-84855cf797-5fl5j c85273c5-cb48-49e0-87ce-5cbae30b73e9 7676562 0 2020-05-25 23:53:05 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 48445b28-6a24-45d9-80b1-1419a109cf0d 0xc002ad77e7 0xc002ad77e8}] [] [{kube-controller-manager Update v1 2020-05-25 23:53:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48445b28-6a24-45d9-80b1-1419a109cf0d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 23:53:25 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.81\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pclgp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pclgp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pclgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 
23:53:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.81,StartTime:2020-05-25 23:53:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 23:53:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://197d10ec2f36b7724311cee453dbd8f16091d07678116dda205219065cdbd3f1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.81,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 23:53:37.556: INFO: Pod "webserver-deployment-84855cf797-6lc79" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-6lc79 webserver-deployment-84855cf797- deployment-6734 /api/v1/namespaces/deployment-6734/pods/webserver-deployment-84855cf797-6lc79 1f5a0b06-d485-4a77-afb9-dc5d62e1d7b3 7676708 0 2020-05-25 23:53:34 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 48445b28-6a24-45d9-80b1-1419a109cf0d 0xc002ad7997 0xc002ad7998}] [] [{kube-controller-manager Update v1 2020-05-25 23:53:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48445b28-6a24-45d9-80b1-1419a109cf0d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pclgp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pclgp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pclgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&Se
curityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 23:53:37.556: INFO: Pod "webserver-deployment-84855cf797-7mxmb" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-7mxmb webserver-deployment-84855cf797- deployment-6734 /api/v1/namespaces/deployment-6734/pods/webserver-deployment-84855cf797-7mxmb 68068303-a802-4db3-ac6e-fddcf9d93c9e 7676774 0 2020-05-25 23:53:34 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 48445b28-6a24-45d9-80b1-1419a109cf0d 0xc002ad7ad7 0xc002ad7ad8}] [] [{kube-controller-manager Update v1 2020-05-25 23:53:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48445b28-6a24-45d9-80b1-1419a109cf0d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 23:53:34 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pclgp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pclgp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pclgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-25 23:53:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 23:53:37.556: INFO: Pod "webserver-deployment-84855cf797-8lnlt" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-8lnlt webserver-deployment-84855cf797- deployment-6734 /api/v1/namespaces/deployment-6734/pods/webserver-deployment-84855cf797-8lnlt 520d097c-c190-43ee-ad5a-5d3dd8ba475f 7676557 0 2020-05-25 23:53:05 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 48445b28-6a24-45d9-80b1-1419a109cf0d 0xc002ad7c67 0xc002ad7c68}] [] [{kube-controller-manager Update v1 2020-05-25 23:53:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48445b28-6a24-45d9-80b1-1419a109cf0d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 23:53:25 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.84\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pclgp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pclgp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pclgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 
23:53:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.84,StartTime:2020-05-25 23:53:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 23:53:24 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://645565cda66a68f3f591742a1528fe4b1607a1cda1877c9df5b9403355c19cf5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.84,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 23:53:37.556: INFO: Pod "webserver-deployment-84855cf797-b88cm" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-b88cm webserver-deployment-84855cf797- deployment-6734 /api/v1/namespaces/deployment-6734/pods/webserver-deployment-84855cf797-b88cm a1db8065-fcf3-411c-83c6-d33fb6cff46c 7676552 0 2020-05-25 23:53:05 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 48445b28-6a24-45d9-80b1-1419a109cf0d 0xc002ad7e37 0xc002ad7e38}] [] [{kube-controller-manager Update v1 2020-05-25 23:53:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48445b28-6a24-45d9-80b1-1419a109cf0d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 23:53:24 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.82\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pclgp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pclgp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pclgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 
23:53:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.82,StartTime:2020-05-25 23:53:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 23:53:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d1b3d4690847086b05818424830084ee21ea65e62e88befdc01ed3c58fd9a3b5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.82,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 23:53:37.557: INFO: Pod "webserver-deployment-84855cf797-bqwxl" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-bqwxl webserver-deployment-84855cf797- deployment-6734 /api/v1/namespaces/deployment-6734/pods/webserver-deployment-84855cf797-bqwxl 1e444d6d-6d86-484c-ad35-6d9a009ce175 7676733 0 2020-05-25 23:53:34 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 48445b28-6a24-45d9-80b1-1419a109cf0d 0xc002ad7fe7 0xc002ad7fe8}] [] [{kube-controller-manager Update v1 2020-05-25 23:53:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48445b28-6a24-45d9-80b1-1419a109cf0d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pclgp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pclgp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pclgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&Se
curityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 23:53:37.557: INFO: Pod "webserver-deployment-84855cf797-gsqzs" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-gsqzs webserver-deployment-84855cf797- deployment-6734 /api/v1/namespaces/deployment-6734/pods/webserver-deployment-84855cf797-gsqzs 875c8ed0-a7d7-4ad4-9979-e7da1887791c 7676780 0 2020-05-25 23:53:34 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 48445b28-6a24-45d9-80b1-1419a109cf0d 0xc002f1c607 0xc002f1c608}] [] [{kube-controller-manager Update v1 2020-05-25 23:53:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48445b28-6a24-45d9-80b1-1419a109cf0d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 23:53:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pclgp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pclgp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pclgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-25 23:53:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 23:53:37.557: INFO: Pod "webserver-deployment-84855cf797-jx292" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-jx292 webserver-deployment-84855cf797- deployment-6734 /api/v1/namespaces/deployment-6734/pods/webserver-deployment-84855cf797-jx292 54953383-c2a0-4735-8b12-a125e6091fb4 7676798 0 2020-05-25 23:53:34 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 48445b28-6a24-45d9-80b1-1419a109cf0d 0xc002f1cd37 0xc002f1cd38}] [] [{kube-controller-manager Update v1 2020-05-25 23:53:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48445b28-6a24-45d9-80b1-1419a109cf0d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 23:53:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pclgp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pclgp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pclgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-25 23:53:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 23:53:37.558: INFO: Pod "webserver-deployment-84855cf797-ldbl5" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-ldbl5 webserver-deployment-84855cf797- deployment-6734 /api/v1/namespaces/deployment-6734/pods/webserver-deployment-84855cf797-ldbl5 18296a19-4605-4abd-bcf7-a198eb684eae 7676801 0 2020-05-25 23:53:34 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 48445b28-6a24-45d9-80b1-1419a109cf0d 0xc002f1d807 0xc002f1d808}] [] [{kube-controller-manager Update v1 2020-05-25 23:53:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48445b28-6a24-45d9-80b1-1419a109cf0d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 23:53:36 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pclgp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pclgp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pclgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-25 23:53:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 23:53:37.558: INFO: Pod "webserver-deployment-84855cf797-mf24r" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-mf24r webserver-deployment-84855cf797- deployment-6734 /api/v1/namespaces/deployment-6734/pods/webserver-deployment-84855cf797-mf24r d55e76af-ad25-41a6-9af9-10113209a23f 7676508 0 2020-05-25 23:53:05 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 48445b28-6a24-45d9-80b1-1419a109cf0d 0xc002f1ddf7 0xc002f1ddf8}] [] [{kube-controller-manager Update v1 2020-05-25 23:53:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48445b28-6a24-45d9-80b1-1419a109cf0d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 23:53:22 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.75\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pclgp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pclgp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pclgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 
23:53:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.75,StartTime:2020-05-25 23:53:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 23:53:20 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://17870ec9f508e3c9c7361591cd80ea13fa90ebb1d3572dd07cd2a49fff1aabd3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.75,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 23:53:37.558: INFO: Pod "webserver-deployment-84855cf797-mjndn" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-mjndn webserver-deployment-84855cf797- deployment-6734 /api/v1/namespaces/deployment-6734/pods/webserver-deployment-84855cf797-mjndn 58e65402-d52c-4bbb-991a-f3fd08918151 7676807 0 2020-05-25 23:53:34 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 48445b28-6a24-45d9-80b1-1419a109cf0d 0xc003452357 0xc003452358}] [] [{kube-controller-manager Update v1 2020-05-25 23:53:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48445b28-6a24-45d9-80b1-1419a109cf0d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 23:53:37 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pclgp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pclgp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pclgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-25 23:53:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 23:53:37.559: INFO: Pod "webserver-deployment-84855cf797-rgnz4" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-rgnz4 webserver-deployment-84855cf797- deployment-6734 /api/v1/namespaces/deployment-6734/pods/webserver-deployment-84855cf797-rgnz4 5911c057-6fc5-4988-98ab-5ca62799dc6f 7676494 0 2020-05-25 23:53:05 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 48445b28-6a24-45d9-80b1-1419a109cf0d 0xc0034528c7 0xc0034528c8}] [] [{kube-controller-manager Update v1 2020-05-25 23:53:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48445b28-6a24-45d9-80b1-1419a109cf0d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 23:53:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.74\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pclgp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pclgp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pclgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 
23:53:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.74,StartTime:2020-05-25 23:53:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 23:53:20 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f8698cbeebc6db32c5fa78f0d08c422f0b8f5747ba05c1743f06ad27b767f150,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.74,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 23:53:37.559: INFO: Pod "webserver-deployment-84855cf797-rqsrn" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-rqsrn webserver-deployment-84855cf797- deployment-6734 /api/v1/namespaces/deployment-6734/pods/webserver-deployment-84855cf797-rqsrn f86ac74b-cef8-45a0-b53c-6a2ae7e59861 7676794 0 2020-05-25 23:53:34 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 48445b28-6a24-45d9-80b1-1419a109cf0d 0xc003453297 0xc003453298}] [] [{kube-controller-manager Update v1 2020-05-25 23:53:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48445b28-6a24-45d9-80b1-1419a109cf0d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 23:53:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pclgp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pclgp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pclgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-25 23:53:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 23:53:37.559: INFO: Pod "webserver-deployment-84855cf797-txzs9" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-txzs9 webserver-deployment-84855cf797- deployment-6734 /api/v1/namespaces/deployment-6734/pods/webserver-deployment-84855cf797-txzs9 c645e011-5595-4540-9ddc-e79bee30bde7 7676502 0 2020-05-25 23:53:05 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 48445b28-6a24-45d9-80b1-1419a109cf0d 0xc003453777 0xc003453778}] [] [{kube-controller-manager Update v1 2020-05-25 23:53:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48445b28-6a24-45d9-80b1-1419a109cf0d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 23:53:21 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.73\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pclgp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pclgp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pclgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 
23:53:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.73,StartTime:2020-05-25 23:53:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 23:53:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7c3dc20d4130be166e1b95596fb97410dd576775fd8b1fefbbb48d0d97db6258,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.73,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 23:53:37.560: INFO: Pod "webserver-deployment-84855cf797-vtvrt" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-vtvrt webserver-deployment-84855cf797- deployment-6734 /api/v1/namespaces/deployment-6734/pods/webserver-deployment-84855cf797-vtvrt c0c7f564-b913-4aa5-b778-f3abb051d42d 7676769 0 2020-05-25 23:53:34 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 48445b28-6a24-45d9-80b1-1419a109cf0d 0xc003453d47 0xc003453d48}] [] [{kube-controller-manager Update v1 2020-05-25 23:53:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48445b28-6a24-45d9-80b1-1419a109cf0d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 23:53:34 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pclgp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pclgp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pclgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-25 23:53:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 23:53:37.560: INFO: Pod "webserver-deployment-84855cf797-w9fmx" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-w9fmx webserver-deployment-84855cf797- deployment-6734 /api/v1/namespaces/deployment-6734/pods/webserver-deployment-84855cf797-w9fmx 788e08d4-3c58-475e-b417-6dee94a5790a 7676732 0 2020-05-25 23:53:34 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 48445b28-6a24-45d9-80b1-1419a109cf0d 0xc0022bc187 0xc0022bc188}] [] [{kube-controller-manager Update v1 2020-05-25 23:53:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48445b28-6a24-45d9-80b1-1419a109cf0d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pclgp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pclgp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pclgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFi
lesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 23:53:37.560: INFO: Pod "webserver-deployment-84855cf797-wchqf" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-wchqf webserver-deployment-84855cf797- deployment-6734 /api/v1/namespaces/deployment-6734/pods/webserver-deployment-84855cf797-wchqf 9ddb2e53-84f7-4eba-a4b6-65915b51e584 7676731 0 2020-05-25 23:53:34 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 48445b28-6a24-45d9-80b1-1419a109cf0d 0xc0022bc807 0xc0022bc808}] [] [{kube-controller-manager Update v1 2020-05-25 23:53:34 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48445b28-6a24-45d9-80b1-1419a109cf0d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pclgp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pclgp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pclgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:34 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 23:53:37.560: INFO: Pod "webserver-deployment-84855cf797-xkhlh" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-xkhlh webserver-deployment-84855cf797- deployment-6734 /api/v1/namespaces/deployment-6734/pods/webserver-deployment-84855cf797-xkhlh f40ba579-3c4e-4101-8a67-098f80983750 7676575 0 2020-05-25 23:53:05 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 48445b28-6a24-45d9-80b1-1419a109cf0d 0xc0022bd1d7 0xc0022bd1d8}] [] [{kube-controller-manager Update v1 2020-05-25 23:53:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48445b28-6a24-45d9-80b1-1419a109cf0d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-25 23:53:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.85\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pclgp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pclgp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pclgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupP
robe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 23:53:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.85,StartTime:2020-05-25 23:53:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 23:53:24 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://75fce70f0b5dbe000b2d33099e34a6171438380ca5549903e7d6351d193c5010,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.85,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:53:37.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6734" for this suite. 
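The pod dumps above show what separates the pods the framework logs as "available" from the rest: available pods carry PodCondition{Type:Ready,Status:True}, while pods still in ContainerCreating report Ready=False with Reason:ContainersNotReady. (The FieldsV1 blocks in each dump are server-side-apply managedFields, recording whether kube-controller-manager or the kubelet last wrote each field.) A minimal sketch of that readiness check, using the real k8s.io/api/core/v1 types but not the framework's own helper, which additionally honors a deployment's minReadySeconds:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the PodReady condition is True. This mirrors
// the distinction in the dumps above: "available" pods carry
// PodCondition{Type:Ready,Status:True}, while the Pending ones show
// Status:False with Reason:ContainersNotReady.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Phase: corev1.PodRunning,
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionTrue},
			},
		},
	}
	fmt.Println(isPodReady(pod)) // prints: true
}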
• [SLOW TEST:34.012 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":288,"completed":50,"skipped":1019,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:53:38.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-28n8 STEP: Creating a pod to test atomic-volume-subpath May 25 23:53:41.687: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-28n8" in namespace "subpath-8819" to be "Succeeded or Failed" May 25 23:53:42.278: INFO: Pod "pod-subpath-test-downwardapi-28n8": Phase="Pending", Reason="", readiness=false. Elapsed: 590.401579ms May 25 23:53:44.524: INFO: Pod "pod-subpath-test-downwardapi-28n8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.83618355s May 25 23:53:47.095: INFO: Pod "pod-subpath-test-downwardapi-28n8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.407412958s May 25 23:53:49.133: INFO: Pod "pod-subpath-test-downwardapi-28n8": Phase="Pending", Reason="", readiness=false. Elapsed: 7.445635245s May 25 23:53:51.375: INFO: Pod "pod-subpath-test-downwardapi-28n8": Phase="Pending", Reason="", readiness=false. Elapsed: 9.687872087s May 25 23:53:53.431: INFO: Pod "pod-subpath-test-downwardapi-28n8": Phase="Pending", Reason="", readiness=false. Elapsed: 11.743601281s May 25 23:53:55.493: INFO: Pod "pod-subpath-test-downwardapi-28n8": Phase="Pending", Reason="", readiness=false. Elapsed: 13.805875754s May 25 23:53:57.534: INFO: Pod "pod-subpath-test-downwardapi-28n8": Phase="Running", Reason="", readiness=true. Elapsed: 15.846332319s May 25 23:53:59.655: INFO: Pod "pod-subpath-test-downwardapi-28n8": Phase="Running", Reason="", readiness=true. Elapsed: 17.967681505s May 25 23:54:01.693: INFO: Pod "pod-subpath-test-downwardapi-28n8": Phase="Running", Reason="", readiness=true. Elapsed: 20.00543418s May 25 23:54:03.834: INFO: Pod "pod-subpath-test-downwardapi-28n8": Phase="Running", Reason="", readiness=true. Elapsed: 22.146899943s May 25 23:54:05.839: INFO: Pod "pod-subpath-test-downwardapi-28n8": Phase="Running", Reason="", readiness=true. Elapsed: 24.15140195s May 25 23:54:07.842: INFO: Pod "pod-subpath-test-downwardapi-28n8": Phase="Running", Reason="", readiness=true. Elapsed: 26.154101358s May 25 23:54:09.846: INFO: Pod "pod-subpath-test-downwardapi-28n8": Phase="Running", Reason="", readiness=true. 
Elapsed: 28.158741631s May 25 23:54:11.850: INFO: Pod "pod-subpath-test-downwardapi-28n8": Phase="Running", Reason="", readiness=true. Elapsed: 30.162845867s May 25 23:54:13.854: INFO: Pod "pod-subpath-test-downwardapi-28n8": Phase="Running", Reason="", readiness=true. Elapsed: 32.166611119s May 25 23:54:15.858: INFO: Pod "pod-subpath-test-downwardapi-28n8": Phase="Running", Reason="", readiness=true. Elapsed: 34.171029704s May 25 23:54:17.865: INFO: Pod "pod-subpath-test-downwardapi-28n8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.177980796s STEP: Saw pod success May 25 23:54:17.865: INFO: Pod "pod-subpath-test-downwardapi-28n8" satisfied condition "Succeeded or Failed" May 25 23:54:17.868: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-downwardapi-28n8 container test-container-subpath-downwardapi-28n8: STEP: delete the pod May 25 23:54:17.939: INFO: Waiting for pod pod-subpath-test-downwardapi-28n8 to disappear May 25 23:54:17.953: INFO: Pod pod-subpath-test-downwardapi-28n8 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-28n8 May 25 23:54:17.953: INFO: Deleting pod "pod-subpath-test-downwardapi-28n8" in namespace "subpath-8819" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:54:17.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8819" for this suite. • [SLOW TEST:38.998 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":288,"completed":51,"skipped":1048,"failed":0} SS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:54:17.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:55:18.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3999" for this suite. 
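The probe test that just finished asserts, over a full minute, that the pod never becomes Ready and its restart count stays at zero: a failing readiness probe keeps a pod out of service endpoints but, unlike a failing liveness probe, never restarts the container. A hypothetical pod spec reproducing that scenario (the name, image, and command are illustrative, not the test's manifest):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		// Name and image are illustrative, not the test's own manifest.
		ObjectMeta: metav1.ObjectMeta{Name: "never-ready"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox:1.29",
				Command: []string{"sleep", "3600"}, // stays alive; never crashes
				ReadinessProbe: &corev1.Probe{
					// In the v1.18-era API this embedded field is named
					// Handler; newer releases renamed it ProbeHandler.
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
					},
					PeriodSeconds: 5,
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}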
• [SLOW TEST:60.133 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":288,"completed":52,"skipped":1050,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:55:18.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 25 23:55:18.287: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5063 /api/v1/namespaces/watch-5063/configmaps/e2e-watch-test-resource-version 4ecfd084-842f-4b62-851a-158b52f43ad4 7677374 0 2020-05-25 23:55:18 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-25 23:55:18 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 25 23:55:18.287: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5063 /api/v1/namespaces/watch-5063/configmaps/e2e-watch-test-resource-version 4ecfd084-842f-4b62-851a-158b52f43ad4 7677375 0 2020-05-25 23:55:18 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-25 23:55:18 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:55:18.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5063" for this suite. 
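The two events above (MODIFIED at resourceVersion 7677374, then DELETED at 7677375) demonstrate the guarantee under test: a watch opened at an older resourceVersion replays every change made to the object after that version. A hedged client-go sketch of the same call pattern; the namespace matches the log, while the resourceVersion literal is illustrative since the log does not print the version returned by the first update:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig path the suite uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Starting the watch at an older resourceVersion replays every change
	// made after it. "7677373" is illustrative; the test uses the version
	// returned by its first update.
	w, err := client.CoreV1().ConfigMaps("watch-5063").Watch(context.TODO(),
		metav1.ListOptions{ResourceVersion: "7677373"})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Println(ev.Type) // expect MODIFIED, then DELETED, as logged above
	}
}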
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":288,"completed":53,"skipped":1056,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:55:18.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 23:55:18.381: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:55:22.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3741" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":288,"completed":54,"skipped":1083,"failed":0} ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:55:22.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 25 23:55:22.635: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
May 25 23:55:22.687: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 23:55:22.713: INFO: Number of nodes with available pods: 0 May 25 23:55:22.713: INFO: Node latest-worker is running more than one daemon pod May 25 23:55:23.740: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 23:55:23.744: INFO: Number of nodes with available pods: 0 May 25 23:55:23.744: INFO: Node latest-worker is running more than one daemon pod May 25 23:55:25.124: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 23:55:25.128: INFO: Number of nodes with available pods: 0 May 25 23:55:25.128: INFO: Node latest-worker is running more than one daemon pod May 25 23:55:25.758: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 23:55:25.763: INFO: Number of nodes with available pods: 0 May 25 23:55:25.763: INFO: Node latest-worker is running more than one daemon pod May 25 23:55:26.719: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 23:55:26.723: INFO: Number of nodes with available pods: 0 May 25 23:55:26.723: INFO: Node latest-worker is running more than one daemon pod May 25 23:55:27.730: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 23:55:27.734: INFO: Number of nodes with available pods: 2 May 25 23:55:27.734: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 25 23:55:27.839: INFO: Wrong image for pod: daemon-set-7w52m. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 23:55:27.839: INFO: Wrong image for pod: daemon-set-bwffq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 23:55:27.882: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 23:55:28.887: INFO: Wrong image for pod: daemon-set-7w52m. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 23:55:28.887: INFO: Wrong image for pod: daemon-set-bwffq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 23:55:28.892: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 23:55:29.887: INFO: Wrong image for pod: daemon-set-7w52m. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 23:55:29.887: INFO: Wrong image for pod: daemon-set-bwffq. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 23:55:29.892: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 23:55:30.895: INFO: Wrong image for pod: daemon-set-7w52m. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 23:55:30.895: INFO: Wrong image for pod: daemon-set-bwffq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 23:55:30.900: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 23:55:31.887: INFO: Wrong image for pod: daemon-set-7w52m. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 23:55:31.887: INFO: Pod daemon-set-7w52m is not available May 25 23:55:31.887: INFO: Wrong image for pod: daemon-set-bwffq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 23:55:31.892: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 23:55:32.887: INFO: Wrong image for pod: daemon-set-7w52m. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 23:55:32.887: INFO: Pod daemon-set-7w52m is not available May 25 23:55:32.887: INFO: Wrong image for pod: daemon-set-bwffq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 23:55:32.891: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 23:55:33.887: INFO: Wrong image for pod: daemon-set-7w52m. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 23:55:33.887: INFO: Pod daemon-set-7w52m is not available May 25 23:55:33.887: INFO: Wrong image for pod: daemon-set-bwffq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 23:55:33.891: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 23:55:34.893: INFO: Wrong image for pod: daemon-set-bwffq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 23:55:34.893: INFO: Pod daemon-set-q9tvt is not available May 25 23:55:34.962: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 23:55:35.887: INFO: Wrong image for pod: daemon-set-bwffq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
May 25 23:55:35.887: INFO: Pod daemon-set-q9tvt is not available May 25 23:55:35.891: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 23:55:36.973: INFO: Wrong image for pod: daemon-set-bwffq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 23:55:36.974: INFO: Pod daemon-set-q9tvt is not available May 25 23:55:36.978: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 23:55:37.886: INFO: Wrong image for pod: daemon-set-bwffq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 23:55:37.886: INFO: Pod daemon-set-q9tvt is not available May 25 23:55:37.890: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 23:55:38.998: INFO: Wrong image for pod: daemon-set-bwffq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 23:55:39.001: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 23:55:39.887: INFO: Wrong image for pod: daemon-set-bwffq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 23:55:39.892: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 23:55:40.889: INFO: Wrong image for pod: daemon-set-bwffq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 23:55:40.889: INFO: Pod daemon-set-bwffq is not available May 25 23:55:40.894: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 23:55:41.888: INFO: Wrong image for pod: daemon-set-bwffq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 23:55:41.888: INFO: Pod daemon-set-bwffq is not available May 25 23:55:41.892: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 23:55:42.888: INFO: Wrong image for pod: daemon-set-bwffq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 23:55:42.888: INFO: Pod daemon-set-bwffq is not available May 25 23:55:42.892: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 23:55:43.887: INFO: Wrong image for pod: daemon-set-bwffq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
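The recurring "DaemonSet pods can't tolerate node latest-control-plane" entries are expected, not errors: the control-plane node carries the node-role.kubernetes.io/master:NoSchedule taint, so the framework excludes it when counting nodes that should run a daemon pod. If the workload were meant to land on tainted control-plane nodes as well, the pod template would need a matching toleration; a sketch, reusing the hypothetical names from the previous example:

# Tolerate the control-plane taint shown in the log so the DaemonSet
# schedules onto that node too (namespace and name remain assumptions).
kubectl patch -n ds-demo daemonset/daemon-set --type=json -p '[
  {"op": "add", "path": "/spec/template/spec/tolerations", "value": [
    {"key": "node-role.kubernetes.io/master",
     "operator": "Exists",
     "effect": "NoSchedule"}
  ]}
]'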
May 25 23:55:43.887: INFO: Pod daemon-set-bwffq is not available May 25 23:55:43.891: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 23:55:44.887: INFO: Wrong image for pod: daemon-set-bwffq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 25 23:55:44.887: INFO: Pod daemon-set-bwffq is not available May 25 23:55:44.891: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 23:55:45.887: INFO: Pod daemon-set-fppnc is not available May 25 23:55:45.891: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. May 25 23:55:45.896: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 23:55:45.899: INFO: Number of nodes with available pods: 1 May 25 23:55:45.899: INFO: Node latest-worker2 is running more than one daemon pod May 25 23:55:46.905: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 23:55:46.909: INFO: Number of nodes with available pods: 1 May 25 23:55:46.909: INFO: Node latest-worker2 is running more than one daemon pod May 25 23:55:47.905: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 23:55:47.909: INFO: Number of nodes with available pods: 1 May 25 23:55:47.910: INFO: Node latest-worker2 is running more than one daemon pod May 25 23:55:48.905: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 23:55:48.910: INFO: Number of nodes with available pods: 2 May 25 23:55:48.910: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-724, will wait for the garbage collector to delete the pods May 25 23:55:48.984: INFO: Deleting DaemonSet.extensions daemon-set took: 6.86477ms May 25 23:55:49.384: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.287405ms May 25 23:55:55.308: INFO: Number of nodes with available pods: 0 May 25 23:55:55.308: INFO: Number of running nodes: 0, number of available pods: 0 May 25 23:55:55.310: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-724/daemonsets","resourceVersion":"7677600"},"items":null} May 25 23:55:55.312: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-724/pods","resourceVersion":"7677600"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:55:55.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-724" for this suite. • [SLOW TEST:32.786 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":288,"completed":55,"skipped":1083,"failed":0} SSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:55:55.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-8496/configmap-test-3cb74996-5fd9-492e-be43-f80606b9969b STEP: Creating a pod to test consume configMaps May 25 23:55:55.448: INFO: Waiting up to 5m0s for pod "pod-configmaps-891ba692-117e-4801-a27c-665501137960" in namespace "configmap-8496" to be "Succeeded or Failed" May 25 23:55:55.463: INFO: Pod "pod-configmaps-891ba692-117e-4801-a27c-665501137960": Phase="Pending", Reason="", readiness=false. Elapsed: 14.299671ms May 25 23:55:57.467: INFO: Pod "pod-configmaps-891ba692-117e-4801-a27c-665501137960": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018603553s May 25 23:55:59.470: INFO: Pod "pod-configmaps-891ba692-117e-4801-a27c-665501137960": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021782311s STEP: Saw pod success May 25 23:55:59.470: INFO: Pod "pod-configmaps-891ba692-117e-4801-a27c-665501137960" satisfied condition "Succeeded or Failed" May 25 23:55:59.472: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-891ba692-117e-4801-a27c-665501137960 container env-test: STEP: delete the pod May 25 23:55:59.510: INFO: Waiting for pod pod-configmaps-891ba692-117e-4801-a27c-665501137960 to disappear May 25 23:55:59.516: INFO: Pod pod-configmaps-891ba692-117e-4801-a27c-665501137960 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:55:59.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8496" for this suite. 
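This ConfigMap spec creates a key/value pair, injects it into a pod's environment via configMapKeyRef, and asserts on the container's output before tearing the pod down. Reproduced by hand it looks roughly like this (all names are placeholders; the jsonpath form of kubectl wait needs kubectl 1.23 or newer):

kubectl create namespace cm-demo
kubectl create configmap -n cm-demo demo-config --from-literal=DATA_1=value-1

# Pod that exports the ConfigMap key as an env var, prints it, and exits.
kubectl apply -n cm-demo -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.36
    command: ["sh", "-c", "echo CONFIG_DATA_1=$CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: demo-config
          key: DATA_1
EOF

# The test's "Succeeded or Failed" wait, then the log fetch.
kubectl wait -n cm-demo pod/cm-env-demo --for=jsonpath='{.status.phase}'=Succeeded --timeout=120s
kubectl logs -n cm-demo cm-env-demo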
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":56,"skipped":1091,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:55:59.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2789.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2789.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2789.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2789.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2789.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2789.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 25 23:56:05.710: INFO: DNS probes using dns-2789/dns-test-b0925f46-c4df-4c6f-a8ce-786a77e9ef57 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:56:05.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2789" for this suite. 
• [SLOW TEST:6.266 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":288,"completed":57,"skipped":1099,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:56:05.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:56:12.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9192" for this suite. • [SLOW TEST:6.817 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":288,"completed":58,"skipped":1108,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:56:12.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
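The spec beginning here drives container lifecycle hooks: a helper pod is created first to receive hook traffic, then a pod whose postStart exec hook fires as its container starts, and the test confirms the hook ran before deleting the pod, which is why the log only shows create/check/delete steps. The shape of such a pod, stripped to a sketch with made-up names:

# Pod with a postStart exec hook. The kubelet runs the hook in the same
# container and will not mark the container Running until it completes
# (echo stands in for the callback the e2e helper pod records).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: poststart-demo
spec:
  containers:
  - name: main
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo poststart ran > /tmp/hook.log"]
EOF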
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 25 23:56:21.318: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 25 23:56:21.399: INFO: Pod pod-with-poststart-exec-hook still exists May 25 23:56:23.399: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 25 23:56:23.405: INFO: Pod pod-with-poststart-exec-hook still exists May 25 23:56:25.399: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 25 23:56:25.404: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:56:25.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-674" for this suite. • [SLOW TEST:12.804 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":288,"completed":59,"skipped":1126,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:56:25.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 25 23:56:25.568: INFO: Waiting up to 5m0s for pod "pod-16c5f3da-d768-43e0-b774-b712c2d65dfa" in namespace "emptydir-820" to be "Succeeded or Failed" May 25 23:56:25.595: INFO: Pod "pod-16c5f3da-d768-43e0-b774-b712c2d65dfa": Phase="Pending", Reason="", readiness=false. Elapsed: 27.775424ms May 25 23:56:27.600: INFO: Pod "pod-16c5f3da-d768-43e0-b774-b712c2d65dfa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031893269s May 25 23:56:29.605: INFO: Pod "pod-16c5f3da-d768-43e0-b774-b712c2d65dfa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.03715598s STEP: Saw pod success May 25 23:56:29.605: INFO: Pod "pod-16c5f3da-d768-43e0-b774-b712c2d65dfa" satisfied condition "Succeeded or Failed" May 25 23:56:29.608: INFO: Trying to get logs from node latest-worker2 pod pod-16c5f3da-d768-43e0-b774-b712c2d65dfa container test-container: STEP: delete the pod May 25 23:56:29.655: INFO: Waiting for pod pod-16c5f3da-d768-43e0-b774-b712c2d65dfa to disappear May 25 23:56:29.661: INFO: Pod pod-16c5f3da-d768-43e0-b774-b712c2d65dfa no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:56:29.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-820" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":60,"skipped":1157,"failed":0} SSSS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:56:29.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-7139 May 25 23:56:33.802: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7139 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 25 23:56:36.861: INFO: stderr: "I0525 23:56:36.744799 1024 log.go:172] (0xc0000e8fd0) (0xc000855ea0) Create stream\nI0525 23:56:36.744846 1024 log.go:172] (0xc0000e8fd0) (0xc000855ea0) Stream added, broadcasting: 1\nI0525 23:56:36.747759 1024 log.go:172] (0xc0000e8fd0) Reply frame received for 1\nI0525 23:56:36.747804 1024 log.go:172] (0xc0000e8fd0) (0xc0006f0500) Create stream\nI0525 23:56:36.747818 1024 log.go:172] (0xc0000e8fd0) (0xc0006f0500) Stream added, broadcasting: 3\nI0525 23:56:36.748905 1024 log.go:172] (0xc0000e8fd0) Reply frame received for 3\nI0525 23:56:36.748946 1024 log.go:172] (0xc0000e8fd0) (0xc0006aa500) Create stream\nI0525 23:56:36.748965 1024 log.go:172] (0xc0000e8fd0) (0xc0006aa500) Stream added, broadcasting: 5\nI0525 23:56:36.750494 1024 log.go:172] (0xc0000e8fd0) Reply frame received for 5\nI0525 23:56:36.844902 1024 log.go:172] (0xc0000e8fd0) Data frame received for 5\nI0525 23:56:36.844925 1024 log.go:172] (0xc0006aa500) (5) Data frame handling\nI0525 23:56:36.844939 1024 log.go:172] (0xc0006aa500) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0525 23:56:36.850816 1024 log.go:172] (0xc0000e8fd0) Data frame received for 3\nI0525 23:56:36.850834 1024 log.go:172] (0xc0006f0500) (3) Data frame handling\nI0525 23:56:36.850859 1024 log.go:172] 
(0xc0006f0500) (3) Data frame sent\nI0525 23:56:36.851728 1024 log.go:172] (0xc0000e8fd0) Data frame received for 3\nI0525 23:56:36.851748 1024 log.go:172] (0xc0006f0500) (3) Data frame handling\nI0525 23:56:36.851760 1024 log.go:172] (0xc0000e8fd0) Data frame received for 5\nI0525 23:56:36.851772 1024 log.go:172] (0xc0006aa500) (5) Data frame handling\nI0525 23:56:36.853869 1024 log.go:172] (0xc0000e8fd0) Data frame received for 1\nI0525 23:56:36.853906 1024 log.go:172] (0xc000855ea0) (1) Data frame handling\nI0525 23:56:36.853931 1024 log.go:172] (0xc000855ea0) (1) Data frame sent\nI0525 23:56:36.853950 1024 log.go:172] (0xc0000e8fd0) (0xc000855ea0) Stream removed, broadcasting: 1\nI0525 23:56:36.853974 1024 log.go:172] (0xc0000e8fd0) Go away received\nI0525 23:56:36.854367 1024 log.go:172] (0xc0000e8fd0) (0xc000855ea0) Stream removed, broadcasting: 1\nI0525 23:56:36.854388 1024 log.go:172] (0xc0000e8fd0) (0xc0006f0500) Stream removed, broadcasting: 3\nI0525 23:56:36.854399 1024 log.go:172] (0xc0000e8fd0) (0xc0006aa500) Stream removed, broadcasting: 5\n" May 25 23:56:36.861: INFO: stdout: "iptables" May 25 23:56:36.861: INFO: proxyMode: iptables May 25 23:56:36.866: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 25 23:56:36.891: INFO: Pod kube-proxy-mode-detector still exists May 25 23:56:38.892: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 25 23:56:38.896: INFO: Pod kube-proxy-mode-detector still exists May 25 23:56:40.892: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 25 23:56:40.896: INFO: Pod kube-proxy-mode-detector still exists May 25 23:56:42.892: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 25 23:56:42.896: INFO: Pod kube-proxy-mode-detector still exists May 25 23:56:44.892: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 25 23:56:44.925: INFO: Pod kube-proxy-mode-detector still exists May 25 23:56:46.892: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 25 23:56:46.896: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-7139 STEP: creating replication controller affinity-nodeport-timeout in namespace services-7139 I0525 23:56:46.972494 7 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-7139, replica count: 3 I0525 23:56:50.022985 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 23:56:53.023265 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 23:56:56.023580 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 25 23:56:56.035: INFO: Creating new exec pod May 25 23:57:01.051: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7139 execpod-affinitybgh6h -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' May 25 23:57:01.333: INFO: stderr: "I0525 23:57:01.194865 1058 log.go:172] (0xc000bacdc0) (0xc000a4e6e0) Create stream\nI0525 23:57:01.195482 1058 log.go:172] (0xc000bacdc0) (0xc000a4e6e0) Stream added, broadcasting: 1\nI0525 23:57:01.199405 1058 log.go:172] (0xc000bacdc0) Reply frame received for 1\nI0525 
23:57:01.199456 1058 log.go:172] (0xc000bacdc0) (0xc000646a00) Create stream\nI0525 23:57:01.199492 1058 log.go:172] (0xc000bacdc0) (0xc000646a00) Stream added, broadcasting: 3\nI0525 23:57:01.200501 1058 log.go:172] (0xc000bacdc0) Reply frame received for 3\nI0525 23:57:01.200545 1058 log.go:172] (0xc000bacdc0) (0xc000646f00) Create stream\nI0525 23:57:01.200563 1058 log.go:172] (0xc000bacdc0) (0xc000646f00) Stream added, broadcasting: 5\nI0525 23:57:01.201646 1058 log.go:172] (0xc000bacdc0) Reply frame received for 5\nI0525 23:57:01.301348 1058 log.go:172] (0xc000bacdc0) Data frame received for 5\nI0525 23:57:01.301373 1058 log.go:172] (0xc000646f00) (5) Data frame handling\nI0525 23:57:01.301387 1058 log.go:172] (0xc000646f00) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nI0525 23:57:01.324893 1058 log.go:172] (0xc000bacdc0) Data frame received for 5\nI0525 23:57:01.324922 1058 log.go:172] (0xc000646f00) (5) Data frame handling\nI0525 23:57:01.324939 1058 log.go:172] (0xc000646f00) (5) Data frame sent\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0525 23:57:01.325239 1058 log.go:172] (0xc000bacdc0) Data frame received for 3\nI0525 23:57:01.325266 1058 log.go:172] (0xc000646a00) (3) Data frame handling\nI0525 23:57:01.325738 1058 log.go:172] (0xc000bacdc0) Data frame received for 5\nI0525 23:57:01.325768 1058 log.go:172] (0xc000646f00) (5) Data frame handling\nI0525 23:57:01.327378 1058 log.go:172] (0xc000bacdc0) Data frame received for 1\nI0525 23:57:01.327401 1058 log.go:172] (0xc000a4e6e0) (1) Data frame handling\nI0525 23:57:01.327419 1058 log.go:172] (0xc000a4e6e0) (1) Data frame sent\nI0525 23:57:01.327447 1058 log.go:172] (0xc000bacdc0) (0xc000a4e6e0) Stream removed, broadcasting: 1\nI0525 23:57:01.327472 1058 log.go:172] (0xc000bacdc0) Go away received\nI0525 23:57:01.327803 1058 log.go:172] (0xc000bacdc0) (0xc000a4e6e0) Stream removed, broadcasting: 1\nI0525 23:57:01.327823 1058 log.go:172] (0xc000bacdc0) (0xc000646a00) Stream removed, broadcasting: 3\nI0525 23:57:01.327832 1058 log.go:172] (0xc000bacdc0) (0xc000646f00) Stream removed, broadcasting: 5\n" May 25 23:57:01.333: INFO: stdout: "" May 25 23:57:01.334: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7139 execpod-affinitybgh6h -- /bin/sh -x -c nc -zv -t -w 2 10.111.147.219 80' May 25 23:57:01.525: INFO: stderr: "I0525 23:57:01.462498 1079 log.go:172] (0xc0006d4210) (0xc0006480a0) Create stream\nI0525 23:57:01.462548 1079 log.go:172] (0xc0006d4210) (0xc0006480a0) Stream added, broadcasting: 1\nI0525 23:57:01.464518 1079 log.go:172] (0xc0006d4210) Reply frame received for 1\nI0525 23:57:01.464583 1079 log.go:172] (0xc0006d4210) (0xc0002379a0) Create stream\nI0525 23:57:01.464612 1079 log.go:172] (0xc0006d4210) (0xc0002379a0) Stream added, broadcasting: 3\nI0525 23:57:01.465747 1079 log.go:172] (0xc0006d4210) Reply frame received for 3\nI0525 23:57:01.465796 1079 log.go:172] (0xc0006d4210) (0xc00069a5a0) Create stream\nI0525 23:57:01.465809 1079 log.go:172] (0xc0006d4210) (0xc00069a5a0) Stream added, broadcasting: 5\nI0525 23:57:01.466618 1079 log.go:172] (0xc0006d4210) Reply frame received for 5\nI0525 23:57:01.518668 1079 log.go:172] (0xc0006d4210) Data frame received for 3\nI0525 23:57:01.518723 1079 log.go:172] (0xc0002379a0) (3) Data frame handling\nI0525 23:57:01.519072 1079 log.go:172] (0xc0006d4210) Data frame received for 5\nI0525 23:57:01.519176 1079 log.go:172] (0xc00069a5a0) (5) 
Data frame handling\nI0525 23:57:01.519228 1079 log.go:172] (0xc00069a5a0) (5) Data frame sent\nI0525 23:57:01.519255 1079 log.go:172] (0xc0006d4210) Data frame received for 5\nI0525 23:57:01.519274 1079 log.go:172] (0xc00069a5a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.111.147.219 80\nConnection to 10.111.147.219 80 port [tcp/http] succeeded!\nI0525 23:57:01.520961 1079 log.go:172] (0xc0006d4210) Data frame received for 1\nI0525 23:57:01.520979 1079 log.go:172] (0xc0006480a0) (1) Data frame handling\nI0525 23:57:01.521000 1079 log.go:172] (0xc0006480a0) (1) Data frame sent\nI0525 23:57:01.521014 1079 log.go:172] (0xc0006d4210) (0xc0006480a0) Stream removed, broadcasting: 1\nI0525 23:57:01.521057 1079 log.go:172] (0xc0006d4210) Go away received\nI0525 23:57:01.521484 1079 log.go:172] (0xc0006d4210) (0xc0006480a0) Stream removed, broadcasting: 1\nI0525 23:57:01.521500 1079 log.go:172] (0xc0006d4210) (0xc0002379a0) Stream removed, broadcasting: 3\nI0525 23:57:01.521508 1079 log.go:172] (0xc0006d4210) (0xc00069a5a0) Stream removed, broadcasting: 5\n" May 25 23:57:01.525: INFO: stdout: "" May 25 23:57:01.525: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7139 execpod-affinitybgh6h -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31484' May 25 23:57:01.751: INFO: stderr: "I0525 23:57:01.654105 1100 log.go:172] (0xc000a2d080) (0xc000ae85a0) Create stream\nI0525 23:57:01.654508 1100 log.go:172] (0xc000a2d080) (0xc000ae85a0) Stream added, broadcasting: 1\nI0525 23:57:01.669389 1100 log.go:172] (0xc000a2d080) Reply frame received for 1\nI0525 23:57:01.669539 1100 log.go:172] (0xc000a2d080) (0xc00060c5a0) Create stream\nI0525 23:57:01.669589 1100 log.go:172] (0xc000a2d080) (0xc00060c5a0) Stream added, broadcasting: 3\nI0525 23:57:01.671519 1100 log.go:172] (0xc000a2d080) Reply frame received for 3\nI0525 23:57:01.671586 1100 log.go:172] (0xc000a2d080) (0xc0005e0280) Create stream\nI0525 23:57:01.671606 1100 log.go:172] (0xc000a2d080) (0xc0005e0280) Stream added, broadcasting: 5\nI0525 23:57:01.674056 1100 log.go:172] (0xc000a2d080) Reply frame received for 5\nI0525 23:57:01.739405 1100 log.go:172] (0xc000a2d080) Data frame received for 5\nI0525 23:57:01.739445 1100 log.go:172] (0xc0005e0280) (5) Data frame handling\nI0525 23:57:01.739483 1100 log.go:172] (0xc0005e0280) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 31484\nI0525 23:57:01.745329 1100 log.go:172] (0xc000a2d080) Data frame received for 5\nI0525 23:57:01.745583 1100 log.go:172] (0xc0005e0280) (5) Data frame handling\nI0525 23:57:01.745626 1100 log.go:172] (0xc0005e0280) (5) Data frame sent\nConnection to 172.17.0.13 31484 port [tcp/31484] succeeded!\nI0525 23:57:01.745743 1100 log.go:172] (0xc000a2d080) Data frame received for 5\nI0525 23:57:01.745780 1100 log.go:172] (0xc0005e0280) (5) Data frame handling\nI0525 23:57:01.745808 1100 log.go:172] (0xc000a2d080) Data frame received for 3\nI0525 23:57:01.745824 1100 log.go:172] (0xc00060c5a0) (3) Data frame handling\nI0525 23:57:01.747622 1100 log.go:172] (0xc000a2d080) Data frame received for 1\nI0525 23:57:01.747645 1100 log.go:172] (0xc000ae85a0) (1) Data frame handling\nI0525 23:57:01.747664 1100 log.go:172] (0xc000ae85a0) (1) Data frame sent\nI0525 23:57:01.747676 1100 log.go:172] (0xc000a2d080) (0xc000ae85a0) Stream removed, broadcasting: 1\nI0525 23:57:01.747738 1100 log.go:172] (0xc000a2d080) Go away received\nI0525 23:57:01.748041 1100 log.go:172] (0xc000a2d080) (0xc000ae85a0) Stream removed, 
broadcasting: 1\nI0525 23:57:01.748058 1100 log.go:172] (0xc000a2d080) (0xc00060c5a0) Stream removed, broadcasting: 3\nI0525 23:57:01.748068 1100 log.go:172] (0xc000a2d080) (0xc0005e0280) Stream removed, broadcasting: 5\n" May 25 23:57:01.751: INFO: stdout: "" May 25 23:57:01.751: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7139 execpod-affinitybgh6h -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31484' May 25 23:57:01.974: INFO: stderr: "I0525 23:57:01.886775 1119 log.go:172] (0xc00003ab00) (0xc0006d75e0) Create stream\nI0525 23:57:01.886849 1119 log.go:172] (0xc00003ab00) (0xc0006d75e0) Stream added, broadcasting: 1\nI0525 23:57:01.889720 1119 log.go:172] (0xc00003ab00) Reply frame received for 1\nI0525 23:57:01.889760 1119 log.go:172] (0xc00003ab00) (0xc0003ace60) Create stream\nI0525 23:57:01.889773 1119 log.go:172] (0xc00003ab00) (0xc0003ace60) Stream added, broadcasting: 3\nI0525 23:57:01.890755 1119 log.go:172] (0xc00003ab00) Reply frame received for 3\nI0525 23:57:01.890785 1119 log.go:172] (0xc00003ab00) (0xc00020a140) Create stream\nI0525 23:57:01.890795 1119 log.go:172] (0xc00003ab00) (0xc00020a140) Stream added, broadcasting: 5\nI0525 23:57:01.891640 1119 log.go:172] (0xc00003ab00) Reply frame received for 5\nI0525 23:57:01.965834 1119 log.go:172] (0xc00003ab00) Data frame received for 5\nI0525 23:57:01.965873 1119 log.go:172] (0xc00020a140) (5) Data frame handling\nI0525 23:57:01.965895 1119 log.go:172] (0xc00020a140) (5) Data frame sent\nI0525 23:57:01.965905 1119 log.go:172] (0xc00003ab00) Data frame received for 5\nI0525 23:57:01.965914 1119 log.go:172] (0xc00020a140) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31484\nConnection to 172.17.0.12 31484 port [tcp/31484] succeeded!\nI0525 23:57:01.966039 1119 log.go:172] (0xc00020a140) (5) Data frame sent\nI0525 23:57:01.966130 1119 log.go:172] (0xc00003ab00) Data frame received for 3\nI0525 23:57:01.966192 1119 log.go:172] (0xc0003ace60) (3) Data frame handling\nI0525 23:57:01.966233 1119 log.go:172] (0xc00003ab00) Data frame received for 5\nI0525 23:57:01.966256 1119 log.go:172] (0xc00020a140) (5) Data frame handling\nI0525 23:57:01.967893 1119 log.go:172] (0xc00003ab00) Data frame received for 1\nI0525 23:57:01.967923 1119 log.go:172] (0xc0006d75e0) (1) Data frame handling\nI0525 23:57:01.967952 1119 log.go:172] (0xc0006d75e0) (1) Data frame sent\nI0525 23:57:01.967974 1119 log.go:172] (0xc00003ab00) (0xc0006d75e0) Stream removed, broadcasting: 1\nI0525 23:57:01.967999 1119 log.go:172] (0xc00003ab00) Go away received\nI0525 23:57:01.968421 1119 log.go:172] (0xc00003ab00) (0xc0006d75e0) Stream removed, broadcasting: 1\nI0525 23:57:01.968443 1119 log.go:172] (0xc00003ab00) (0xc0003ace60) Stream removed, broadcasting: 3\nI0525 23:57:01.968454 1119 log.go:172] (0xc00003ab00) (0xc00020a140) Stream removed, broadcasting: 5\n" May 25 23:57:01.974: INFO: stdout: "" May 25 23:57:01.975: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7139 execpod-affinitybgh6h -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31484/ ; done' May 25 23:57:02.309: INFO: stderr: "I0525 23:57:02.120967 1141 log.go:172] (0xc00052e2c0) (0xc0004a4a00) Create stream\nI0525 23:57:02.121047 1141 log.go:172] (0xc00052e2c0) (0xc0004a4a00) Stream added, broadcasting: 1\nI0525 23:57:02.123332 1141 log.go:172] (0xc00052e2c0) Reply frame 
received for 1\nI0525 23:57:02.123397 1141 log.go:172] (0xc00052e2c0) (0xc0004a59a0) Create stream\nI0525 23:57:02.123426 1141 log.go:172] (0xc00052e2c0) (0xc0004a59a0) Stream added, broadcasting: 3\nI0525 23:57:02.124634 1141 log.go:172] (0xc00052e2c0) Reply frame received for 3\nI0525 23:57:02.124692 1141 log.go:172] (0xc00052e2c0) (0xc00066d220) Create stream\nI0525 23:57:02.124706 1141 log.go:172] (0xc00052e2c0) (0xc00066d220) Stream added, broadcasting: 5\nI0525 23:57:02.125881 1141 log.go:172] (0xc00052e2c0) Reply frame received for 5\nI0525 23:57:02.176492 1141 log.go:172] (0xc00052e2c0) Data frame received for 5\nI0525 23:57:02.176540 1141 log.go:172] (0xc00066d220) (5) Data frame handling\nI0525 23:57:02.176556 1141 log.go:172] (0xc00066d220) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31484/\nI0525 23:57:02.176570 1141 log.go:172] (0xc00052e2c0) Data frame received for 3\nI0525 23:57:02.176595 1141 log.go:172] (0xc0004a59a0) (3) Data frame handling\nI0525 23:57:02.176612 1141 log.go:172] (0xc0004a59a0) (3) Data frame sent\nI0525 23:57:02.214449 1141 log.go:172] (0xc00052e2c0) Data frame received for 3\nI0525 23:57:02.214462 1141 log.go:172] (0xc0004a59a0) (3) Data frame handling\nI0525 23:57:02.214469 1141 log.go:172] (0xc0004a59a0) (3) Data frame sent\nI0525 23:57:02.215777 1141 log.go:172] (0xc00052e2c0) Data frame received for 5\nI0525 23:57:02.215823 1141 log.go:172] (0xc00066d220) (5) Data frame handling\nI0525 23:57:02.215841 1141 log.go:172] (0xc00066d220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31484/\nI0525 23:57:02.215868 1141 log.go:172] (0xc00052e2c0) Data frame received for 3\nI0525 23:57:02.215882 1141 log.go:172] (0xc0004a59a0) (3) Data frame handling\nI0525 23:57:02.215894 1141 log.go:172] (0xc0004a59a0) (3) Data frame sent\nI0525 23:57:02.223192 1141 log.go:172] (0xc00052e2c0) Data frame received for 3\nI0525 23:57:02.223204 1141 log.go:172] (0xc0004a59a0) (3) Data frame handling\nI0525 23:57:02.223209 1141 log.go:172] (0xc0004a59a0) (3) Data frame sent\nI0525 23:57:02.224259 1141 log.go:172] (0xc00052e2c0) Data frame received for 3\nI0525 23:57:02.224276 1141 log.go:172] (0xc00052e2c0) Data frame received for 5\nI0525 23:57:02.224294 1141 log.go:172] (0xc00066d220) (5) Data frame handling\nI0525 23:57:02.224301 1141 log.go:172] (0xc00066d220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31484/\nI0525 23:57:02.224331 1141 log.go:172] (0xc0004a59a0) (3) Data frame handling\nI0525 23:57:02.224367 1141 log.go:172] (0xc0004a59a0) (3) Data frame sent\nI0525 23:57:02.231001 1141 log.go:172] (0xc00052e2c0) Data frame received for 3\nI0525 23:57:02.231026 1141 log.go:172] (0xc0004a59a0) (3) Data frame handling\nI0525 23:57:02.231045 1141 log.go:172] (0xc0004a59a0) (3) Data frame sent\nI0525 23:57:02.231498 1141 log.go:172] (0xc00052e2c0) Data frame received for 5\nI0525 23:57:02.231510 1141 log.go:172] (0xc00066d220) (5) Data frame handling\nI0525 23:57:02.231522 1141 log.go:172] (0xc00066d220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31484/\nI0525 23:57:02.231548 1141 log.go:172] (0xc00052e2c0) Data frame received for 3\nI0525 23:57:02.231576 1141 log.go:172] (0xc0004a59a0) (3) Data frame handling\nI0525 23:57:02.231594 1141 log.go:172] (0xc0004a59a0) (3) Data frame sent\nI0525 23:57:02.235782 1141 log.go:172] (0xc00052e2c0) Data frame received for 3\nI0525 23:57:02.235795 1141 log.go:172] (0xc0004a59a0) 
(3) Data frame handling\nI0525 23:57:02.235803 1141 log.go:172] (0xc0004a59a0) (3) Data frame sent\nI0525 23:57:02.236448 1141 log.go:172] (0xc00052e2c0) Data frame received for 3\nI0525 23:57:02.236482 1141 log.go:172] (0xc0004a59a0) (3) Data frame handling\nI0525 23:57:02.236505 1141 log.go:172] (0xc0004a59a0) (3) Data frame sent\nI0525 23:57:02.236528 1141 log.go:172] (0xc00052e2c0) Data frame received for 5\nI0525 23:57:02.236538 1141 log.go:172] (0xc00066d220) (5) Data frame handling\nI0525 23:57:02.236555 1141 log.go:172] (0xc00066d220) (5) Data frame sent\nI0525 23:57:02.236570 1141 log.go:172] (0xc00052e2c0) Data frame received for 5\nI0525 23:57:02.236580 1141 log.go:172] (0xc00066d220) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31484/\nI0525 23:57:02.236617 1141 log.go:172] (0xc00066d220) (5) Data frame sent\nI0525 23:57:02.240482 1141 log.go:172] (0xc00052e2c0) Data frame received for 3\nI0525 23:57:02.240511 1141 log.go:172] (0xc0004a59a0) (3) Data frame handling\nI0525 23:57:02.240545 1141 log.go:172] (0xc0004a59a0) (3) Data frame sent\nI0525 23:57:02.240828 1141 log.go:172] (0xc00052e2c0) Data frame received for 3\nI0525 23:57:02.240866 1141 log.go:172] (0xc0004a59a0) (3) Data frame handling\nI0525 23:57:02.240877 1141 log.go:172] (0xc0004a59a0) (3) Data frame sent\nI0525 23:57:02.240892 1141 log.go:172] (0xc00052e2c0) Data frame received for 5\nI0525 23:57:02.240900 1141 log.go:172] (0xc00066d220) (5) Data frame handling\nI0525 23:57:02.240909 1141 log.go:172] (0xc00066d220) (5) Data frame sent\nI0525 23:57:02.240917 1141 log.go:172] (0xc00052e2c0) Data frame received for 5\nI0525 23:57:02.240925 1141 log.go:172] (0xc00066d220) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31484/\nI0525 23:57:02.240955 1141 log.go:172] (0xc00066d220) (5) Data frame sent\nI0525 23:57:02.246572 1141 log.go:172] (0xc00052e2c0) Data frame received for 3\nI0525 23:57:02.246588 1141 log.go:172] (0xc0004a59a0) (3) Data frame handling\nI0525 23:57:02.246610 1141 log.go:172] (0xc0004a59a0) (3) Data frame sent\nI0525 23:57:02.247035 1141 log.go:172] (0xc00052e2c0) Data frame received for 5\nI0525 23:57:02.247056 1141 log.go:172] (0xc00066d220) (5) Data frame handling\nI0525 23:57:02.247080 1141 log.go:172] (0xc00066d220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31484/\nI0525 23:57:02.247335 1141 log.go:172] (0xc00052e2c0) Data frame received for 3\nI0525 23:57:02.247368 1141 log.go:172] (0xc0004a59a0) (3) Data frame handling\nI0525 23:57:02.247387 1141 log.go:172] (0xc0004a59a0) (3) Data frame sent\nI0525 23:57:02.250931 1141 log.go:172] (0xc00052e2c0) Data frame received for 3\nI0525 23:57:02.250944 1141 log.go:172] (0xc0004a59a0) (3) Data frame handling\nI0525 23:57:02.250950 1141 log.go:172] (0xc0004a59a0) (3) Data frame sent\nI0525 23:57:02.251340 1141 log.go:172] (0xc00052e2c0) Data frame received for 3\nI0525 23:57:02.251369 1141 log.go:172] (0xc0004a59a0) (3) Data frame handling\nI0525 23:57:02.251393 1141 log.go:172] (0xc0004a59a0) (3) Data frame sent\nI0525 23:57:02.251422 1141 log.go:172] (0xc00052e2c0) Data frame received for 5\nI0525 23:57:02.251460 1141 log.go:172] (0xc00066d220) (5) Data frame handling\nI0525 23:57:02.251486 1141 log.go:172] (0xc00066d220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31484/\nI0525 23:57:02.255202 1141 log.go:172] (0xc00052e2c0) Data frame received for 3\nI0525 23:57:02.255215 1141 log.go:172] 
(0xc0004a59a0) (3) Data frame handling\nI0525 23:57:02.255222 1141 log.go:172] (0xc0004a59a0) (3) Data frame sent\nI0525 23:57:02.255542 1141 log.go:172] (0xc00052e2c0) Data frame received for 5\nI0525 23:57:02.255567 1141 log.go:172] (0xc00066d220) (5) Data frame handling\nI0525 23:57:02.255620 1141 log.go:172] (0xc00066d220) (5) Data frame sent\nI0525 23:57:02.255639 1141 log.go:172] (0xc00052e2c0) Data frame received for 3\nI0525 23:57:02.255647 1141 log.go:172] (0xc0004a59a0) (3) Data frame handling\nI0525 23:57:02.255654 1141 log.go:172] (0xc0004a59a0) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31484/\nI0525 23:57:02.261525 1141 log.go:172] (0xc00052e2c0) Data frame received for 3\nI0525 23:57:02.261550 1141 log.go:172] (0xc0004a59a0) (3) Data frame handling\nI0525 23:57:02.261562 1141 log.go:172] (0xc0004a59a0) (3) Data frame sent\nI0525 23:57:02.261969 1141 log.go:172] (0xc00052e2c0) Data frame received for 3\nI0525 23:57:02.261986 1141 log.go:172] (0xc0004a59a0) (3) Data frame handling\nI0525 23:57:02.262006 1141 log.go:172] (0xc0004a59a0) (3) Data frame sent\nI0525 23:57:02.262031 1141 log.go:172] (0xc00052e2c0) Data frame received for 5\nI0525 23:57:02.262062 1141 log.go:172] (0xc00066d220) (5) Data frame handling\nI0525 23:57:02.262077 1141 log.go:172] (0xc00066d220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31484/\nI0525 23:57:02.266250 1141 log.go:172] (0xc00052e2c0) Data frame received for 3\nI0525 23:57:02.266266 1141 log.go:172] (0xc0004a59a0) (3) Data frame handling\nI0525 23:57:02.266274 1141 log.go:172] (0xc0004a59a0) (3) Data frame sent\nI0525 23:57:02.266650 1141 log.go:172] (0xc00052e2c0) Data frame received for 3\nI0525 23:57:02.266670 1141 log.go:172] (0xc0004a59a0) (3) Data frame handling\nI0525 23:57:02.266677 1141 log.go:172] (0xc0004a59a0) (3) Data frame sent\nI0525 23:57:02.266687 1141 log.go:172] (0xc00052e2c0) Data frame received for 5\nI0525 23:57:02.266694 1141 log.go:172] (0xc00066d220) (5) Data frame handling\nI0525 23:57:02.266705 1141 log.go:172] (0xc00066d220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31484/\nI0525 23:57:02.273070 1141 log.go:172] (0xc00052e2c0) Data frame received for 3\nI0525 23:57:02.273085 1141 log.go:172] (0xc0004a59a0) (3) Data frame handling\nI0525 23:57:02.273093 1141 log.go:172] (0xc0004a59a0) (3) Data frame sent\nI0525 23:57:02.273950 1141 log.go:172] (0xc00052e2c0) Data frame received for 3\nI0525 23:57:02.273979 1141 log.go:172] (0xc00052e2c0) Data frame received for 5\nI0525 23:57:02.274010 1141 log.go:172] (0xc00066d220) (5) Data frame handling\nI0525 23:57:02.274028 1141 log.go:172] (0xc00066d220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31484/\nI0525 23:57:02.274048 1141 log.go:172] (0xc0004a59a0) (3) Data frame handling\nI0525 23:57:02.274062 1141 log.go:172] (0xc0004a59a0) (3) Data frame sent\nI0525 23:57:02.279011 1141 log.go:172] (0xc00052e2c0) Data frame received for 3\nI0525 23:57:02.279033 1141 log.go:172] (0xc0004a59a0) (3) Data frame handling\nI0525 23:57:02.279050 1141 log.go:172] (0xc0004a59a0) (3) Data frame sent\nI0525 23:57:02.279403 1141 log.go:172] (0xc00052e2c0) Data frame received for 5\nI0525 23:57:02.279415 1141 log.go:172] (0xc00066d220) (5) Data frame handling\nI0525 23:57:02.279423 1141 log.go:172] (0xc00066d220) (5) Data frame sent\nI0525 23:57:02.279430 1141 log.go:172] (0xc00052e2c0) Data frame received for 5\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://172.17.0.13:31484/I0525 23:57:02.279438 1141 log.go:172] (0xc00066d220) (5) Data frame handling\nI0525 23:57:02.279481 1141 log.go:172] (0xc00066d220) (5) Data frame sent\nI0525 23:57:02.279500 1141 log.go:172] (0xc00052e2c0) Data frame received for 3\nI0525 23:57:02.279514 1141 log.go:172] (0xc0004a59a0) (3) Data frame handling\nI0525 23:57:02.279527 1141 log.go:172] (0xc0004a59a0) (3) Data frame sent\n\nI0525 23:57:02.283293 1141 log.go:172] (0xc00052e2c0) Data frame received for 3\nI0525 23:57:02.283328 1141 log.go:172] (0xc0004a59a0) (3) Data frame handling\nI0525 23:57:02.283358 1141 log.go:172] (0xc0004a59a0) (3) Data frame sent\nI0525 23:57:02.284177 1141 log.go:172] (0xc00052e2c0) Data frame received for 3\nI0525 23:57:02.284189 1141 log.go:172] (0xc0004a59a0) (3) Data frame handling\nI0525 23:57:02.284197 1141 log.go:172] (0xc0004a59a0) (3) Data frame sent\nI0525 23:57:02.284217 1141 log.go:172] (0xc00052e2c0) Data frame received for 5\nI0525 23:57:02.284238 1141 log.go:172] (0xc00066d220) (5) Data frame handling\nI0525 23:57:02.284271 1141 log.go:172] (0xc00066d220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31484/\nI0525 23:57:02.288713 1141 log.go:172] (0xc00052e2c0) Data frame received for 3\nI0525 23:57:02.288733 1141 log.go:172] (0xc0004a59a0) (3) Data frame handling\nI0525 23:57:02.288754 1141 log.go:172] (0xc0004a59a0) (3) Data frame sent\nI0525 23:57:02.289361 1141 log.go:172] (0xc00052e2c0) Data frame received for 5\nI0525 23:57:02.289382 1141 log.go:172] (0xc00066d220) (5) Data frame handling\nI0525 23:57:02.289396 1141 log.go:172] (0xc00066d220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31484/\nI0525 23:57:02.289415 1141 log.go:172] (0xc00052e2c0) Data frame received for 3\nI0525 23:57:02.289428 1141 log.go:172] (0xc0004a59a0) (3) Data frame handling\nI0525 23:57:02.289440 1141 log.go:172] (0xc0004a59a0) (3) Data frame sent\nI0525 23:57:02.293834 1141 log.go:172] (0xc00052e2c0) Data frame received for 3\nI0525 23:57:02.293854 1141 log.go:172] (0xc0004a59a0) (3) Data frame handling\nI0525 23:57:02.293865 1141 log.go:172] (0xc0004a59a0) (3) Data frame sent\nI0525 23:57:02.294410 1141 log.go:172] (0xc00052e2c0) Data frame received for 5\nI0525 23:57:02.294434 1141 log.go:172] (0xc00066d220) (5) Data frame handling\nI0525 23:57:02.294448 1141 log.go:172] (0xc00066d220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31484/\nI0525 23:57:02.294491 1141 log.go:172] (0xc00052e2c0) Data frame received for 3\nI0525 23:57:02.294525 1141 log.go:172] (0xc0004a59a0) (3) Data frame handling\nI0525 23:57:02.294548 1141 log.go:172] (0xc0004a59a0) (3) Data frame sent\nI0525 23:57:02.299001 1141 log.go:172] (0xc00052e2c0) Data frame received for 3\nI0525 23:57:02.299022 1141 log.go:172] (0xc0004a59a0) (3) Data frame handling\nI0525 23:57:02.299040 1141 log.go:172] (0xc0004a59a0) (3) Data frame sent\nI0525 23:57:02.299715 1141 log.go:172] (0xc00052e2c0) Data frame received for 5\nI0525 23:57:02.299735 1141 log.go:172] (0xc00066d220) (5) Data frame handling\nI0525 23:57:02.299768 1141 log.go:172] (0xc00052e2c0) Data frame received for 3\nI0525 23:57:02.299798 1141 log.go:172] (0xc0004a59a0) (3) Data frame handling\nI0525 23:57:02.303111 1141 log.go:172] (0xc00052e2c0) Data frame received for 1\nI0525 23:57:02.303142 1141 log.go:172] (0xc0004a4a00) (1) Data frame handling\nI0525 23:57:02.303171 1141 log.go:172] (0xc0004a4a00) (1) Data frame sent\nI0525 
23:57:02.303190 1141 log.go:172] (0xc00052e2c0) (0xc0004a4a00) Stream removed, broadcasting: 1\nI0525 23:57:02.303213 1141 log.go:172] (0xc00052e2c0) Go away received\nI0525 23:57:02.303798 1141 log.go:172] (0xc00052e2c0) (0xc0004a4a00) Stream removed, broadcasting: 1\nI0525 23:57:02.303819 1141 log.go:172] (0xc00052e2c0) (0xc0004a59a0) Stream removed, broadcasting: 3\nI0525 23:57:02.303830 1141 log.go:172] (0xc00052e2c0) (0xc00066d220) Stream removed, broadcasting: 5\n" May 25 23:57:02.310: INFO: stdout: "\naffinity-nodeport-timeout-m2l68\naffinity-nodeport-timeout-m2l68\naffinity-nodeport-timeout-m2l68\naffinity-nodeport-timeout-m2l68\naffinity-nodeport-timeout-m2l68\naffinity-nodeport-timeout-m2l68\naffinity-nodeport-timeout-m2l68\naffinity-nodeport-timeout-m2l68\naffinity-nodeport-timeout-m2l68\naffinity-nodeport-timeout-m2l68\naffinity-nodeport-timeout-m2l68\naffinity-nodeport-timeout-m2l68\naffinity-nodeport-timeout-m2l68\naffinity-nodeport-timeout-m2l68\naffinity-nodeport-timeout-m2l68\naffinity-nodeport-timeout-m2l68" May 25 23:57:02.310: INFO: Received response from host: May 25 23:57:02.310: INFO: Received response from host: affinity-nodeport-timeout-m2l68 May 25 23:57:02.310: INFO: Received response from host: affinity-nodeport-timeout-m2l68 May 25 23:57:02.310: INFO: Received response from host: affinity-nodeport-timeout-m2l68 May 25 23:57:02.310: INFO: Received response from host: affinity-nodeport-timeout-m2l68 May 25 23:57:02.310: INFO: Received response from host: affinity-nodeport-timeout-m2l68 May 25 23:57:02.310: INFO: Received response from host: affinity-nodeport-timeout-m2l68 May 25 23:57:02.310: INFO: Received response from host: affinity-nodeport-timeout-m2l68 May 25 23:57:02.310: INFO: Received response from host: affinity-nodeport-timeout-m2l68 May 25 23:57:02.310: INFO: Received response from host: affinity-nodeport-timeout-m2l68 May 25 23:57:02.310: INFO: Received response from host: affinity-nodeport-timeout-m2l68 May 25 23:57:02.310: INFO: Received response from host: affinity-nodeport-timeout-m2l68 May 25 23:57:02.310: INFO: Received response from host: affinity-nodeport-timeout-m2l68 May 25 23:57:02.310: INFO: Received response from host: affinity-nodeport-timeout-m2l68 May 25 23:57:02.310: INFO: Received response from host: affinity-nodeport-timeout-m2l68 May 25 23:57:02.310: INFO: Received response from host: affinity-nodeport-timeout-m2l68 May 25 23:57:02.310: INFO: Received response from host: affinity-nodeport-timeout-m2l68 May 25 23:57:02.310: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7139 execpod-affinitybgh6h -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:31484/' May 25 23:57:02.517: INFO: stderr: "I0525 23:57:02.446380 1163 log.go:172] (0xc000b391e0) (0xc000862000) Create stream\nI0525 23:57:02.446454 1163 log.go:172] (0xc000b391e0) (0xc000862000) Stream added, broadcasting: 1\nI0525 23:57:02.450874 1163 log.go:172] (0xc000b391e0) Reply frame received for 1\nI0525 23:57:02.450915 1163 log.go:172] (0xc000b391e0) (0xc00074f040) Create stream\nI0525 23:57:02.450925 1163 log.go:172] (0xc000b391e0) (0xc00074f040) Stream added, broadcasting: 3\nI0525 23:57:02.451640 1163 log.go:172] (0xc000b391e0) Reply frame received for 3\nI0525 23:57:02.451670 1163 log.go:172] (0xc000b391e0) (0xc000674320) Create stream\nI0525 23:57:02.451677 1163 log.go:172] (0xc000b391e0) (0xc000674320) Stream added, broadcasting: 5\nI0525 23:57:02.452369 1163 log.go:172] 
(0xc000b391e0) Reply frame received for 5\nI0525 23:57:02.501092 1163 log.go:172] (0xc000b391e0) Data frame received for 5\nI0525 23:57:02.501331 1163 log.go:172] (0xc000674320) (5) Data frame handling\nI0525 23:57:02.501356 1163 log.go:172] (0xc000674320) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31484/\nI0525 23:57:02.506960 1163 log.go:172] (0xc000b391e0) Data frame received for 3\nI0525 23:57:02.506986 1163 log.go:172] (0xc00074f040) (3) Data frame handling\nI0525 23:57:02.507007 1163 log.go:172] (0xc00074f040) (3) Data frame sent\nI0525 23:57:02.507593 1163 log.go:172] (0xc000b391e0) Data frame received for 5\nI0525 23:57:02.507639 1163 log.go:172] (0xc000674320) (5) Data frame handling\nI0525 23:57:02.507674 1163 log.go:172] (0xc000b391e0) Data frame received for 3\nI0525 23:57:02.507697 1163 log.go:172] (0xc00074f040) (3) Data frame handling\nI0525 23:57:02.509644 1163 log.go:172] (0xc000b391e0) Data frame received for 1\nI0525 23:57:02.509673 1163 log.go:172] (0xc000862000) (1) Data frame handling\nI0525 23:57:02.509695 1163 log.go:172] (0xc000862000) (1) Data frame sent\nI0525 23:57:02.509733 1163 log.go:172] (0xc000b391e0) (0xc000862000) Stream removed, broadcasting: 1\nI0525 23:57:02.510099 1163 log.go:172] (0xc000b391e0) (0xc000862000) Stream removed, broadcasting: 1\nI0525 23:57:02.510122 1163 log.go:172] (0xc000b391e0) (0xc00074f040) Stream removed, broadcasting: 3\nI0525 23:57:02.510143 1163 log.go:172] (0xc000b391e0) (0xc000674320) Stream removed, broadcasting: 5\n" May 25 23:57:02.518: INFO: stdout: "affinity-nodeport-timeout-m2l68" May 25 23:57:17.518: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7139 execpod-affinitybgh6h -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:31484/' May 25 23:57:17.715: INFO: stderr: "I0525 23:57:17.640350 1183 log.go:172] (0xc000adb340) (0xc00012b040) Create stream\nI0525 23:57:17.640394 1183 log.go:172] (0xc000adb340) (0xc00012b040) Stream added, broadcasting: 1\nI0525 23:57:17.642621 1183 log.go:172] (0xc000adb340) Reply frame received for 1\nI0525 23:57:17.642650 1183 log.go:172] (0xc000adb340) (0xc0001ec1e0) Create stream\nI0525 23:57:17.642658 1183 log.go:172] (0xc000adb340) (0xc0001ec1e0) Stream added, broadcasting: 3\nI0525 23:57:17.643302 1183 log.go:172] (0xc000adb340) Reply frame received for 3\nI0525 23:57:17.643324 1183 log.go:172] (0xc000adb340) (0xc00012b5e0) Create stream\nI0525 23:57:17.643330 1183 log.go:172] (0xc000adb340) (0xc00012b5e0) Stream added, broadcasting: 5\nI0525 23:57:17.643995 1183 log.go:172] (0xc000adb340) Reply frame received for 5\nI0525 23:57:17.705684 1183 log.go:172] (0xc000adb340) Data frame received for 5\nI0525 23:57:17.705716 1183 log.go:172] (0xc00012b5e0) (5) Data frame handling\nI0525 23:57:17.705738 1183 log.go:172] (0xc00012b5e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31484/\nI0525 23:57:17.707737 1183 log.go:172] (0xc000adb340) Data frame received for 3\nI0525 23:57:17.707778 1183 log.go:172] (0xc0001ec1e0) (3) Data frame handling\nI0525 23:57:17.707806 1183 log.go:172] (0xc0001ec1e0) (3) Data frame sent\nI0525 23:57:17.708216 1183 log.go:172] (0xc000adb340) Data frame received for 5\nI0525 23:57:17.708248 1183 log.go:172] (0xc00012b5e0) (5) Data frame handling\nI0525 23:57:17.708299 1183 log.go:172] (0xc000adb340) Data frame received for 3\nI0525 23:57:17.708328 1183 log.go:172] (0xc0001ec1e0) (3) Data frame handling\nI0525 
23:57:17.710059 1183 log.go:172] (0xc000adb340) Data frame received for 1\nI0525 23:57:17.710094 1183 log.go:172] (0xc00012b040) (1) Data frame handling\nI0525 23:57:17.710112 1183 log.go:172] (0xc00012b040) (1) Data frame sent\nI0525 23:57:17.710151 1183 log.go:172] (0xc000adb340) (0xc00012b040) Stream removed, broadcasting: 1\nI0525 23:57:17.710196 1183 log.go:172] (0xc000adb340) Go away received\nI0525 23:57:17.710790 1183 log.go:172] (0xc000adb340) (0xc00012b040) Stream removed, broadcasting: 1\nI0525 23:57:17.710815 1183 log.go:172] (0xc000adb340) (0xc0001ec1e0) Stream removed, broadcasting: 3\nI0525 23:57:17.710827 1183 log.go:172] (0xc000adb340) (0xc00012b5e0) Stream removed, broadcasting: 5\n" May 25 23:57:17.715: INFO: stdout: "affinity-nodeport-timeout-m2l68" May 25 23:57:32.715: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7139 execpod-affinitybgh6h -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:31484/' May 25 23:57:32.941: INFO: stderr: "I0525 23:57:32.849035 1202 log.go:172] (0xc00092e210) (0xc000678460) Create stream\nI0525 23:57:32.849282 1202 log.go:172] (0xc00092e210) (0xc000678460) Stream added, broadcasting: 1\nI0525 23:57:32.852526 1202 log.go:172] (0xc00092e210) Reply frame received for 1\nI0525 23:57:32.852586 1202 log.go:172] (0xc00092e210) (0xc000550140) Create stream\nI0525 23:57:32.852611 1202 log.go:172] (0xc00092e210) (0xc000550140) Stream added, broadcasting: 3\nI0525 23:57:32.853514 1202 log.go:172] (0xc00092e210) Reply frame received for 3\nI0525 23:57:32.853536 1202 log.go:172] (0xc00092e210) (0xc000420c80) Create stream\nI0525 23:57:32.853546 1202 log.go:172] (0xc00092e210) (0xc000420c80) Stream added, broadcasting: 5\nI0525 23:57:32.854432 1202 log.go:172] (0xc00092e210) Reply frame received for 5\nI0525 23:57:32.925725 1202 log.go:172] (0xc00092e210) Data frame received for 5\nI0525 23:57:32.925763 1202 log.go:172] (0xc000420c80) (5) Data frame handling\nI0525 23:57:32.925783 1202 log.go:172] (0xc000420c80) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31484/\nI0525 23:57:32.931853 1202 log.go:172] (0xc00092e210) Data frame received for 3\nI0525 23:57:32.931882 1202 log.go:172] (0xc000550140) (3) Data frame handling\nI0525 23:57:32.931908 1202 log.go:172] (0xc000550140) (3) Data frame sent\nI0525 23:57:32.932262 1202 log.go:172] (0xc00092e210) Data frame received for 3\nI0525 23:57:32.932359 1202 log.go:172] (0xc000550140) (3) Data frame handling\nI0525 23:57:32.932626 1202 log.go:172] (0xc00092e210) Data frame received for 5\nI0525 23:57:32.932651 1202 log.go:172] (0xc000420c80) (5) Data frame handling\nI0525 23:57:32.934495 1202 log.go:172] (0xc00092e210) Data frame received for 1\nI0525 23:57:32.934549 1202 log.go:172] (0xc000678460) (1) Data frame handling\nI0525 23:57:32.934647 1202 log.go:172] (0xc000678460) (1) Data frame sent\nI0525 23:57:32.934708 1202 log.go:172] (0xc00092e210) (0xc000678460) Stream removed, broadcasting: 1\nI0525 23:57:32.934763 1202 log.go:172] (0xc00092e210) Go away received\nI0525 23:57:32.935164 1202 log.go:172] (0xc00092e210) (0xc000678460) Stream removed, broadcasting: 1\nI0525 23:57:32.935193 1202 log.go:172] (0xc00092e210) (0xc000550140) Stream removed, broadcasting: 3\nI0525 23:57:32.935206 1202 log.go:172] (0xc00092e210) (0xc000420c80) Stream removed, broadcasting: 5\n" May 25 23:57:32.941: INFO: stdout: "affinity-nodeport-timeout-n95zc" May 25 23:57:32.941: INFO: Cleaning up the exec 
pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-7139, will wait for the garbage collector to delete the pods May 25 23:57:33.078: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 5.72682ms May 25 23:57:33.878: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 800.262143ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:57:45.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7139" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:75.736 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":61,"skipped":1161,"failed":0} [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:57:45.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:57:45.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7671" for this suite. 
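Note on the affinity-nodeport-timeout Service exercised above: ClientIP session affinity pins a client to one backend pod until the configured idle timeout lapses, which is why the repeated curl calls all return the same pod name and a later call, after an idle wait, can land on a different pod. A minimal client-go sketch of such a Service follows; the selector label is hypothetical and the timeout value actually used by the e2e test is not shown in this log.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// affinityService builds a NodePort Service with ClientIP session affinity
// that expires after timeoutSeconds of inactivity.
func affinityService(timeoutSeconds int32) *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport-timeout"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"app": "affinity"}, // hypothetical label
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(8080),
			}},
			SessionAffinity: corev1.ServiceAffinityClientIP,
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeoutSeconds},
			},
		},
	}
}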
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":288,"completed":62,"skipped":1161,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:57:45.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info May 25 23:57:45.596: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config cluster-info' May 25 23:57:45.689: INFO: stderr: "" May 25 23:57:45.689: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:57:45.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-107" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":288,"completed":63,"skipped":1169,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:57:45.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name secret-emptykey-test-14548444-1d3b-4609-a52b-533f25587513 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:57:45.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6666" for this suite. 
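Note on the Secrets test above: it relies on API-server validation. A Secret whose data map contains an empty string as a key is rejected at create time, so no object is ever stored. A rough client-go sketch, with clientset construction omitted and names hypothetical:

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createEmptyKeySecret attempts to create a Secret with an empty data key;
// the API server is expected to return a validation error.
func createEmptyKeySecret(ctx context.Context, client kubernetes.Interface, ns string) error {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-test"},
		Data:       map[string][]byte{"": []byte("value")}, // invalid: empty key
	}
	_, err := client.CoreV1().Secrets(ns).Create(ctx, secret, metav1.CreateOptions{})
	return err // non-nil on a conformant cluster
}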
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":288,"completed":64,"skipped":1223,"failed":0} SSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:57:45.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 25 23:57:54.101: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 25 23:57:54.104: INFO: Pod pod-with-prestop-exec-hook still exists May 25 23:57:56.104: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 25 23:57:56.109: INFO: Pod pod-with-prestop-exec-hook still exists May 25 23:57:58.104: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 25 23:57:58.109: INFO: Pod pod-with-prestop-exec-hook still exists May 25 23:58:00.104: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 25 23:58:00.108: INFO: Pod pod-with-prestop-exec-hook still exists May 25 23:58:02.104: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 25 23:58:02.109: INFO: Pod pod-with-prestop-exec-hook still exists May 25 23:58:04.104: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 25 23:58:04.109: INFO: Pod pod-with-prestop-exec-hook still exists May 25 23:58:06.104: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 25 23:58:06.108: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:58:06.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-705" for this suite. 
• [SLOW TEST:20.316 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":288,"completed":65,"skipped":1227,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:58:06.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions May 25 23:58:06.235: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config api-versions' May 25 23:58:06.462: INFO: stderr: "" May 25 23:58:06.462: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:58:06.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4196" for this suite. 
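Note on the api-versions test above: kubectl api-versions is a thin wrapper over the discovery API, and the check simply asserts that the core group/version v1 appears in the list. Roughly equivalent client-go, with clientset construction omitted:

package sketch

import "k8s.io/client-go/kubernetes"

// hasV1 reports whether the core "v1" group/version is served,
// mirroring what `kubectl api-versions` prints.
func hasV1(client kubernetes.Interface) (bool, error) {
	groups, err := client.Discovery().ServerGroups()
	if err != nil {
		return false, err
	}
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			if v.GroupVersion == "v1" {
				return true, nil
			}
		}
	}
	return false, nil
}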
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":288,"completed":66,"skipped":1247,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:58:06.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:58:06.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6747" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":288,"completed":67,"skipped":1258,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:58:06.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 25 23:58:11.709: INFO: Successfully updated pod "labelsupdate756e1ec6-91a6-494c-b4b3-a0b341ac2994" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:58:13.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1093" for this suite. • [SLOW TEST:7.067 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":68,"skipped":1262,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:58:13.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-5c025ddf-0be7-400f-a2a2-b0f004665820 STEP: Creating a pod to test consume configMaps May 25 23:58:13.842: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d3c9a364-3f07-48c8-8876-fa46a3565064" in namespace "projected-8666" to be "Succeeded or Failed" May 25 23:58:13.909: INFO: Pod "pod-projected-configmaps-d3c9a364-3f07-48c8-8876-fa46a3565064": Phase="Pending", Reason="", readiness=false. Elapsed: 67.328353ms May 25 23:58:15.914: INFO: Pod "pod-projected-configmaps-d3c9a364-3f07-48c8-8876-fa46a3565064": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.071907983s May 25 23:58:17.918: INFO: Pod "pod-projected-configmaps-d3c9a364-3f07-48c8-8876-fa46a3565064": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07657051s STEP: Saw pod success May 25 23:58:17.919: INFO: Pod "pod-projected-configmaps-d3c9a364-3f07-48c8-8876-fa46a3565064" satisfied condition "Succeeded or Failed" May 25 23:58:17.921: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-d3c9a364-3f07-48c8-8876-fa46a3565064 container projected-configmap-volume-test: STEP: delete the pod May 25 23:58:17.955: INFO: Waiting for pod pod-projected-configmaps-d3c9a364-3f07-48c8-8876-fa46a3565064 to disappear May 25 23:58:17.973: INFO: Pod pod-projected-configmaps-d3c9a364-3f07-48c8-8876-fa46a3565064 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 25 23:58:17.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8666" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":69,"skipped":1263,"failed":0} SSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 25 23:58:17.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-97ef6dfa-58c3-44da-adae-a462e321cba6 in namespace container-probe-619 May 25 23:58:22.168: INFO: Started pod liveness-97ef6dfa-58c3-44da-adae-a462e321cba6 in namespace container-probe-619 STEP: checking the pod's current state and verifying that restartCount is present May 25 23:58:22.171: INFO: Initial restart count of pod liveness-97ef6dfa-58c3-44da-adae-a462e321cba6 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:02:22.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-619" for this suite. 
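Note on the probe test above: it runs for roughly four minutes to confirm restartCount stays at 0, because the container genuinely listens on 8080 and the tcp liveness probe keeps succeeding. A sketch of such a probe; the delay and period values are illustrative rather than taken from this log, and in client-go of this era the embedded handler field is corev1.Handler (later ProbeHandler).

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// tcpLiveness probes TCP port 8080; as long as the container accepts
// connections there, the kubelet never restarts it.
func tcpLiveness() *corev1.Probe {
	return &corev1.Probe{
		Handler: corev1.Handler{
			TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(8080)},
		},
		InitialDelaySeconds: 15, // illustrative
		PeriodSeconds:       10, // illustrative
		FailureThreshold:    3,
	}
}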
• [SLOW TEST:244.866 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":288,"completed":70,"skipped":1270,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:02:22.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 26 00:02:23.844: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 26 00:02:25.861: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048143, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048143, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048143, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048143, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 00:02:27.864: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048143, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048143, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048143, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048143, loc:(*time.Location)(0x7c342a0)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 00:02:30.905: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:02:31.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2886" for this suite. STEP: Destroying namespace "webhook-2886-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.352 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":288,"completed":71,"skipped":1296,"failed":0} S ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:02:31.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-7895/configmap-test-1518a36e-584c-4445-8693-3fcb558d1b3b STEP: Creating a pod to test consume configMaps May 26 00:02:31.304: INFO: Waiting up to 5m0s for pod "pod-configmaps-3ea81a21-c1d6-46f8-b856-c9fd7475cfba" in namespace "configmap-7895" to be "Succeeded or Failed" May 26 00:02:31.307: INFO: Pod "pod-configmaps-3ea81a21-c1d6-46f8-b856-c9fd7475cfba": Phase="Pending", Reason="", readiness=false. Elapsed: 3.221126ms May 26 00:02:33.311: INFO: Pod "pod-configmaps-3ea81a21-c1d6-46f8-b856-c9fd7475cfba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007330086s May 26 00:02:35.315: INFO: Pod "pod-configmaps-3ea81a21-c1d6-46f8-b856-c9fd7475cfba": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011044626s STEP: Saw pod success May 26 00:02:35.315: INFO: Pod "pod-configmaps-3ea81a21-c1d6-46f8-b856-c9fd7475cfba" satisfied condition "Succeeded or Failed" May 26 00:02:35.318: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-3ea81a21-c1d6-46f8-b856-c9fd7475cfba container env-test: STEP: delete the pod May 26 00:02:35.363: INFO: Waiting for pod pod-configmaps-3ea81a21-c1d6-46f8-b856-c9fd7475cfba to disappear May 26 00:02:35.379: INFO: Pod pod-configmaps-3ea81a21-c1d6-46f8-b856-c9fd7475cfba no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:02:35.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7895" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":288,"completed":72,"skipped":1297,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:02:35.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 26 00:02:35.486: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:02:36.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4982" for this suite. 
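Note on the custom resource defaulting test above: defaulting comes from `default` stanzas in the CRD's structural schema, which the API server applies both when serving requests and when reading objects back from storage. A sketch of a schema fragment with a default; the field name and default value are hypothetical.

package sketch

import apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"

// defaultedSchema declares spec.cronSpec with a server-side default, so
// objects created without the field come back (and are stored) with it set.
func defaultedSchema() *apiextv1.JSONSchemaProps {
	return &apiextv1.JSONSchemaProps{
		Type: "object",
		Properties: map[string]apiextv1.JSONSchemaProps{
			"spec": {
				Type: "object",
				Properties: map[string]apiextv1.JSONSchemaProps{
					"cronSpec": {
						Type:    "string",
						Default: &apiextv1.JSON{Raw: []byte(`"5 0 * * *"`)}, // hypothetical default
					},
				},
			},
		},
	}
}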
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":288,"completed":73,"skipped":1300,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:02:36.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-625db0c4-3811-49c9-84fa-9b377e2f43b3 STEP: Creating a pod to test consume secrets May 26 00:02:36.784: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1e5e87bb-4e9d-4915-917e-7e1a18eed1f1" in namespace "projected-6604" to be "Succeeded or Failed" May 26 00:02:36.800: INFO: Pod "pod-projected-secrets-1e5e87bb-4e9d-4915-917e-7e1a18eed1f1": Phase="Pending", Reason="", readiness=false. Elapsed: 15.895826ms May 26 00:02:38.918: INFO: Pod "pod-projected-secrets-1e5e87bb-4e9d-4915-917e-7e1a18eed1f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134548097s May 26 00:02:40.922: INFO: Pod "pod-projected-secrets-1e5e87bb-4e9d-4915-917e-7e1a18eed1f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.138676725s STEP: Saw pod success May 26 00:02:40.923: INFO: Pod "pod-projected-secrets-1e5e87bb-4e9d-4915-917e-7e1a18eed1f1" satisfied condition "Succeeded or Failed" May 26 00:02:40.926: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-1e5e87bb-4e9d-4915-917e-7e1a18eed1f1 container projected-secret-volume-test: STEP: delete the pod May 26 00:02:41.174: INFO: Waiting for pod pod-projected-secrets-1e5e87bb-4e9d-4915-917e-7e1a18eed1f1 to disappear May 26 00:02:41.243: INFO: Pod pod-projected-secrets-1e5e87bb-4e9d-4915-917e-7e1a18eed1f1 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:02:41.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6604" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":74,"skipped":1301,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:02:41.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:02:45.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3015" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":75,"skipped":1316,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:02:45.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 26 00:02:46.156: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 26 00:02:48.264: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048166, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048166, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048166, loc:(*time.Location)(0x7c342a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048166, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 00:02:51.292: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:02:51.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-418" for this suite. STEP: Destroying namespace "webhook-418-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.327 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":288,"completed":76,"skipped":1317,"failed":0} SS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:02:51.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 26 00:02:56.334: INFO: Successfully updated pod "adopt-release-9xr5s" STEP: Checking that the Job readopts the Pod May 26 00:02:56.334: INFO: Waiting up to 15m0s for pod "adopt-release-9xr5s" in namespace "job-9829" to be "adopted" May 26 00:02:56.346: INFO: Pod "adopt-release-9xr5s": 
Phase="Running", Reason="", readiness=true. Elapsed: 12.076113ms May 26 00:02:58.350: INFO: Pod "adopt-release-9xr5s": Phase="Running", Reason="", readiness=true. Elapsed: 2.016341702s May 26 00:02:58.351: INFO: Pod "adopt-release-9xr5s" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 26 00:02:58.858: INFO: Successfully updated pod "adopt-release-9xr5s" STEP: Checking that the Job releases the Pod May 26 00:02:58.858: INFO: Waiting up to 15m0s for pod "adopt-release-9xr5s" in namespace "job-9829" to be "released" May 26 00:02:58.935: INFO: Pod "adopt-release-9xr5s": Phase="Running", Reason="", readiness=true. Elapsed: 76.664433ms May 26 00:03:00.939: INFO: Pod "adopt-release-9xr5s": Phase="Running", Reason="", readiness=true. Elapsed: 2.081355428s May 26 00:03:00.939: INFO: Pod "adopt-release-9xr5s" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:03:00.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9829" for this suite. • [SLOW TEST:9.241 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":288,"completed":77,"skipped":1319,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:03:00.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 26 00:03:01.519: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"05814fa1-02a3-4dc0-a081-5107bff081a2", Controller:(*bool)(0xc0023f07f2), BlockOwnerDeletion:(*bool)(0xc0023f07f3)}} May 26 00:03:01.558: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"65849d34-e3be-4d71-be58-26ae18c28274", Controller:(*bool)(0xc0023f0ae6), BlockOwnerDeletion:(*bool)(0xc0023f0ae7)}} May 26 00:03:01.654: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"3d2d2c79-5016-4cb3-8a92-6e551f27eba7", Controller:(*bool)(0xc0023f0da6), BlockOwnerDeletion:(*bool)(0xc0023f0da7)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:03:06.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-954" for this suite. 
• [SLOW TEST:5.920 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":288,"completed":78,"skipped":1362,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:03:06.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 26 00:03:06.977: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-a6ef1f00-f588-481e-b45f-418b3cc90046" in namespace "security-context-test-326" to be "Succeeded or Failed" May 26 00:03:07.001: INFO: Pod "busybox-readonly-false-a6ef1f00-f588-481e-b45f-418b3cc90046": Phase="Pending", Reason="", readiness=false. Elapsed: 23.582646ms May 26 00:03:09.079: INFO: Pod "busybox-readonly-false-a6ef1f00-f588-481e-b45f-418b3cc90046": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101403776s May 26 00:03:11.083: INFO: Pod "busybox-readonly-false-a6ef1f00-f588-481e-b45f-418b3cc90046": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.105890847s May 26 00:03:11.083: INFO: Pod "busybox-readonly-false-a6ef1f00-f588-481e-b45f-418b3cc90046" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:03:11.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-326" for this suite. 
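Note on the security-context test above: it asserts that with readOnlyRootFilesystem=false the container can write to its own root filesystem, so the pod runs to Succeeded instead of failing on the write. A sketch of the container; the image and command are illustrative.

package sketch

import corev1 "k8s.io/api/core/v1"

// writableRootfsContainer runs a write against the root filesystem; with
// ReadOnlyRootFilesystem=false the write succeeds and the pod completes.
func writableRootfsContainer() corev1.Container {
	readOnly := false
	return corev1.Container{
		Name:    "busybox-readonly-false",
		Image:   "busybox", // illustrative
		Command: []string{"sh", "-c", "echo ok > /tmp/probe && cat /tmp/probe"},
		SecurityContext: &corev1.SecurityContext{
			ReadOnlyRootFilesystem: &readOnly,
		},
	}
}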
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":288,"completed":79,"skipped":1394,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:03:11.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 26 00:03:11.365: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1852d295-23d6-4a3c-b646-e2d024287339" in namespace "downward-api-421" to be "Succeeded or Failed" May 26 00:03:11.385: INFO: Pod "downwardapi-volume-1852d295-23d6-4a3c-b646-e2d024287339": Phase="Pending", Reason="", readiness=false. Elapsed: 19.703521ms May 26 00:03:13.389: INFO: Pod "downwardapi-volume-1852d295-23d6-4a3c-b646-e2d024287339": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024016053s May 26 00:03:15.393: INFO: Pod "downwardapi-volume-1852d295-23d6-4a3c-b646-e2d024287339": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027716638s STEP: Saw pod success May 26 00:03:15.393: INFO: Pod "downwardapi-volume-1852d295-23d6-4a3c-b646-e2d024287339" satisfied condition "Succeeded or Failed" May 26 00:03:15.396: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-1852d295-23d6-4a3c-b646-e2d024287339 container client-container: STEP: delete the pod May 26 00:03:15.428: INFO: Waiting for pod downwardapi-volume-1852d295-23d6-4a3c-b646-e2d024287339 to disappear May 26 00:03:15.479: INFO: Pod downwardapi-volume-1852d295-23d6-4a3c-b646-e2d024287339 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:03:15.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-421" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":80,"skipped":1422,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:03:15.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath May 26 00:03:19.585: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-1304 PodName:var-expansion-00f8db27-6d09-4cb1-a784-436ab6a1ffbf ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 00:03:19.585: INFO: >>> kubeConfig: /root/.kube/config I0526 00:03:19.622023 7 log.go:172] (0xc002fe31e0) (0xc002995d60) Create stream I0526 00:03:19.622057 7 log.go:172] (0xc002fe31e0) (0xc002995d60) Stream added, broadcasting: 1 I0526 00:03:19.624007 7 log.go:172] (0xc002fe31e0) Reply frame received for 1 I0526 00:03:19.624081 7 log.go:172] (0xc002fe31e0) (0xc001bafea0) Create stream I0526 00:03:19.624094 7 log.go:172] (0xc002fe31e0) (0xc001bafea0) Stream added, broadcasting: 3 I0526 00:03:19.625352 7 log.go:172] (0xc002fe31e0) Reply frame received for 3 I0526 00:03:19.625387 7 log.go:172] (0xc002fe31e0) (0xc002995e00) Create stream I0526 00:03:19.625399 7 log.go:172] (0xc002fe31e0) (0xc002995e00) Stream added, broadcasting: 5 I0526 00:03:19.626447 7 log.go:172] (0xc002fe31e0) Reply frame received for 5 I0526 00:03:19.716407 7 log.go:172] (0xc002fe31e0) Data frame received for 5 I0526 00:03:19.716459 7 log.go:172] (0xc002995e00) (5) Data frame handling I0526 00:03:19.716490 7 log.go:172] (0xc002fe31e0) Data frame received for 3 I0526 00:03:19.716505 7 log.go:172] (0xc001bafea0) (3) Data frame handling I0526 00:03:19.718095 7 log.go:172] (0xc002fe31e0) Data frame received for 1 I0526 00:03:19.718122 7 log.go:172] (0xc002995d60) (1) Data frame handling I0526 00:03:19.718136 7 log.go:172] (0xc002995d60) (1) Data frame sent I0526 00:03:19.718145 7 log.go:172] (0xc002fe31e0) (0xc002995d60) Stream removed, broadcasting: 1 I0526 00:03:19.718157 7 log.go:172] (0xc002fe31e0) Go away received I0526 00:03:19.718232 7 log.go:172] (0xc002fe31e0) (0xc002995d60) Stream removed, broadcasting: 1 I0526 00:03:19.718254 7 log.go:172] (0xc002fe31e0) (0xc001bafea0) Stream removed, broadcasting: 3 I0526 00:03:19.718267 7 log.go:172] (0xc002fe31e0) (0xc002995e00) Stream removed, broadcasting: 5 STEP: test for file in mounted path May 26 00:03:19.721: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-1304 PodName:var-expansion-00f8db27-6d09-4cb1-a784-436ab6a1ffbf ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 00:03:19.721: INFO: >>> kubeConfig: 
/root/.kube/config I0526 00:03:19.753363 7 log.go:172] (0xc002b8d080) (0xc0025a4000) Create stream I0526 00:03:19.753402 7 log.go:172] (0xc002b8d080) (0xc0025a4000) Stream added, broadcasting: 1 I0526 00:03:19.764210 7 log.go:172] (0xc002b8d080) Reply frame received for 1 I0526 00:03:19.764334 7 log.go:172] (0xc002b8d080) (0xc0025a4140) Create stream I0526 00:03:19.764381 7 log.go:172] (0xc002b8d080) (0xc0025a4140) Stream added, broadcasting: 3 I0526 00:03:19.765552 7 log.go:172] (0xc002b8d080) Reply frame received for 3 I0526 00:03:19.765598 7 log.go:172] (0xc002b8d080) (0xc002995ea0) Create stream I0526 00:03:19.765610 7 log.go:172] (0xc002b8d080) (0xc002995ea0) Stream added, broadcasting: 5 I0526 00:03:19.766983 7 log.go:172] (0xc002b8d080) Reply frame received for 5 I0526 00:03:19.821396 7 log.go:172] (0xc002b8d080) Data frame received for 3 I0526 00:03:19.821456 7 log.go:172] (0xc0025a4140) (3) Data frame handling I0526 00:03:19.821501 7 log.go:172] (0xc002b8d080) Data frame received for 5 I0526 00:03:19.821527 7 log.go:172] (0xc002995ea0) (5) Data frame handling I0526 00:03:19.822990 7 log.go:172] (0xc002b8d080) Data frame received for 1 I0526 00:03:19.823016 7 log.go:172] (0xc0025a4000) (1) Data frame handling I0526 00:03:19.823031 7 log.go:172] (0xc0025a4000) (1) Data frame sent I0526 00:03:19.823058 7 log.go:172] (0xc002b8d080) (0xc0025a4000) Stream removed, broadcasting: 1 I0526 00:03:19.823157 7 log.go:172] (0xc002b8d080) Go away received I0526 00:03:19.823255 7 log.go:172] (0xc002b8d080) (0xc0025a4000) Stream removed, broadcasting: 1 I0526 00:03:19.823292 7 log.go:172] (0xc002b8d080) (0xc0025a4140) Stream removed, broadcasting: 3 I0526 00:03:19.823317 7 log.go:172] (0xc002b8d080) (0xc002995ea0) Stream removed, broadcasting: 5 STEP: updating the annotation value May 26 00:03:20.335: INFO: Successfully updated pod "var-expansion-00f8db27-6d09-4cb1-a784-436ab6a1ffbf" STEP: waiting for annotated pod running STEP: deleting the pod gracefully May 26 00:03:20.361: INFO: Deleting pod "var-expansion-00f8db27-6d09-4cb1-a784-436ab6a1ffbf" in namespace "var-expansion-1304" May 26 00:03:20.366: INFO: Wait up to 5m0s for pod "var-expansion-00f8db27-6d09-4cb1-a784-436ab6a1ffbf" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:04:06.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1304" for this suite. 
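Note on the variable-expansion subpath test above: the exec steps touch a file at /volume_mount/mypath/foo/test.log and then verify it is visible at /subpath_mount/test.log, which is what you get when one volume is mounted twice, once whole and once through a subPath. A sketch of that double mount; the volume type and the exact subPath are assumptions inferred from the paths in the log.

package sketch

import corev1 "k8s.io/api/core/v1"

// doubleMount mounts one emptyDir both at its root and at a subPath, so a
// file written at /volume_mount/mypath/foo/test.log shows up at
// /subpath_mount/test.log.
func doubleMount() ([]corev1.Volume, []corev1.VolumeMount) {
	volumes := []corev1.Volume{{
		Name:         "workdir",
		VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
	}}
	mounts := []corev1.VolumeMount{
		{Name: "workdir", MountPath: "/volume_mount"},
		{Name: "workdir", MountPath: "/subpath_mount", SubPath: "mypath/foo"},
	}
	return volumes, mounts
}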
• [SLOW TEST:50.910 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":288,"completed":81,"skipped":1441,"failed":0} S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:04:06.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-7275 STEP: creating a selector STEP: Creating the service pods in kubernetes May 26 00:04:06.442: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 26 00:04:06.551: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 26 00:04:08.555: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 26 00:04:10.555: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 26 00:04:12.556: INFO: The status of Pod netserver-0 is Running (Ready = false) May 26 00:04:14.559: INFO: The status of Pod netserver-0 is Running (Ready = false) May 26 00:04:16.556: INFO: The status of Pod netserver-0 is Running (Ready = false) May 26 00:04:18.555: INFO: The status of Pod netserver-0 is Running (Ready = false) May 26 00:04:20.556: INFO: The status of Pod netserver-0 is Running (Ready = false) May 26 00:04:22.556: INFO: The status of Pod netserver-0 is Running (Ready = true) May 26 00:04:22.562: INFO: The status of Pod netserver-1 is Running (Ready = false) May 26 00:04:24.566: INFO: The status of Pod netserver-1 is Running (Ready = false) May 26 00:04:26.567: INFO: The status of Pod netserver-1 is Running (Ready = false) May 26 00:04:28.567: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 26 00:04:32.610: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.114:8080/dial?request=hostname&protocol=udp&host=10.244.1.113&port=8081&tries=1'] Namespace:pod-network-test-7275 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 00:04:32.610: INFO: >>> kubeConfig: /root/.kube/config I0526 00:04:32.651622 7 log.go:172] (0xc002b26000) (0xc002ae94a0) Create stream I0526 00:04:32.651650 7 log.go:172] (0xc002b26000) (0xc002ae94a0) Stream added, broadcasting: 1 I0526 00:04:32.653759 7 log.go:172] (0xc002b26000) Reply frame received for 1 I0526 00:04:32.653818 7 log.go:172] (0xc002b26000) (0xc0017c88c0) Create stream I0526 
00:04:32.653846 7 log.go:172] (0xc002b26000) (0xc0017c88c0) Stream added, broadcasting: 3 I0526 00:04:32.654777 7 log.go:172] (0xc002b26000) Reply frame received for 3 I0526 00:04:32.654814 7 log.go:172] (0xc002b26000) (0xc0011b81e0) Create stream I0526 00:04:32.654828 7 log.go:172] (0xc002b26000) (0xc0011b81e0) Stream added, broadcasting: 5 I0526 00:04:32.655657 7 log.go:172] (0xc002b26000) Reply frame received for 5 I0526 00:04:32.740685 7 log.go:172] (0xc002b26000) Data frame received for 3 I0526 00:04:32.740792 7 log.go:172] (0xc0017c88c0) (3) Data frame handling I0526 00:04:32.740842 7 log.go:172] (0xc0017c88c0) (3) Data frame sent I0526 00:04:32.741281 7 log.go:172] (0xc002b26000) Data frame received for 3 I0526 00:04:32.741318 7 log.go:172] (0xc0017c88c0) (3) Data frame handling I0526 00:04:32.741351 7 log.go:172] (0xc002b26000) Data frame received for 5 I0526 00:04:32.741367 7 log.go:172] (0xc0011b81e0) (5) Data frame handling I0526 00:04:32.743019 7 log.go:172] (0xc002b26000) Data frame received for 1 I0526 00:04:32.743095 7 log.go:172] (0xc002ae94a0) (1) Data frame handling I0526 00:04:32.743157 7 log.go:172] (0xc002ae94a0) (1) Data frame sent I0526 00:04:32.743194 7 log.go:172] (0xc002b26000) (0xc002ae94a0) Stream removed, broadcasting: 1 I0526 00:04:32.743218 7 log.go:172] (0xc002b26000) Go away received I0526 00:04:32.743348 7 log.go:172] (0xc002b26000) (0xc002ae94a0) Stream removed, broadcasting: 1 I0526 00:04:32.743386 7 log.go:172] (0xc002b26000) (0xc0017c88c0) Stream removed, broadcasting: 3 I0526 00:04:32.743411 7 log.go:172] (0xc002b26000) (0xc0011b81e0) Stream removed, broadcasting: 5 May 26 00:04:32.743: INFO: Waiting for responses: map[] May 26 00:04:32.747: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.114:8080/dial?request=hostname&protocol=udp&host=10.244.2.112&port=8081&tries=1'] Namespace:pod-network-test-7275 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 00:04:32.747: INFO: >>> kubeConfig: /root/.kube/config I0526 00:04:32.777685 7 log.go:172] (0xc002fe2420) (0xc002ae97c0) Create stream I0526 00:04:32.777712 7 log.go:172] (0xc002fe2420) (0xc002ae97c0) Stream added, broadcasting: 1 I0526 00:04:32.779309 7 log.go:172] (0xc002fe2420) Reply frame received for 1 I0526 00:04:32.779350 7 log.go:172] (0xc002fe2420) (0xc0017c8aa0) Create stream I0526 00:04:32.779366 7 log.go:172] (0xc002fe2420) (0xc0017c8aa0) Stream added, broadcasting: 3 I0526 00:04:32.780200 7 log.go:172] (0xc002fe2420) Reply frame received for 3 I0526 00:04:32.780240 7 log.go:172] (0xc002fe2420) (0xc002586140) Create stream I0526 00:04:32.780257 7 log.go:172] (0xc002fe2420) (0xc002586140) Stream added, broadcasting: 5 I0526 00:04:32.781385 7 log.go:172] (0xc002fe2420) Reply frame received for 5 I0526 00:04:32.853417 7 log.go:172] (0xc002fe2420) Data frame received for 3 I0526 00:04:32.853453 7 log.go:172] (0xc0017c8aa0) (3) Data frame handling I0526 00:04:32.853467 7 log.go:172] (0xc0017c8aa0) (3) Data frame sent I0526 00:04:32.854030 7 log.go:172] (0xc002fe2420) Data frame received for 5 I0526 00:04:32.854054 7 log.go:172] (0xc002586140) (5) Data frame handling I0526 00:04:32.854247 7 log.go:172] (0xc002fe2420) Data frame received for 3 I0526 00:04:32.854283 7 log.go:172] (0xc0017c8aa0) (3) Data frame handling I0526 00:04:32.855991 7 log.go:172] (0xc002fe2420) Data frame received for 1 I0526 00:04:32.856010 7 log.go:172] (0xc002ae97c0) (1) Data frame handling I0526 00:04:32.856030 7 
log.go:172] (0xc002ae97c0) (1) Data frame sent I0526 00:04:32.856059 7 log.go:172] (0xc002fe2420) (0xc002ae97c0) Stream removed, broadcasting: 1 I0526 00:04:32.856078 7 log.go:172] (0xc002fe2420) Go away received I0526 00:04:32.856157 7 log.go:172] (0xc002fe2420) (0xc002ae97c0) Stream removed, broadcasting: 1 I0526 00:04:32.856184 7 log.go:172] (0xc002fe2420) (0xc0017c8aa0) Stream removed, broadcasting: 3 I0526 00:04:32.856201 7 log.go:172] (0xc002fe2420) (0xc002586140) Stream removed, broadcasting: 5 May 26 00:04:32.856: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:04:32.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7275" for this suite. • [SLOW TEST:26.469 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":288,"completed":82,"skipped":1442,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:04:32.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-z5t65 in namespace proxy-9543 I0526 00:04:33.103360 7 runners.go:190] Created replication controller with name: proxy-service-z5t65, namespace: proxy-9543, replica count: 1 I0526 00:04:34.153839 7 runners.go:190] proxy-service-z5t65 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0526 00:04:35.154108 7 runners.go:190] proxy-service-z5t65 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0526 00:04:36.154293 7 runners.go:190] proxy-service-z5t65 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0526 00:04:37.154562 7 runners.go:190] proxy-service-z5t65 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0526 00:04:38.154843 7 runners.go:190] proxy-service-z5t65 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0526 00:04:39.155091 7 runners.go:190] proxy-service-z5t65 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 
terminating, 0 unknown, 1 runningButNotReady I0526 00:04:40.155378 7 runners.go:190] proxy-service-z5t65 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0526 00:04:41.155577 7 runners.go:190] proxy-service-z5t65 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0526 00:04:42.155815 7 runners.go:190] proxy-service-z5t65 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0526 00:04:43.156057 7 runners.go:190] proxy-service-z5t65 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0526 00:04:44.156288 7 runners.go:190] proxy-service-z5t65 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 26 00:04:44.160: INFO: setup took 11.203724329s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 26 00:04:44.167: INFO: (0) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:162/proxy/: bar (200; 7.18165ms) May 26 00:04:44.167: INFO: (0) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:160/proxy/: foo (200; 7.366764ms) May 26 00:04:44.167: INFO: (0) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:1080/proxy/: ... (200; 7.290954ms) May 26 00:04:44.167: INFO: (0) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr/proxy/: test (200; 7.270836ms) May 26 00:04:44.167: INFO: (0) /api/v1/namespaces/proxy-9543/services/http:proxy-service-z5t65:portname1/proxy/: foo (200; 7.277675ms) May 26 00:04:44.167: INFO: (0) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:1080/proxy/: test<... (200; 7.379143ms) May 26 00:04:44.167: INFO: (0) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:162/proxy/: bar (200; 7.507191ms) May 26 00:04:44.168: INFO: (0) /api/v1/namespaces/proxy-9543/services/proxy-service-z5t65:portname2/proxy/: bar (200; 8.055044ms) May 26 00:04:44.168: INFO: (0) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:160/proxy/: foo (200; 8.251567ms) May 26 00:04:44.178: INFO: (0) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:443/proxy/: ... (200; 4.070095ms) May 26 00:04:44.185: INFO: (1) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:160/proxy/: foo (200; 4.697467ms) May 26 00:04:44.186: INFO: (1) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:162/proxy/: bar (200; 5.769104ms) May 26 00:04:44.186: INFO: (1) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr/proxy/: test (200; 6.083093ms) May 26 00:04:44.186: INFO: (1) /api/v1/namespaces/proxy-9543/services/proxy-service-z5t65:portname2/proxy/: bar (200; 6.021709ms) May 26 00:04:44.186: INFO: (1) /api/v1/namespaces/proxy-9543/services/proxy-service-z5t65:portname1/proxy/: foo (200; 6.003759ms) May 26 00:04:44.186: INFO: (1) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:460/proxy/: tls baz (200; 6.00728ms) May 26 00:04:44.187: INFO: (1) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:160/proxy/: foo (200; 6.277205ms) May 26 00:04:44.187: INFO: (1) /api/v1/namespaces/proxy-9543/services/https:proxy-service-z5t65:tlsportname1/proxy/: tls baz (200; 6.262026ms) May 26 00:04:44.187: INFO: (1) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:443/proxy/: test<... 
(200; 6.487995ms) May 26 00:04:44.187: INFO: (1) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:462/proxy/: tls qux (200; 6.383957ms) May 26 00:04:44.187: INFO: (1) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:162/proxy/: bar (200; 6.431944ms) May 26 00:04:44.187: INFO: (1) /api/v1/namespaces/proxy-9543/services/http:proxy-service-z5t65:portname1/proxy/: foo (200; 6.369188ms) May 26 00:04:44.187: INFO: (1) /api/v1/namespaces/proxy-9543/services/https:proxy-service-z5t65:tlsportname2/proxy/: tls qux (200; 6.623905ms) May 26 00:04:44.191: INFO: (2) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:162/proxy/: bar (200; 3.724711ms) May 26 00:04:44.191: INFO: (2) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr/proxy/: test (200; 4.312905ms) May 26 00:04:44.192: INFO: (2) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:462/proxy/: tls qux (200; 4.834305ms) May 26 00:04:44.192: INFO: (2) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:160/proxy/: foo (200; 4.939975ms) May 26 00:04:44.192: INFO: (2) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:1080/proxy/: test<... (200; 4.912233ms) May 26 00:04:44.192: INFO: (2) /api/v1/namespaces/proxy-9543/services/proxy-service-z5t65:portname1/proxy/: foo (200; 5.091229ms) May 26 00:04:44.192: INFO: (2) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:162/proxy/: bar (200; 5.343001ms) May 26 00:04:44.192: INFO: (2) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:443/proxy/: ... (200; 5.313896ms) May 26 00:04:44.193: INFO: (2) /api/v1/namespaces/proxy-9543/services/https:proxy-service-z5t65:tlsportname2/proxy/: tls qux (200; 5.471807ms) May 26 00:04:44.193: INFO: (2) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:460/proxy/: tls baz (200; 5.5504ms) May 26 00:04:44.193: INFO: (2) /api/v1/namespaces/proxy-9543/services/http:proxy-service-z5t65:portname1/proxy/: foo (200; 5.51239ms) May 26 00:04:44.193: INFO: (2) /api/v1/namespaces/proxy-9543/services/https:proxy-service-z5t65:tlsportname1/proxy/: tls baz (200; 5.502163ms) May 26 00:04:44.193: INFO: (2) /api/v1/namespaces/proxy-9543/services/http:proxy-service-z5t65:portname2/proxy/: bar (200; 5.823819ms) May 26 00:04:44.193: INFO: (2) /api/v1/namespaces/proxy-9543/services/proxy-service-z5t65:portname2/proxy/: bar (200; 5.785297ms) May 26 00:04:44.193: INFO: (2) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:160/proxy/: foo (200; 6.025623ms) May 26 00:04:44.195: INFO: (3) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:160/proxy/: foo (200; 1.861223ms) May 26 00:04:44.197: INFO: (3) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:160/proxy/: foo (200; 4.199355ms) May 26 00:04:44.197: INFO: (3) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:1080/proxy/: ... 
(200; 4.352661ms) May 26 00:04:44.198: INFO: (3) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:460/proxy/: tls baz (200; 4.48255ms) May 26 00:04:44.198: INFO: (3) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:462/proxy/: tls qux (200; 4.676592ms) May 26 00:04:44.198: INFO: (3) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr/proxy/: test (200; 4.743559ms) May 26 00:04:44.198: INFO: (3) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:162/proxy/: bar (200; 4.748612ms) May 26 00:04:44.198: INFO: (3) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:1080/proxy/: test<... (200; 4.706656ms) May 26 00:04:44.198: INFO: (3) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:162/proxy/: bar (200; 4.794888ms) May 26 00:04:44.198: INFO: (3) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:443/proxy/: ... (200; 5.175807ms) May 26 00:04:44.205: INFO: (4) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr/proxy/: test (200; 5.35319ms) May 26 00:04:44.205: INFO: (4) /api/v1/namespaces/proxy-9543/services/http:proxy-service-z5t65:portname2/proxy/: bar (200; 5.202255ms) May 26 00:04:44.205: INFO: (4) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:443/proxy/: test<... (200; 5.421357ms) May 26 00:04:44.205: INFO: (4) /api/v1/namespaces/proxy-9543/services/http:proxy-service-z5t65:portname1/proxy/: foo (200; 5.438785ms) May 26 00:04:44.205: INFO: (4) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:160/proxy/: foo (200; 5.449373ms) May 26 00:04:44.208: INFO: (5) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:160/proxy/: foo (200; 3.30986ms) May 26 00:04:44.208: INFO: (5) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:460/proxy/: tls baz (200; 3.388618ms) May 26 00:04:44.208: INFO: (5) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:443/proxy/: ... (200; 3.434099ms) May 26 00:04:44.208: INFO: (5) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr/proxy/: test (200; 3.585075ms) May 26 00:04:44.208: INFO: (5) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:1080/proxy/: test<... 
(200; 3.53976ms) May 26 00:04:44.208: INFO: (5) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:162/proxy/: bar (200; 3.549447ms) May 26 00:04:44.208: INFO: (5) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:162/proxy/: bar (200; 3.541963ms) May 26 00:04:44.209: INFO: (5) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:160/proxy/: foo (200; 3.507933ms) May 26 00:04:44.208: INFO: (5) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:462/proxy/: tls qux (200; 3.515919ms) May 26 00:04:44.209: INFO: (5) /api/v1/namespaces/proxy-9543/services/https:proxy-service-z5t65:tlsportname2/proxy/: tls qux (200; 4.465518ms) May 26 00:04:44.209: INFO: (5) /api/v1/namespaces/proxy-9543/services/http:proxy-service-z5t65:portname2/proxy/: bar (200; 4.506808ms) May 26 00:04:44.210: INFO: (5) /api/v1/namespaces/proxy-9543/services/proxy-service-z5t65:portname1/proxy/: foo (200; 4.771883ms) May 26 00:04:44.210: INFO: (5) /api/v1/namespaces/proxy-9543/services/proxy-service-z5t65:portname2/proxy/: bar (200; 4.74504ms) May 26 00:04:44.210: INFO: (5) /api/v1/namespaces/proxy-9543/services/https:proxy-service-z5t65:tlsportname1/proxy/: tls baz (200; 4.954407ms) May 26 00:04:44.210: INFO: (5) /api/v1/namespaces/proxy-9543/services/http:proxy-service-z5t65:portname1/proxy/: foo (200; 5.267612ms) May 26 00:04:44.214: INFO: (6) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:443/proxy/: ... (200; 4.381624ms) May 26 00:04:44.215: INFO: (6) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:1080/proxy/: test<... (200; 4.468209ms) May 26 00:04:44.215: INFO: (6) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr/proxy/: test (200; 4.489463ms) May 26 00:04:44.215: INFO: (6) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:460/proxy/: tls baz (200; 4.455583ms) May 26 00:04:44.215: INFO: (6) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:462/proxy/: tls qux (200; 4.522305ms) May 26 00:04:44.215: INFO: (6) /api/v1/namespaces/proxy-9543/services/http:proxy-service-z5t65:portname1/proxy/: foo (200; 4.503274ms) May 26 00:04:44.215: INFO: (6) /api/v1/namespaces/proxy-9543/services/proxy-service-z5t65:portname1/proxy/: foo (200; 4.630057ms) May 26 00:04:44.215: INFO: (6) /api/v1/namespaces/proxy-9543/services/proxy-service-z5t65:portname2/proxy/: bar (200; 4.948663ms) May 26 00:04:44.215: INFO: (6) /api/v1/namespaces/proxy-9543/services/http:proxy-service-z5t65:portname2/proxy/: bar (200; 4.92532ms) May 26 00:04:44.215: INFO: (6) /api/v1/namespaces/proxy-9543/services/https:proxy-service-z5t65:tlsportname2/proxy/: tls qux (200; 5.070153ms) May 26 00:04:44.218: INFO: (7) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:162/proxy/: bar (200; 2.978065ms) May 26 00:04:44.219: INFO: (7) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:162/proxy/: bar (200; 2.99765ms) May 26 00:04:44.220: INFO: (7) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr/proxy/: test (200; 3.927511ms) May 26 00:04:44.220: INFO: (7) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:462/proxy/: tls qux (200; 4.626089ms) May 26 00:04:44.221: INFO: (7) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:1080/proxy/: ... 
(200; 4.978188ms) May 26 00:04:44.221: INFO: (7) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:160/proxy/: foo (200; 4.90595ms) May 26 00:04:44.221: INFO: (7) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:1080/proxy/: test<... (200; 4.906302ms) May 26 00:04:44.221: INFO: (7) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:460/proxy/: tls baz (200; 4.971308ms) May 26 00:04:44.221: INFO: (7) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:443/proxy/: test<... (200; 8.095575ms) May 26 00:04:44.230: INFO: (8) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:1080/proxy/: ... (200; 8.133491ms) May 26 00:04:44.230: INFO: (8) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:160/proxy/: foo (200; 8.251796ms) May 26 00:04:44.230: INFO: (8) /api/v1/namespaces/proxy-9543/services/http:proxy-service-z5t65:portname2/proxy/: bar (200; 8.353732ms) May 26 00:04:44.230: INFO: (8) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:162/proxy/: bar (200; 8.286419ms) May 26 00:04:44.231: INFO: (8) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:162/proxy/: bar (200; 8.574985ms) May 26 00:04:44.232: INFO: (8) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:462/proxy/: tls qux (200; 10.251722ms) May 26 00:04:44.232: INFO: (8) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr/proxy/: test (200; 10.254515ms) May 26 00:04:44.232: INFO: (8) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:160/proxy/: foo (200; 10.204872ms) May 26 00:04:44.233: INFO: (8) /api/v1/namespaces/proxy-9543/services/https:proxy-service-z5t65:tlsportname2/proxy/: tls qux (200; 10.455558ms) May 26 00:04:44.233: INFO: (8) /api/v1/namespaces/proxy-9543/services/proxy-service-z5t65:portname2/proxy/: bar (200; 10.826696ms) May 26 00:04:44.233: INFO: (8) /api/v1/namespaces/proxy-9543/services/https:proxy-service-z5t65:tlsportname1/proxy/: tls baz (200; 10.838137ms) May 26 00:04:44.233: INFO: (8) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:460/proxy/: tls baz (200; 10.838285ms) May 26 00:04:44.233: INFO: (8) /api/v1/namespaces/proxy-9543/services/proxy-service-z5t65:portname1/proxy/: foo (200; 10.886709ms) May 26 00:04:44.239: INFO: (9) /api/v1/namespaces/proxy-9543/services/http:proxy-service-z5t65:portname1/proxy/: foo (200; 5.411261ms) May 26 00:04:44.239: INFO: (9) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr/proxy/: test (200; 5.513141ms) May 26 00:04:44.239: INFO: (9) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:162/proxy/: bar (200; 5.522541ms) May 26 00:04:44.239: INFO: (9) /api/v1/namespaces/proxy-9543/services/proxy-service-z5t65:portname2/proxy/: bar (200; 5.621529ms) May 26 00:04:44.239: INFO: (9) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:1080/proxy/: test<... 
(200; 5.622441ms) May 26 00:04:44.239: INFO: (9) /api/v1/namespaces/proxy-9543/services/https:proxy-service-z5t65:tlsportname1/proxy/: tls baz (200; 5.616653ms) May 26 00:04:44.239: INFO: (9) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:160/proxy/: foo (200; 5.578237ms) May 26 00:04:44.239: INFO: (9) /api/v1/namespaces/proxy-9543/services/http:proxy-service-z5t65:portname2/proxy/: bar (200; 5.619851ms) May 26 00:04:44.239: INFO: (9) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:462/proxy/: tls qux (200; 5.998569ms) May 26 00:04:44.239: INFO: (9) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:162/proxy/: bar (200; 5.92161ms) May 26 00:04:44.239: INFO: (9) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:443/proxy/: ... (200; 6.205804ms) May 26 00:04:44.239: INFO: (9) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:460/proxy/: tls baz (200; 6.130516ms) May 26 00:04:44.239: INFO: (9) /api/v1/namespaces/proxy-9543/services/proxy-service-z5t65:portname1/proxy/: foo (200; 6.227597ms) May 26 00:04:44.239: INFO: (9) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:160/proxy/: foo (200; 6.195419ms) May 26 00:04:44.243: INFO: (10) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:160/proxy/: foo (200; 3.175421ms) May 26 00:04:44.243: INFO: (10) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr/proxy/: test (200; 3.604108ms) May 26 00:04:44.243: INFO: (10) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:160/proxy/: foo (200; 3.652082ms) May 26 00:04:44.243: INFO: (10) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:1080/proxy/: test<... (200; 3.921308ms) May 26 00:04:44.243: INFO: (10) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:1080/proxy/: ... (200; 3.88687ms) May 26 00:04:44.243: INFO: (10) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:443/proxy/: test<... (200; 4.470949ms) May 26 00:04:44.251: INFO: (11) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr/proxy/: test (200; 4.523096ms) May 26 00:04:44.251: INFO: (11) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:160/proxy/: foo (200; 4.4769ms) May 26 00:04:44.251: INFO: (11) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:462/proxy/: tls qux (200; 4.505073ms) May 26 00:04:44.251: INFO: (11) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:160/proxy/: foo (200; 4.494256ms) May 26 00:04:44.251: INFO: (11) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:460/proxy/: tls baz (200; 4.484022ms) May 26 00:04:44.251: INFO: (11) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:162/proxy/: bar (200; 4.526842ms) May 26 00:04:44.251: INFO: (11) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:443/proxy/: ... 
(200; 4.680943ms) May 26 00:04:44.252: INFO: (11) /api/v1/namespaces/proxy-9543/services/http:proxy-service-z5t65:portname1/proxy/: foo (200; 5.368637ms) May 26 00:04:44.252: INFO: (11) /api/v1/namespaces/proxy-9543/services/http:proxy-service-z5t65:portname2/proxy/: bar (200; 5.329945ms) May 26 00:04:44.252: INFO: (11) /api/v1/namespaces/proxy-9543/services/proxy-service-z5t65:portname1/proxy/: foo (200; 5.376264ms) May 26 00:04:44.252: INFO: (11) /api/v1/namespaces/proxy-9543/services/proxy-service-z5t65:portname2/proxy/: bar (200; 5.604248ms) May 26 00:04:44.253: INFO: (11) /api/v1/namespaces/proxy-9543/services/https:proxy-service-z5t65:tlsportname1/proxy/: tls baz (200; 5.623152ms) May 26 00:04:44.253: INFO: (11) /api/v1/namespaces/proxy-9543/services/https:proxy-service-z5t65:tlsportname2/proxy/: tls qux (200; 5.644848ms) May 26 00:04:44.257: INFO: (12) /api/v1/namespaces/proxy-9543/services/proxy-service-z5t65:portname2/proxy/: bar (200; 4.501373ms) May 26 00:04:44.257: INFO: (12) /api/v1/namespaces/proxy-9543/services/https:proxy-service-z5t65:tlsportname2/proxy/: tls qux (200; 4.369839ms) May 26 00:04:44.257: INFO: (12) /api/v1/namespaces/proxy-9543/services/proxy-service-z5t65:portname1/proxy/: foo (200; 4.393994ms) May 26 00:04:44.257: INFO: (12) /api/v1/namespaces/proxy-9543/services/https:proxy-service-z5t65:tlsportname1/proxy/: tls baz (200; 4.478597ms) May 26 00:04:44.257: INFO: (12) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:160/proxy/: foo (200; 4.588437ms) May 26 00:04:44.257: INFO: (12) /api/v1/namespaces/proxy-9543/services/http:proxy-service-z5t65:portname2/proxy/: bar (200; 4.667975ms) May 26 00:04:44.258: INFO: (12) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:1080/proxy/: ... (200; 4.75465ms) May 26 00:04:44.258: INFO: (12) /api/v1/namespaces/proxy-9543/services/http:proxy-service-z5t65:portname1/proxy/: foo (200; 5.180358ms) May 26 00:04:44.258: INFO: (12) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:1080/proxy/: test<... (200; 5.011132ms) May 26 00:04:44.258: INFO: (12) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:162/proxy/: bar (200; 5.507136ms) May 26 00:04:44.258: INFO: (12) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:162/proxy/: bar (200; 5.199415ms) May 26 00:04:44.258: INFO: (12) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:160/proxy/: foo (200; 5.058736ms) May 26 00:04:44.258: INFO: (12) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr/proxy/: test (200; 5.397418ms) May 26 00:04:44.258: INFO: (12) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:462/proxy/: tls qux (200; 5.199371ms) May 26 00:04:44.258: INFO: (12) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:443/proxy/: ... (200; 4.964238ms) May 26 00:04:44.264: INFO: (13) /api/v1/namespaces/proxy-9543/services/http:proxy-service-z5t65:portname2/proxy/: bar (200; 4.855571ms) May 26 00:04:44.264: INFO: (13) /api/v1/namespaces/proxy-9543/services/proxy-service-z5t65:portname2/proxy/: bar (200; 4.665823ms) May 26 00:04:44.264: INFO: (13) /api/v1/namespaces/proxy-9543/services/https:proxy-service-z5t65:tlsportname1/proxy/: tls baz (200; 4.615315ms) May 26 00:04:44.264: INFO: (13) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:1080/proxy/: test<... 
(200; 4.564734ms) May 26 00:04:44.264: INFO: (13) /api/v1/namespaces/proxy-9543/services/https:proxy-service-z5t65:tlsportname2/proxy/: tls qux (200; 5.354842ms) May 26 00:04:44.264: INFO: (13) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr/proxy/: test (200; 5.324395ms) May 26 00:04:44.264: INFO: (13) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:462/proxy/: tls qux (200; 4.986623ms) May 26 00:04:44.264: INFO: (13) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:160/proxy/: foo (200; 5.08413ms) May 26 00:04:44.264: INFO: (13) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:162/proxy/: bar (200; 5.682609ms) May 26 00:04:44.269: INFO: (14) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:160/proxy/: foo (200; 4.730221ms) May 26 00:04:44.269: INFO: (14) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:462/proxy/: tls qux (200; 4.919603ms) May 26 00:04:44.270: INFO: (14) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:1080/proxy/: ... (200; 5.008202ms) May 26 00:04:44.270: INFO: (14) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr/proxy/: test (200; 5.063745ms) May 26 00:04:44.270: INFO: (14) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:162/proxy/: bar (200; 5.160889ms) May 26 00:04:44.270: INFO: (14) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:160/proxy/: foo (200; 5.471265ms) May 26 00:04:44.270: INFO: (14) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:1080/proxy/: test<... (200; 5.359567ms) May 26 00:04:44.270: INFO: (14) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:162/proxy/: bar (200; 5.433556ms) May 26 00:04:44.270: INFO: (14) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:443/proxy/: test<... (200; 4.753038ms) May 26 00:04:44.276: INFO: (15) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:460/proxy/: tls baz (200; 4.897568ms) May 26 00:04:44.276: INFO: (15) /api/v1/namespaces/proxy-9543/services/https:proxy-service-z5t65:tlsportname1/proxy/: tls baz (200; 5.067906ms) May 26 00:04:44.276: INFO: (15) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:1080/proxy/: ... (200; 5.017316ms) May 26 00:04:44.276: INFO: (15) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr/proxy/: test (200; 5.082738ms) May 26 00:04:44.276: INFO: (15) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:162/proxy/: bar (200; 5.031577ms) May 26 00:04:44.276: INFO: (15) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:160/proxy/: foo (200; 5.027565ms) May 26 00:04:44.276: INFO: (15) /api/v1/namespaces/proxy-9543/services/proxy-service-z5t65:portname1/proxy/: foo (200; 5.110759ms) May 26 00:04:44.276: INFO: (15) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:162/proxy/: bar (200; 5.158589ms) May 26 00:04:44.276: INFO: (15) /api/v1/namespaces/proxy-9543/services/proxy-service-z5t65:portname2/proxy/: bar (200; 5.131616ms) May 26 00:04:44.279: INFO: (16) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:443/proxy/: test<... 
(200; 3.801179ms) May 26 00:04:44.280: INFO: (16) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:462/proxy/: tls qux (200; 3.726682ms) May 26 00:04:44.280: INFO: (16) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:162/proxy/: bar (200; 4.017401ms) May 26 00:04:44.281: INFO: (16) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr/proxy/: test (200; 4.075702ms) May 26 00:04:44.281: INFO: (16) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:160/proxy/: foo (200; 4.074912ms) May 26 00:04:44.281: INFO: (16) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:460/proxy/: tls baz (200; 4.779642ms) May 26 00:04:44.285: INFO: (16) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:1080/proxy/: ... (200; 8.383668ms) May 26 00:04:44.285: INFO: (16) /api/v1/namespaces/proxy-9543/services/proxy-service-z5t65:portname2/proxy/: bar (200; 8.495027ms) May 26 00:04:44.285: INFO: (16) /api/v1/namespaces/proxy-9543/services/http:proxy-service-z5t65:portname2/proxy/: bar (200; 8.447581ms) May 26 00:04:44.285: INFO: (16) /api/v1/namespaces/proxy-9543/services/http:proxy-service-z5t65:portname1/proxy/: foo (200; 8.49771ms) May 26 00:04:44.285: INFO: (16) /api/v1/namespaces/proxy-9543/services/https:proxy-service-z5t65:tlsportname1/proxy/: tls baz (200; 8.50999ms) May 26 00:04:44.285: INFO: (16) /api/v1/namespaces/proxy-9543/services/https:proxy-service-z5t65:tlsportname2/proxy/: tls qux (200; 8.509435ms) May 26 00:04:44.285: INFO: (16) /api/v1/namespaces/proxy-9543/services/proxy-service-z5t65:portname1/proxy/: foo (200; 8.556973ms) May 26 00:04:44.288: INFO: (17) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:1080/proxy/: test<... (200; 3.2178ms) May 26 00:04:44.288: INFO: (17) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:162/proxy/: bar (200; 3.264379ms) May 26 00:04:44.289: INFO: (17) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:160/proxy/: foo (200; 3.479866ms) May 26 00:04:44.289: INFO: (17) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:162/proxy/: bar (200; 3.448652ms) May 26 00:04:44.289: INFO: (17) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:160/proxy/: foo (200; 3.698412ms) May 26 00:04:44.289: INFO: (17) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:443/proxy/: ... 
(200; 3.721211ms) May 26 00:04:44.289: INFO: (17) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr/proxy/: test (200; 3.700173ms) May 26 00:04:44.289: INFO: (17) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:460/proxy/: tls baz (200; 3.734313ms) May 26 00:04:44.289: INFO: (17) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:462/proxy/: tls qux (200; 3.929211ms) May 26 00:04:44.290: INFO: (17) /api/v1/namespaces/proxy-9543/services/proxy-service-z5t65:portname2/proxy/: bar (200; 4.72365ms) May 26 00:04:44.290: INFO: (17) /api/v1/namespaces/proxy-9543/services/http:proxy-service-z5t65:portname1/proxy/: foo (200; 4.698637ms) May 26 00:04:44.290: INFO: (17) /api/v1/namespaces/proxy-9543/services/proxy-service-z5t65:portname1/proxy/: foo (200; 4.690984ms) May 26 00:04:44.290: INFO: (17) /api/v1/namespaces/proxy-9543/services/http:proxy-service-z5t65:portname2/proxy/: bar (200; 4.797163ms) May 26 00:04:44.290: INFO: (17) /api/v1/namespaces/proxy-9543/services/https:proxy-service-z5t65:tlsportname2/proxy/: tls qux (200; 4.809274ms) May 26 00:04:44.290: INFO: (17) /api/v1/namespaces/proxy-9543/services/https:proxy-service-z5t65:tlsportname1/proxy/: tls baz (200; 4.890606ms) May 26 00:04:44.293: INFO: (18) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:162/proxy/: bar (200; 2.573613ms) May 26 00:04:44.293: INFO: (18) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:1080/proxy/: test<... (200; 2.754703ms) May 26 00:04:44.293: INFO: (18) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:160/proxy/: foo (200; 3.137146ms) May 26 00:04:44.293: INFO: (18) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:1080/proxy/: ... (200; 3.120867ms) May 26 00:04:44.293: INFO: (18) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:160/proxy/: foo (200; 3.194267ms) May 26 00:04:44.293: INFO: (18) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:460/proxy/: tls baz (200; 3.107184ms) May 26 00:04:44.293: INFO: (18) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:462/proxy/: tls qux (200; 3.177338ms) May 26 00:04:44.293: INFO: (18) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:443/proxy/: test (200; 3.892508ms) May 26 00:04:44.294: INFO: (18) /api/v1/namespaces/proxy-9543/services/proxy-service-z5t65:portname2/proxy/: bar (200; 4.036074ms) May 26 00:04:44.295: INFO: (18) /api/v1/namespaces/proxy-9543/services/http:proxy-service-z5t65:portname1/proxy/: foo (200; 4.788316ms) May 26 00:04:44.295: INFO: (18) /api/v1/namespaces/proxy-9543/services/http:proxy-service-z5t65:portname2/proxy/: bar (200; 4.908566ms) May 26 00:04:44.295: INFO: (18) /api/v1/namespaces/proxy-9543/services/https:proxy-service-z5t65:tlsportname2/proxy/: tls qux (200; 4.979106ms) May 26 00:04:44.295: INFO: (18) /api/v1/namespaces/proxy-9543/services/https:proxy-service-z5t65:tlsportname1/proxy/: tls baz (200; 5.044201ms) May 26 00:04:44.295: INFO: (18) /api/v1/namespaces/proxy-9543/services/proxy-service-z5t65:portname1/proxy/: foo (200; 5.03484ms) May 26 00:04:44.299: INFO: (19) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:162/proxy/: bar (200; 3.95172ms) May 26 00:04:44.299: INFO: (19) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:443/proxy/: ... 
(200; 3.9467ms) May 26 00:04:44.299: INFO: (19) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:462/proxy/: tls qux (200; 3.984771ms) May 26 00:04:44.299: INFO: (19) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr/proxy/: test (200; 3.939131ms) May 26 00:04:44.299: INFO: (19) /api/v1/namespaces/proxy-9543/pods/http:proxy-service-z5t65-kz8nr:162/proxy/: bar (200; 3.960616ms) May 26 00:04:44.299: INFO: (19) /api/v1/namespaces/proxy-9543/services/proxy-service-z5t65:portname1/proxy/: foo (200; 3.996446ms) May 26 00:04:44.299: INFO: (19) /api/v1/namespaces/proxy-9543/pods/https:proxy-service-z5t65-kz8nr:460/proxy/: tls baz (200; 4.011375ms) May 26 00:04:44.299: INFO: (19) /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:1080/proxy/: test<... (200; 4.044777ms) May 26 00:04:44.300: INFO: (19) /api/v1/namespaces/proxy-9543/services/https:proxy-service-z5t65:tlsportname1/proxy/: tls baz (200; 4.635959ms) May 26 00:04:44.300: INFO: (19) /api/v1/namespaces/proxy-9543/services/http:proxy-service-z5t65:portname2/proxy/: bar (200; 4.712208ms) May 26 00:04:44.301: INFO: (19) /api/v1/namespaces/proxy-9543/services/proxy-service-z5t65:portname2/proxy/: bar (200; 5.651023ms) May 26 00:04:44.301: INFO: (19) /api/v1/namespaces/proxy-9543/services/https:proxy-service-z5t65:tlsportname2/proxy/: tls qux (200; 5.631777ms) May 26 00:04:44.301: INFO: (19) /api/v1/namespaces/proxy-9543/services/http:proxy-service-z5t65:portname1/proxy/: foo (200; 5.698714ms) STEP: deleting ReplicationController proxy-service-z5t65 in namespace proxy-9543, will wait for the garbage collector to delete the pods May 26 00:04:44.359: INFO: Deleting ReplicationController proxy-service-z5t65 took: 6.449908ms May 26 00:04:44.660: INFO: Terminating ReplicationController proxy-service-z5t65 pods took: 300.318133ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:04:54.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-9543" for this suite. 
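Each attempt line above is one GET through the API server's proxy subresource: /api/v1/namespaces/<ns>/pods/<scheme>:<pod>:<port>/proxy/<path> for pods, and the services variant for named service ports. A sketch of a single such request with client-go, reusing this run's namespace and pod name purely for illustration (config/clientset set up as in the earlier sketch):

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // Equivalent to an attempt line above, e.g.
        // /api/v1/namespaces/proxy-9543/pods/proxy-service-z5t65-kz8nr:160/proxy/
        body, err := clientset.CoreV1().RESTClient().Get().
            Namespace("proxy-9543").
            Resource("pods").
            Name("proxy-service-z5t65-kz8nr:160"). // "name:port" selects the target port
            SubResource("proxy").
            Suffix("/"). // path after /proxy/
            DoRaw(context.Background())
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s\n", body) // the echo server answers "foo" on port 160
    }

The scheme prefix in the name (http:/https:) chooses whether the API server dials the backend in plain text or over TLS, which is why the :443/:460/:462 attempts above come back with the "tls baz"/"tls qux" bodies.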
• [SLOW TEST:22.104 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":288,"completed":83,"skipped":1452,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:04:54.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 26 00:04:55.082: INFO: Waiting up to 5m0s for pod "downwardapi-volume-65f01c59-5d96-41c0-937b-d5f3260363d2" in namespace "downward-api-4259" to be "Succeeded or Failed" May 26 00:04:55.094: INFO: Pod "downwardapi-volume-65f01c59-5d96-41c0-937b-d5f3260363d2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.034479ms May 26 00:04:57.115: INFO: Pod "downwardapi-volume-65f01c59-5d96-41c0-937b-d5f3260363d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033081455s May 26 00:04:59.151: INFO: Pod "downwardapi-volume-65f01c59-5d96-41c0-937b-d5f3260363d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069492656s STEP: Saw pod success May 26 00:04:59.151: INFO: Pod "downwardapi-volume-65f01c59-5d96-41c0-937b-d5f3260363d2" satisfied condition "Succeeded or Failed" May 26 00:04:59.154: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-65f01c59-5d96-41c0-937b-d5f3260363d2 container client-container: STEP: delete the pod May 26 00:04:59.345: INFO: Waiting for pod downwardapi-volume-65f01c59-5d96-41c0-937b-d5f3260363d2 to disappear May 26 00:04:59.357: INFO: Pod downwardapi-volume-65f01c59-5d96-41c0-937b-d5f3260363d2 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:04:59.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4259" for this suite. 
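The downward-api entry above checks that a downwardAPI volume file wired to limits.memory falls back to reporting the node's allocatable memory when the container declares no memory limit. A sketch of such a pod, with illustrative names and a busybox command standing in for the test's own image and spec (assumptions, not the test's exact manifest):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"}, // hypothetical name
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:  "client-container",
                    Image: "busybox", // stand-in for the test image
                    // Prints the projected file once and exits.
                    Command:      []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                    // No resources.limits.memory here, so the downward API
                    // reports node allocatable memory instead.
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "memory_limit",
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    ContainerName: "client-container",
                                    Resource:      "limits.memory",
                                },
                            }},
                        },
                    },
                }},
            },
        }
        created, err := clientset.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("created pod:", created.Name)
    }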
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":84,"skipped":1521,"failed":0} S ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:04:59.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 26 00:05:03.644: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:05:03.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6736" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":85,"skipped":1522,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:05:03.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 26 00:05:03.862: INFO: Waiting up to 5m0s for pod "busybox-user-65534-299a2e2e-b9be-40ac-af94-5fdb30a4a0d6" in namespace "security-context-test-4198" to be "Succeeded or Failed" May 26 00:05:03.865: INFO: Pod "busybox-user-65534-299a2e2e-b9be-40ac-af94-5fdb30a4a0d6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.468366ms May 26 00:05:06.141: INFO: Pod "busybox-user-65534-299a2e2e-b9be-40ac-af94-5fdb30a4a0d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.279034566s May 26 00:05:08.146: INFO: Pod "busybox-user-65534-299a2e2e-b9be-40ac-af94-5fdb30a4a0d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.283354845s May 26 00:05:08.146: INFO: Pod "busybox-user-65534-299a2e2e-b9be-40ac-af94-5fdb30a4a0d6" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:05:08.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4198" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":86,"skipped":1553,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:05:08.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 26 00:05:08.936: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 26 00:05:10.948: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048308, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048308, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048309, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048308, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 00:05:13.992: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by 
the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:05:24.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3605" for this suite. STEP: Destroying namespace "webhook-3605-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.217 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":288,"completed":87,"skipped":1574,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:05:24.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 26 00:05:24.445: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1c87884a-6019-4ea6-ac63-a5799fef1edf" in namespace "projected-4982" to be "Succeeded or Failed" May 26 00:05:24.458: INFO: Pod "downwardapi-volume-1c87884a-6019-4ea6-ac63-a5799fef1edf": Phase="Pending", Reason="", readiness=false. Elapsed: 13.459676ms May 26 00:05:26.462: INFO: Pod "downwardapi-volume-1c87884a-6019-4ea6-ac63-a5799fef1edf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017497152s May 26 00:05:28.467: INFO: Pod "downwardapi-volume-1c87884a-6019-4ea6-ac63-a5799fef1edf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02212461s STEP: Saw pod success May 26 00:05:28.467: INFO: Pod "downwardapi-volume-1c87884a-6019-4ea6-ac63-a5799fef1edf" satisfied condition "Succeeded or Failed" May 26 00:05:28.470: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-1c87884a-6019-4ea6-ac63-a5799fef1edf container client-container: STEP: delete the pod May 26 00:05:28.632: INFO: Waiting for pod downwardapi-volume-1c87884a-6019-4ea6-ac63-a5799fef1edf to disappear May 26 00:05:28.664: INFO: Pod downwardapi-volume-1c87884a-6019-4ea6-ac63-a5799fef1edf no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:05:28.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4982" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":88,"skipped":1600,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:05:28.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:05:35.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4740" for this suite. • [SLOW TEST:7.109 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":288,"completed":89,"skipped":1606,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:05:35.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs May 26 00:05:35.880: INFO: Waiting up to 5m0s for pod "pod-29f02d1d-fc13-4953-970e-e38a679b7637" in namespace "emptydir-9236" to be "Succeeded or Failed" May 26 00:05:35.914: INFO: Pod "pod-29f02d1d-fc13-4953-970e-e38a679b7637": Phase="Pending", Reason="", readiness=false. Elapsed: 33.734123ms May 26 00:05:37.918: INFO: Pod "pod-29f02d1d-fc13-4953-970e-e38a679b7637": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038093136s May 26 00:05:39.923: INFO: Pod "pod-29f02d1d-fc13-4953-970e-e38a679b7637": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042402256s STEP: Saw pod success May 26 00:05:39.923: INFO: Pod "pod-29f02d1d-fc13-4953-970e-e38a679b7637" satisfied condition "Succeeded or Failed" May 26 00:05:39.926: INFO: Trying to get logs from node latest-worker pod pod-29f02d1d-fc13-4953-970e-e38a679b7637 container test-container: STEP: delete the pod May 26 00:05:39.979: INFO: Waiting for pod pod-29f02d1d-fc13-4953-970e-e38a679b7637 to disappear May 26 00:05:39.992: INFO: Pod pod-29f02d1d-fc13-4953-970e-e38a679b7637 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:05:39.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9236" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":90,"skipped":1608,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:05:40.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium May 26 00:05:40.127: INFO: Waiting up to 5m0s for pod "pod-826276cf-6dbc-46a0-bb70-76a7393760b7" in namespace "emptydir-9570" to be "Succeeded or Failed" May 26 00:05:40.130: INFO: Pod "pod-826276cf-6dbc-46a0-bb70-76a7393760b7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.012178ms May 26 00:05:42.135: INFO: Pod "pod-826276cf-6dbc-46a0-bb70-76a7393760b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007337169s May 26 00:05:44.139: INFO: Pod "pod-826276cf-6dbc-46a0-bb70-76a7393760b7": Phase="Running", Reason="", readiness=true. Elapsed: 4.011566045s May 26 00:05:46.144: INFO: Pod "pod-826276cf-6dbc-46a0-bb70-76a7393760b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016311997s STEP: Saw pod success May 26 00:05:46.144: INFO: Pod "pod-826276cf-6dbc-46a0-bb70-76a7393760b7" satisfied condition "Succeeded or Failed" May 26 00:05:46.146: INFO: Trying to get logs from node latest-worker2 pod pod-826276cf-6dbc-46a0-bb70-76a7393760b7 container test-container: STEP: delete the pod May 26 00:05:46.200: INFO: Waiting for pod pod-826276cf-6dbc-46a0-bb70-76a7393760b7 to disappear May 26 00:05:46.209: INFO: Pod pod-826276cf-6dbc-46a0-bb70-76a7393760b7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:05:46.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9570" for this suite. 
• [SLOW TEST:6.216 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":91,"skipped":1640,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:05:46.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0526 00:05:47.389611 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 26 00:05:47.389: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:05:47.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4853" for this suite. 
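The garbage-collector spec above deletes a Deployment without orphaning, then polls until the dependent ReplicaSet and Pods are collected (hence the transient "expected 0 rs, got 1 rs" lines). In API terms, non-orphaning deletion corresponds to a propagation policy of Background or Foreground. A sketch of the request body, shown as YAML for readability (the actual call sends JSON):

# Body for DELETE /apis/apps/v1/namespaces/<namespace>/deployments/<name>
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Background        # dependents are garbage collected asynchronously
# propagationPolicy: Orphan would leave the ReplicaSet and Pods behind,
# which this spec must not observe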
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":288,"completed":92,"skipped":1649,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:05:47.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 26 00:05:47.562: INFO: Waiting up to 5m0s for pod "pod-9b2a2e96-5246-4597-ae82-043a8736fab6" in namespace "emptydir-1756" to be "Succeeded or Failed" May 26 00:05:47.606: INFO: Pod "pod-9b2a2e96-5246-4597-ae82-043a8736fab6": Phase="Pending", Reason="", readiness=false. Elapsed: 44.191254ms May 26 00:05:49.622: INFO: Pod "pod-9b2a2e96-5246-4597-ae82-043a8736fab6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0596651s May 26 00:05:51.666: INFO: Pod "pod-9b2a2e96-5246-4597-ae82-043a8736fab6": Phase="Running", Reason="", readiness=true. Elapsed: 4.104318647s May 26 00:05:53.671: INFO: Pod "pod-9b2a2e96-5246-4597-ae82-043a8736fab6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.109499725s STEP: Saw pod success May 26 00:05:53.672: INFO: Pod "pod-9b2a2e96-5246-4597-ae82-043a8736fab6" satisfied condition "Succeeded or Failed" May 26 00:05:53.675: INFO: Trying to get logs from node latest-worker pod pod-9b2a2e96-5246-4597-ae82-043a8736fab6 container test-container: STEP: delete the pod May 26 00:05:53.701: INFO: Waiting for pod pod-9b2a2e96-5246-4597-ae82-043a8736fab6 to disappear May 26 00:05:53.705: INFO: Pod pod-9b2a2e96-5246-4597-ae82-043a8736fab6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:05:53.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1756" for this suite. 
• [SLOW TEST:6.316 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":93,"skipped":1681,"failed":0} [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:05:53.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:05:57.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2777" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":288,"completed":94,"skipped":1681,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:05:57.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-projected-5xps STEP: Creating a pod to test atomic-volume-subpath May 26 00:05:58.045: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-5xps" in namespace "subpath-9919" to be "Succeeded or Failed" May 26 00:05:58.115: INFO: Pod "pod-subpath-test-projected-5xps": Phase="Pending", Reason="", readiness=false. Elapsed: 69.93225ms May 26 00:06:00.119: INFO: Pod "pod-subpath-test-projected-5xps": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073673061s May 26 00:06:02.123: INFO: Pod "pod-subpath-test-projected-5xps": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.077389121s May 26 00:06:04.127: INFO: Pod "pod-subpath-test-projected-5xps": Phase="Running", Reason="", readiness=true. Elapsed: 6.081568267s May 26 00:06:06.149: INFO: Pod "pod-subpath-test-projected-5xps": Phase="Running", Reason="", readiness=true. Elapsed: 8.103766849s May 26 00:06:08.153: INFO: Pod "pod-subpath-test-projected-5xps": Phase="Running", Reason="", readiness=true. Elapsed: 10.107758369s May 26 00:06:10.157: INFO: Pod "pod-subpath-test-projected-5xps": Phase="Running", Reason="", readiness=true. Elapsed: 12.112176223s May 26 00:06:12.162: INFO: Pod "pod-subpath-test-projected-5xps": Phase="Running", Reason="", readiness=true. Elapsed: 14.116250136s May 26 00:06:14.166: INFO: Pod "pod-subpath-test-projected-5xps": Phase="Running", Reason="", readiness=true. Elapsed: 16.121016087s May 26 00:06:16.200: INFO: Pod "pod-subpath-test-projected-5xps": Phase="Running", Reason="", readiness=true. Elapsed: 18.154379795s May 26 00:06:18.204: INFO: Pod "pod-subpath-test-projected-5xps": Phase="Running", Reason="", readiness=true. Elapsed: 20.158714146s May 26 00:06:20.209: INFO: Pod "pod-subpath-test-projected-5xps": Phase="Running", Reason="", readiness=true. Elapsed: 22.163829564s May 26 00:06:22.214: INFO: Pod "pod-subpath-test-projected-5xps": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.168443947s STEP: Saw pod success May 26 00:06:22.214: INFO: Pod "pod-subpath-test-projected-5xps" satisfied condition "Succeeded or Failed" May 26 00:06:22.220: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-projected-5xps container test-container-subpath-projected-5xps: STEP: delete the pod May 26 00:06:22.291: INFO: Waiting for pod pod-subpath-test-projected-5xps to disappear May 26 00:06:22.298: INFO: Pod pod-subpath-test-projected-5xps no longer exists STEP: Deleting pod pod-subpath-test-projected-5xps May 26 00:06:22.298: INFO: Deleting pod "pod-subpath-test-projected-5xps" in namespace "subpath-9919" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:06:22.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9919" for this suite. 
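The Subpath spec mounts a single entry of a projected (atomic-writer) volume via subPath and keeps the pod Running while the content is read repeatedly, which is why the phase poll above shows many Running samples before Succeeded. A minimal sketch with a hypothetical ConfigMap name and key:

apiVersion: v1
kind: Pod
metadata:
  name: subpath-projected-demo       # illustrative
spec:
  restartPolicy: Never
  volumes:
  - name: proj
    projected:
      sources:
      - configMap:
          name: demo-config          # assumed to exist, with a key named "key"
  containers:
  - name: test
    image: busybox                   # stand-in image
    command: ["sh", "-c", "for i in $(seq 1 20); do cat /data/key; sleep 1; done"]
    volumeMounts:
    - name: proj
      mountPath: /data/key
      subPath: key                   # mount exactly one file from the atomic-writer volume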
• [SLOW TEST:24.441 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":288,"completed":95,"skipped":1697,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:06:22.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted May 26 00:06:35.243: INFO: 5 pods remaining May 26 00:06:35.243: INFO: 5 pods has nil DeletionTimestamp May 26 00:06:35.243: INFO: STEP: Gathering metrics W0526 00:06:39.909095 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 26 00:06:39.909: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:06:39.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6707" for this suite. 
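In the spec above, half of the pods carry two ownerReferences: simpletest-rc-to-be-deleted (deleted while waiting for its dependents, per the test name) and simpletest-rc-to-stay. Because one valid owner remains, the garbage collector must leave those pods alone. On a pod, the metadata looks roughly like this; the uids are placeholders:

metadata:
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted
    uid: 00000000-0000-0000-0000-000000000001    # placeholder
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay
    uid: 00000000-0000-0000-0000-000000000002    # placeholder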
• [SLOW TEST:17.600 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":288,"completed":96,"skipped":1703,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:06:39.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 26 00:06:39.972: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d3c5f95a-3392-4b0e-9326-251fe06ecb60" in namespace "downward-api-3752" to be "Succeeded or Failed" May 26 00:06:40.026: INFO: Pod "downwardapi-volume-d3c5f95a-3392-4b0e-9326-251fe06ecb60": Phase="Pending", Reason="", readiness=false. Elapsed: 54.049461ms May 26 00:06:42.029: INFO: Pod "downwardapi-volume-d3c5f95a-3392-4b0e-9326-251fe06ecb60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056956402s May 26 00:06:44.032: INFO: Pod "downwardapi-volume-d3c5f95a-3392-4b0e-9326-251fe06ecb60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060260934s STEP: Saw pod success May 26 00:06:44.032: INFO: Pod "downwardapi-volume-d3c5f95a-3392-4b0e-9326-251fe06ecb60" satisfied condition "Succeeded or Failed" May 26 00:06:44.035: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-d3c5f95a-3392-4b0e-9326-251fe06ecb60 container client-container: STEP: delete the pod May 26 00:06:44.099: INFO: Waiting for pod downwardapi-volume-d3c5f95a-3392-4b0e-9326-251fe06ecb60 to disappear May 26 00:06:44.132: INFO: Pod downwardapi-volume-d3c5f95a-3392-4b0e-9326-251fe06ecb60 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:06:44.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3752" for this suite. 
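The "set mode on item file" spec sets a per-item mode in a downwardAPI volume, which overrides the volume-wide defaultMode for that one file. The log does not show the exact values, so the modes below are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo        # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # stand-in image
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0644              # applies to items without their own mode
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400                   # per-item override the test asserts on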
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":97,"skipped":1723,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:06:44.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 26 00:06:44.249: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 00:06:44.300: INFO: Number of nodes with available pods: 0 May 26 00:06:44.300: INFO: Node latest-worker is running more than one daemon pod May 26 00:06:45.320: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 00:06:45.351: INFO: Number of nodes with available pods: 0 May 26 00:06:45.351: INFO: Node latest-worker is running more than one daemon pod May 26 00:06:46.531: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 00:06:46.733: INFO: Number of nodes with available pods: 0 May 26 00:06:46.734: INFO: Node latest-worker is running more than one daemon pod May 26 00:06:47.470: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 00:06:47.492: INFO: Number of nodes with available pods: 0 May 26 00:06:47.492: INFO: Node latest-worker is running more than one daemon pod May 26 00:06:48.319: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 00:06:48.596: INFO: Number of nodes with available pods: 0 May 26 00:06:48.596: INFO: Node latest-worker is running more than one daemon pod May 26 00:06:49.381: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 00:06:49.385: INFO: Number of nodes with available pods: 2 May 26 00:06:49.385: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
May 26 00:06:49.968: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 00:06:50.320: INFO: Number of nodes with available pods: 1 May 26 00:06:50.320: INFO: Node latest-worker is running more than one daemon pod May 26 00:06:51.782: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 00:06:51.795: INFO: Number of nodes with available pods: 1 May 26 00:06:51.795: INFO: Node latest-worker is running more than one daemon pod May 26 00:06:52.324: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 00:06:52.328: INFO: Number of nodes with available pods: 1 May 26 00:06:52.328: INFO: Node latest-worker is running more than one daemon pod May 26 00:06:53.326: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 00:06:53.330: INFO: Number of nodes with available pods: 1 May 26 00:06:53.330: INFO: Node latest-worker is running more than one daemon pod May 26 00:06:54.326: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 00:06:54.329: INFO: Number of nodes with available pods: 2 May 26 00:06:54.329: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-412, will wait for the garbage collector to delete the pods May 26 00:06:54.392: INFO: Deleting DaemonSet.extensions daemon-set took: 6.560429ms May 26 00:06:54.492: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.478582ms May 26 00:07:05.421: INFO: Number of nodes with available pods: 0 May 26 00:07:05.421: INFO: Number of running nodes: 0, number of available pods: 0 May 26 00:07:05.424: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-412/daemonsets","resourceVersion":"7681208"},"items":null} May 26 00:07:05.427: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-412/pods","resourceVersion":"7681208"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:07:05.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-412" for this suite. 
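The DaemonSet spec creates a trivial DaemonSet, marks one daemon pod Failed, and verifies the controller recreates it. The repeated "can't tolerate node latest-control-plane" lines are expected: the pod template carries no toleration for the control-plane node's NoSchedule taint, so only the two worker nodes count. A minimal DaemonSet of the same shape (label and image are stand-ins):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set                # illustrative label
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: httpd:2.4.38-alpine   # stand-in; the suite uses its own test images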
• [SLOW TEST:21.304 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":288,"completed":98,"skipped":1739,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:07:05.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:07:10.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6347" for this suite. 
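Adoption is driven by label selection: a bare pod created first with a matching label acquires an ownerReference when a ReplicationController with that selector appears. An illustrative pair (image and command are stand-ins):

apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: app
    image: busybox                   # stand-in image
    command: ["sleep", "3600"]
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption               # matches the bare pod, so it is adopted
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]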
• [SLOW TEST:5.276 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":288,"completed":99,"skipped":1777,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:07:10.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 26 00:07:10.813: INFO: Waiting up to 5m0s for pod "downwardapi-volume-772d56dd-81aa-4c0f-aa68-d94fe38d2a42" in namespace "downward-api-6137" to be "Succeeded or Failed" May 26 00:07:10.879: INFO: Pod "downwardapi-volume-772d56dd-81aa-4c0f-aa68-d94fe38d2a42": Phase="Pending", Reason="", readiness=false. Elapsed: 66.153147ms May 26 00:07:12.884: INFO: Pod "downwardapi-volume-772d56dd-81aa-4c0f-aa68-d94fe38d2a42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07104933s May 26 00:07:14.888: INFO: Pod "downwardapi-volume-772d56dd-81aa-4c0f-aa68-d94fe38d2a42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.074887869s STEP: Saw pod success May 26 00:07:14.888: INFO: Pod "downwardapi-volume-772d56dd-81aa-4c0f-aa68-d94fe38d2a42" satisfied condition "Succeeded or Failed" May 26 00:07:14.891: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-772d56dd-81aa-4c0f-aa68-d94fe38d2a42 container client-container: STEP: delete the pod May 26 00:07:14.910: INFO: Waiting for pod downwardapi-volume-772d56dd-81aa-4c0f-aa68-d94fe38d2a42 to disappear May 26 00:07:14.960: INFO: Pod downwardapi-volume-772d56dd-81aa-4c0f-aa68-d94fe38d2a42 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:07:14.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6137" for this suite. 
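Surfacing a container's CPU limit through a downwardAPI volume uses resourceFieldRef (not fieldRef), naming the container and the resource; a divisor controls the unit. The concrete values below are illustrative, since the log only shows the pod lifecycle:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-limit-demo   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # stand-in image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m                # with a 500m limit, the file reads "500"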
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":100,"skipped":1794,"failed":0} ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:07:14.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 26 00:07:19.058: INFO: &Pod{ObjectMeta:{send-events-420cbbd7-bbda-4f31-b394-97aea2392c06 events-5426 /api/v1/namespaces/events-5426/pods/send-events-420cbbd7-bbda-4f31-b394-97aea2392c06 8b817f40-4a56-430e-89d7-d99f3aebf019 7681326 0 2020-05-26 00:07:15 +0000 UTC map[name:foo time:25985317] map[] [] [] [{e2e.test Update v1 2020-05-26 00:07:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 00:07:18 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.130\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8b774,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8b774,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8b774,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 00:07:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 00:07:18 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 00:07:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 00:07:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.130,StartTime:2020-05-26 00:07:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-26 00:07:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://b2914be6c5a84ee4c5a380a5751b4e8a51ee03dc7089c14fc2f5103e27a2395c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.130,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 26 00:07:21.063: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 26 00:07:23.067: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:07:23.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5426" for this suite. • [SLOW TEST:8.166 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":288,"completed":101,"skipped":1794,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:07:23.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 26 00:07:23.319: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 26 00:07:28.326: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 26 00:07:28.326: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 26 00:07:30.331: INFO: Creating deployment "test-rollover-deployment" May 26 00:07:30.363: INFO: Make sure 
deployment "test-rollover-deployment" performs scaling operations May 26 00:07:32.369: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 26 00:07:32.376: INFO: Ensure that both replica sets have 1 created replica May 26 00:07:32.383: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 26 00:07:32.391: INFO: Updating deployment test-rollover-deployment May 26 00:07:32.391: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 26 00:07:34.406: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 26 00:07:34.412: INFO: Make sure deployment "test-rollover-deployment" is complete May 26 00:07:34.418: INFO: all replica sets need to contain the pod-template-hash label May 26 00:07:34.418: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048450, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048450, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048452, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048450, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 00:07:36.426: INFO: all replica sets need to contain the pod-template-hash label May 26 00:07:36.426: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048450, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048450, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048456, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048450, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 00:07:38.427: INFO: all replica sets need to contain the pod-template-hash label May 26 00:07:38.427: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048450, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048450, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048456, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048450, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 00:07:40.427: INFO: all replica sets need to contain the pod-template-hash label May 26 00:07:40.427: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048450, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048450, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048456, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048450, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 00:07:42.426: INFO: all replica sets need to contain the pod-template-hash label May 26 00:07:42.426: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048450, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048450, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048456, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048450, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 00:07:44.427: INFO: all replica sets need to contain the pod-template-hash label May 26 00:07:44.427: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048450, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048450, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048456, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726048450, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 00:07:46.547: INFO: May 26 00:07:46.547: INFO: Ensure that 
both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 26 00:07:46.556: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-5616 /apis/apps/v1/namespaces/deployment-5616/deployments/test-rollover-deployment c4a8e82d-9c17-41e5-accc-e53dc201f21b 7681505 2 2020-05-26 00:07:30 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-26 00:07:32 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-26 00:07:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001f9b4b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-26 00:07:30 +0000 UTC,LastTransitionTime:2020-05-26 00:07:30 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-7c4fd9c879" has successfully 
progressed.,LastUpdateTime:2020-05-26 00:07:46 +0000 UTC,LastTransitionTime:2020-05-26 00:07:30 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 26 00:07:46.560: INFO: New ReplicaSet "test-rollover-deployment-7c4fd9c879" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-7c4fd9c879 deployment-5616 /apis/apps/v1/namespaces/deployment-5616/replicasets/test-rollover-deployment-7c4fd9c879 89a2eef4-a4ad-4cbc-ab73-40c072788265 7681493 2 2020-05-26 00:07:32 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment c4a8e82d-9c17-41e5-accc-e53dc201f21b 0xc000bc6097 0xc000bc6098}] [] [{kube-controller-manager Update apps/v1 2020-05-26 00:07:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c4a8e82d-9c17-41e5-accc-e53dc201f21b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 7c4fd9c879,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000bc6128 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 26 00:07:46.560: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 26 00:07:46.560: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-5616 /apis/apps/v1/namespaces/deployment-5616/replicasets/test-rollover-controller a17d14e8-4998-4e3b-80fc-8f7a73253d0b 7681504 2 2020-05-26 00:07:23 +0000 UTC map[name:rollover-pod pod:httpd] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment c4a8e82d-9c17-41e5-accc-e53dc201f21b 0xc0023f1d47 0xc0023f1d48}] [] [{e2e.test Update apps/v1 2020-05-26 00:07:23 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-26 00:07:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c4a8e82d-9c17-41e5-accc-e53dc201f21b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0023f1e78 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 26 00:07:46.560: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5 deployment-5616 /apis/apps/v1/namespaces/deployment-5616/replicasets/test-rollover-deployment-5686c4cfd5 52a649e1-5ae3-4924-b983-58c25c4a36b3 7681443 2 2020-05-26 00:07:30 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment c4a8e82d-9c17-41e5-accc-e53dc201f21b 0xc0023f1f57 0xc0023f1f58}] [] [{kube-controller-manager Update apps/v1 2020-05-26 00:07:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c4a8e82d-9c17-41e5-accc-e53dc201f21b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000bc6028 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 26 00:07:46.563: INFO: Pod "test-rollover-deployment-7c4fd9c879-4htxz" is available: &Pod{ObjectMeta:{test-rollover-deployment-7c4fd9c879-4htxz test-rollover-deployment-7c4fd9c879- deployment-5616 /api/v1/namespaces/deployment-5616/pods/test-rollover-deployment-7c4fd9c879-4htxz 7dcda2cd-ce91-4eba-9e33-486dd6e32cc6 7681459 0 2020-05-26 00:07:32 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [{apps/v1 ReplicaSet test-rollover-deployment-7c4fd9c879 89a2eef4-a4ad-4cbc-ab73-40c072788265 0xc000bc66f7 0xc000bc66f8}] [] [{kube-controller-manager Update v1 2020-05-26 00:07:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"89a2eef4-a4ad-4cbc-ab73-40c072788265\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 00:07:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.128\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lhcgh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lhcgh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lhcgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 00:07:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-26 00:07:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 00:07:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 00:07:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.128,StartTime:2020-05-26 00:07:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-26 00:07:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://00cd795641ed52ffdbc07d85266adc292ec5bfe8efc706bef05bd09a97577249,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.128,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:07:46.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5616" for this suite. • [SLOW TEST:23.435 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":288,"completed":102,"skipped":1830,"failed":0} SSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:07:46.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:80 May 26 00:07:46.785: INFO: Waiting up to 1m0s for all nodes to be ready May 26 00:08:46.811: INFO: Waiting for terminating namespaces to be deleted... 
[BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:08:46.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:467 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. May 26 00:08:50.971: INFO: found a healthy node: latest-worker2 [It] runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 26 00:09:07.119: INFO: pods created so far: [1 1 1] May 26 00:09:07.119: INFO: length of pods created so far: 3 May 26 00:09:21.129: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:09:28.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-1456" for this suite. [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:439 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:09:28.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-5426" for this suite. 
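The preemption path above depends on pods carrying different priorities: the scheduler evicts lower-priority victims to make room for a pending higher-priority pod. A minimal client-go sketch of that setup follows; the class name, value, image, and namespace are hypothetical, not the ones the suite uses.

    // Creates a PriorityClass and a pod that references it; the scheduler may
    // preempt lower-priority pods to place the new pod.
    package sketches

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        schedulingv1 "k8s.io/api/scheduling/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func createHighPriorityPod(ctx context.Context, cs *kubernetes.Clientset) error {
        pc := &schedulingv1.PriorityClass{
            ObjectMeta: metav1.ObjectMeta{Name: "sketch-high"}, // hypothetical name
            Value:      1000,                                   // higher value wins preemption
        }
        if _, err := cs.SchedulingV1().PriorityClasses().Create(ctx, pc, metav1.CreateOptions{}); err != nil {
            return err
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "sketch-preemptor", Namespace: "default"},
            Spec: corev1.PodSpec{
                PriorityClassName: pc.Name,
                Containers: []corev1.Container{{
                    Name:  "pause",
                    Image: "k8s.gcr.io/pause:3.2",
                }},
            },
        }
        _, err := cs.CoreV1().Pods(pod.Namespace).Create(ctx, pod, metav1.CreateOptions{})
        return err
    }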
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:74 • [SLOW TEST:101.717 seconds] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:428 runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":288,"completed":103,"skipped":1840,"failed":0} SSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:09:28.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0526 00:10:09.056411 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 26 00:10:09.056: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:10:09.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1165" for this suite. 
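The garbage-collector spec above hinges on DeleteOptions.PropagationPolicy: with Orphan propagation the ReplicationController is deleted but its pods are deliberately left alive, and the suite then waits 30 seconds to confirm the GC does not remove them. A sketch of the delete call (namespace and name are hypothetical parameters):

    // Deletes a ReplicationController with Orphan propagation: the RC goes away,
    // its pods survive, and the garbage collector must not clean them up.
    package sketches

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func orphanDeleteRC(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
        orphan := metav1.DeletePropagationOrphan
        return cs.CoreV1().ReplicationControllers(ns).Delete(ctx, name,
            metav1.DeleteOptions{PropagationPolicy: &orphan})
    }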
• [SLOW TEST:40.775 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":288,"completed":104,"skipped":1843,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:10:09.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 26 00:10:09.190: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:10:17.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7865" for this suite. 
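The init-container spec above boils down to a RestartNever pod whose initContainers must all run to completion, in order, before the app container starts. A minimal sketch of such a pod (names and image hypothetical):

    // A RestartNever pod with two init containers; the kubelet runs them in
    // order and starts the main container only after both exit successfully.
    package sketches

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func initContainerPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "sketch-init", Namespace: "default"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                InitContainers: []corev1.Container{
                    {Name: "init1", Image: "busybox:1.29", Command: []string{"true"}},
                    {Name: "init2", Image: "busybox:1.29", Command: []string{"true"}},
                },
                Containers: []corev1.Container{
                    {Name: "run1", Image: "busybox:1.29", Command: []string{"true"}},
                },
            },
        }
    }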
• [SLOW TEST:8.790 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":288,"completed":105,"skipped":1866,"failed":0} [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:10:17.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-4061/secret-test-0e043214-e34e-426f-bb86-7c2dc7a86996 STEP: Creating a pod to test consume secrets May 26 00:10:18.579: INFO: Waiting up to 5m0s for pod "pod-configmaps-e3e06092-1dad-4017-a669-11762a2321d4" in namespace "secrets-4061" to be "Succeeded or Failed" May 26 00:10:18.891: INFO: Pod "pod-configmaps-e3e06092-1dad-4017-a669-11762a2321d4": Phase="Pending", Reason="", readiness=false. Elapsed: 312.023499ms May 26 00:10:20.992: INFO: Pod "pod-configmaps-e3e06092-1dad-4017-a669-11762a2321d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.412851097s May 26 00:10:23.013: INFO: Pod "pod-configmaps-e3e06092-1dad-4017-a669-11762a2321d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434115427s May 26 00:10:25.017: INFO: Pod "pod-configmaps-e3e06092-1dad-4017-a669-11762a2321d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.437912401s STEP: Saw pod success May 26 00:10:25.017: INFO: Pod "pod-configmaps-e3e06092-1dad-4017-a669-11762a2321d4" satisfied condition "Succeeded or Failed" May 26 00:10:25.019: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-e3e06092-1dad-4017-a669-11762a2321d4 container env-test: STEP: delete the pod May 26 00:10:25.068: INFO: Waiting for pod pod-configmaps-e3e06092-1dad-4017-a669-11762a2321d4 to disappear May 26 00:10:25.087: INFO: Pod pod-configmaps-e3e06092-1dad-4017-a669-11762a2321d4 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:10:25.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4061" for this suite. 
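The Secrets spec above wires a secret key into a container's environment and then checks the container's output. A sketch of the wiring; the secret name, key, and variable are hypothetical:

    // Exposes key "data-1" of secret "sketch-secret" as env var SECRET_DATA;
    // the container just echoes it, mirroring the env-test container above.
    package sketches

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func secretEnvPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "sketch-secret-env", Namespace: "default"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "env-test",
                    Image:   "busybox:1.29",
                    Command: []string{"sh", "-c", "echo $SECRET_DATA"},
                    Env: []corev1.EnvVar{{
                        Name: "SECRET_DATA",
                        ValueFrom: &corev1.EnvVarSource{
                            SecretKeyRef: &corev1.SecretKeySelector{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "sketch-secret"},
                                Key:                  "data-1",
                            },
                        },
                    }},
                }},
            },
        }
    }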
• [SLOW TEST:7.240 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":106,"skipped":1866,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:10:25.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container May 26 00:10:29.225: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-4386 PodName:pod-sharedvolume-6adf3afe-195d-4e43-bbc0-6eb6208e8dcc ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 00:10:29.225: INFO: >>> kubeConfig: /root/.kube/config I0526 00:10:29.255306 7 log.go:172] (0xc002a713f0) (0xc002994140) Create stream I0526 00:10:29.255336 7 log.go:172] (0xc002a713f0) (0xc002994140) Stream added, broadcasting: 1 I0526 00:10:29.257051 7 log.go:172] (0xc002a713f0) Reply frame received for 1 I0526 00:10:29.257089 7 log.go:172] (0xc002a713f0) (0xc00135a140) Create stream I0526 00:10:29.257102 7 log.go:172] (0xc002a713f0) (0xc00135a140) Stream added, broadcasting: 3 I0526 00:10:29.258389 7 log.go:172] (0xc002a713f0) Reply frame received for 3 I0526 00:10:29.258409 7 log.go:172] (0xc002a713f0) (0xc002994280) Create stream I0526 00:10:29.258419 7 log.go:172] (0xc002a713f0) (0xc002994280) Stream added, broadcasting: 5 I0526 00:10:29.259179 7 log.go:172] (0xc002a713f0) Reply frame received for 5 I0526 00:10:29.316874 7 log.go:172] (0xc002a713f0) Data frame received for 5 I0526 00:10:29.316978 7 log.go:172] (0xc002994280) (5) Data frame handling I0526 00:10:29.317016 7 log.go:172] (0xc002a713f0) Data frame received for 3 I0526 00:10:29.317032 7 log.go:172] (0xc00135a140) (3) Data frame handling I0526 00:10:29.317097 7 log.go:172] (0xc00135a140) (3) Data frame sent I0526 00:10:29.317328 7 log.go:172] (0xc002a713f0) Data frame received for 3 I0526 00:10:29.317357 7 log.go:172] (0xc00135a140) (3) Data frame handling I0526 00:10:29.318776 7 log.go:172] (0xc002a713f0) Data frame received for 1 I0526 00:10:29.318811 7 log.go:172] (0xc002994140) (1) Data frame handling I0526 00:10:29.318833 7 log.go:172] (0xc002994140) (1) Data frame sent I0526 00:10:29.318851 7 log.go:172] (0xc002a713f0) (0xc002994140) Stream removed, broadcasting: 1 I0526 00:10:29.318953 7 log.go:172] (0xc002a713f0) Go away received I0526 00:10:29.319031 7 log.go:172] (0xc002a713f0) (0xc002994140) Stream
removed, broadcasting: 1 I0526 00:10:29.319077 7 log.go:172] (0xc002a713f0) (0xc00135a140) Stream removed, broadcasting: 3 I0526 00:10:29.319100 7 log.go:172] (0xc002a713f0) (0xc002994280) Stream removed, broadcasting: 5 May 26 00:10:29.319: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:10:29.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4386" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":288,"completed":107,"skipped":1885,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:10:29.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs May 26 00:10:29.449: INFO: Waiting up to 5m0s for pod "pod-e12870ad-4cd2-4a0e-912e-b9e994b7eab7" in namespace "emptydir-6497" to be "Succeeded or Failed" May 26 00:10:29.464: INFO: Pod "pod-e12870ad-4cd2-4a0e-912e-b9e994b7eab7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.371013ms May 26 00:10:31.468: INFO: Pod "pod-e12870ad-4cd2-4a0e-912e-b9e994b7eab7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019161378s May 26 00:10:33.475: INFO: Pod "pod-e12870ad-4cd2-4a0e-912e-b9e994b7eab7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025749451s STEP: Saw pod success May 26 00:10:33.475: INFO: Pod "pod-e12870ad-4cd2-4a0e-912e-b9e994b7eab7" satisfied condition "Succeeded or Failed" May 26 00:10:33.478: INFO: Trying to get logs from node latest-worker2 pod pod-e12870ad-4cd2-4a0e-912e-b9e994b7eab7 container test-container: STEP: delete the pod May 26 00:10:33.566: INFO: Waiting for pod pod-e12870ad-4cd2-4a0e-912e-b9e994b7eab7 to disappear May 26 00:10:33.697: INFO: Pod pod-e12870ad-4cd2-4a0e-912e-b9e994b7eab7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:10:33.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6497" for this suite. 
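Both emptyDir specs in this stretch reduce to one emptyDir volume mounted by every container in the pod; setting the medium to Memory gives the tmpfs-backed variant. A sketch (paths, names, and image hypothetical):

    // One emptyDir (tmpfs-backed) mounted by a writer and a reader container;
    // whatever the writer puts in the volume is visible to the reader.
    package sketches

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func sharedEmptyDirPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "sketch-shared-volume", Namespace: "default"},
            Spec: corev1.PodSpec{
                Volumes: []corev1.Volume{{
                    Name: "share",
                    VolumeSource: corev1.VolumeSource{
                        EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                    },
                }},
                Containers: []corev1.Container{
                    {
                        Name:         "writer",
                        Image:        "busybox:1.29",
                        Command:      []string{"sh", "-c", "echo hello > /share/data.txt && sleep 3600"},
                        VolumeMounts: []corev1.VolumeMount{{Name: "share", MountPath: "/share"}},
                    },
                    {
                        Name:         "reader",
                        Image:        "busybox:1.29",
                        Command:      []string{"sh", "-c", "sleep 3600"},
                        VolumeMounts: []corev1.VolumeMount{{Name: "share", MountPath: "/share"}},
                    },
                },
            },
        }
    }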
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":108,"skipped":1917,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:10:33.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-9320 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in namespace statefulset-9320 May 26 00:10:33.883: INFO: Found 0 stateful pods, waiting for 1 May 26 00:10:43.888: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 26 00:10:43.914: INFO: Deleting all statefulset in ns statefulset-9320 May 26 00:10:43.947: INFO: Scaling statefulset ss to 0 May 26 00:11:04.048: INFO: Waiting for statefulset status.replicas updated to 0 May 26 00:11:04.052: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:11:04.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9320" for this suite. 
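The "getting scale subresource ... updating a scale subresource" steps above map directly onto the GetScale/UpdateScale calls in client-go, which modify spec.replicas through the /scale endpoint without touching the rest of the StatefulSet. A sketch:

    // Reads the /scale subresource of a StatefulSet and bumps spec.replicas.
    package sketches

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func scaleStatefulSet(ctx context.Context, cs *kubernetes.Clientset, ns, name string, replicas int32) error {
        scale, err := cs.AppsV1().StatefulSets(ns).GetScale(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        scale.Spec.Replicas = replicas
        _, err = cs.AppsV1().StatefulSets(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
        return err
    }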
• [SLOW TEST:30.380 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":288,"completed":109,"skipped":1919,"failed":0} [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:11:04.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs May 26 00:11:04.163: INFO: Waiting up to 5m0s for pod "pod-09c3cbf5-78f4-4e4b-8709-e2c2a37303d2" in namespace "emptydir-1582" to be "Succeeded or Failed" May 26 00:11:04.182: INFO: Pod "pod-09c3cbf5-78f4-4e4b-8709-e2c2a37303d2": Phase="Pending", Reason="", readiness=false. Elapsed: 19.504317ms May 26 00:11:06.322: INFO: Pod "pod-09c3cbf5-78f4-4e4b-8709-e2c2a37303d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.159397968s May 26 00:11:08.327: INFO: Pod "pod-09c3cbf5-78f4-4e4b-8709-e2c2a37303d2": Phase="Running", Reason="", readiness=true. Elapsed: 4.164070156s May 26 00:11:10.331: INFO: Pod "pod-09c3cbf5-78f4-4e4b-8709-e2c2a37303d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.167999192s STEP: Saw pod success May 26 00:11:10.331: INFO: Pod "pod-09c3cbf5-78f4-4e4b-8709-e2c2a37303d2" satisfied condition "Succeeded or Failed" May 26 00:11:10.336: INFO: Trying to get logs from node latest-worker2 pod pod-09c3cbf5-78f4-4e4b-8709-e2c2a37303d2 container test-container: STEP: delete the pod May 26 00:11:10.381: INFO: Waiting for pod pod-09c3cbf5-78f4-4e4b-8709-e2c2a37303d2 to disappear May 26 00:11:10.453: INFO: Pod pod-09c3cbf5-78f4-4e4b-8709-e2c2a37303d2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:11:10.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1582" for this suite. 
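The "Succeeded or Failed" waits that recur throughout these specs are a plain poll on pod phase. A sketch of the pattern; the interval and timeout here are hypothetical:

    // Polls a pod until it reaches Succeeded, erroring out early on Failed,
    // roughly what the framework's "Succeeded or Failed" condition does.
    package sketches

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitForPodSuccess(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
        return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            switch pod.Status.Phase {
            case corev1.PodSucceeded:
                return true, nil
            case corev1.PodFailed:
                return false, fmt.Errorf("pod %s/%s failed", ns, name)
            default:
                return false, nil // still Pending/Running, keep polling
            }
        })
    }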
• [SLOW TEST:6.373 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":110,"skipped":1919,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:11:10.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC May 26 00:11:10.525: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7902' May 26 00:11:14.998: INFO: stderr: "" May 26 00:11:14.998: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 26 00:11:16.003: INFO: Selector matched 1 pods for map[app:agnhost] May 26 00:11:16.003: INFO: Found 0 / 1 May 26 00:11:17.003: INFO: Selector matched 1 pods for map[app:agnhost] May 26 00:11:17.003: INFO: Found 0 / 1 May 26 00:11:18.003: INFO: Selector matched 1 pods for map[app:agnhost] May 26 00:11:18.003: INFO: Found 0 / 1 May 26 00:11:19.003: INFO: Selector matched 1 pods for map[app:agnhost] May 26 00:11:19.003: INFO: Found 1 / 1 May 26 00:11:19.003: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 26 00:11:19.006: INFO: Selector matched 1 pods for map[app:agnhost] May 26 00:11:19.006: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 26 00:11:19.006: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config patch pod agnhost-master-qkgbx --namespace=kubectl-7902 -p {"metadata":{"annotations":{"x":"y"}}}' May 26 00:11:19.129: INFO: stderr: "" May 26 00:11:19.129: INFO: stdout: "pod/agnhost-master-qkgbx patched\n" STEP: checking annotations May 26 00:11:19.144: INFO: Selector matched 1 pods for map[app:agnhost] May 26 00:11:19.144: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:11:19.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7902" for this suite. 
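The kubectl patch above sends a strategic-merge body; the same annotation patch can be issued programmatically. A sketch reusing the payload from the log (namespace and pod name are parameters):

    // Applies {"metadata":{"annotations":{"x":"y"}}} to a pod as a
    // strategic-merge patch, the API call behind `kubectl patch pod`.
    package sketches

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    func annotatePod(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
        patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
        _, err := cs.CoreV1().Pods(ns).Patch(ctx, name, types.StrategicMergePatchType,
            patch, metav1.PatchOptions{})
        return err
    }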
• [SLOW TEST:8.689 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1468 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":288,"completed":111,"skipped":1943,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:11:19.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:11:19.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3585" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":288,"completed":112,"skipped":1962,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:11:19.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:11:19.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2312" for this suite. 
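The QOS-class spec above passes because requests equal limits for every resource of every container, which is exactly the condition for the Guaranteed class. A sketch (name, image, and quantities hypothetical):

    // Requests == limits for cpu and memory on the only container, so the
    // API server sets status.qosClass to Guaranteed.
    package sketches

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func guaranteedPod() *corev1.Pod {
        rl := corev1.ResourceList{
            corev1.ResourceCPU:    resource.MustParse("100m"),
            corev1.ResourceMemory: resource.MustParse("128Mi"),
        }
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "sketch-guaranteed", Namespace: "default"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:      "app",
                    Image:     "k8s.gcr.io/pause:3.2",
                    Resources: corev1.ResourceRequirements{Requests: rl, Limits: rl},
                }},
            },
        }
    }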
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":288,"completed":113,"skipped":1988,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:11:19.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 26 00:11:19.695: INFO: Waiting up to 5m0s for pod "downwardapi-volume-67f126d6-f8a6-427e-972b-66031bcf1916" in namespace "projected-9759" to be "Succeeded or Failed" May 26 00:11:19.710: INFO: Pod "downwardapi-volume-67f126d6-f8a6-427e-972b-66031bcf1916": Phase="Pending", Reason="", readiness=false. Elapsed: 15.284658ms May 26 00:11:21.975: INFO: Pod "downwardapi-volume-67f126d6-f8a6-427e-972b-66031bcf1916": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28014278s May 26 00:11:23.980: INFO: Pod "downwardapi-volume-67f126d6-f8a6-427e-972b-66031bcf1916": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.284751323s STEP: Saw pod success May 26 00:11:23.980: INFO: Pod "downwardapi-volume-67f126d6-f8a6-427e-972b-66031bcf1916" satisfied condition "Succeeded or Failed" May 26 00:11:23.983: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-67f126d6-f8a6-427e-972b-66031bcf1916 container client-container: STEP: delete the pod May 26 00:11:24.025: INFO: Waiting for pod downwardapi-volume-67f126d6-f8a6-427e-972b-66031bcf1916 to disappear May 26 00:11:24.034: INFO: Pod downwardapi-volume-67f126d6-f8a6-427e-972b-66031bcf1916 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:11:24.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9759" for this suite. 
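The projected downwardAPI spec above surfaces the container's memory limit as a file in a projected volume, which the container then reads back. A sketch of the volume wiring; the names, paths, and limit value are hypothetical:

    // Projects limits.memory of container "client-container" into
    // /etc/podinfo/memory_limit, which the container reads and prints.
    package sketches

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func memoryLimitPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "sketch-downward", Namespace: "default"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                DownwardAPI: &corev1.DownwardAPIProjection{
                                    Items: []corev1.DownwardAPIVolumeFile{{
                                        Path: "memory_limit",
                                        ResourceFieldRef: &corev1.ResourceFieldSelector{
                                            ContainerName: "client-container",
                                            Resource:      "limits.memory",
                                        },
                                    }},
                                },
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox:1.29",
                    Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
                    Resources: corev1.ResourceRequirements{
                        Limits: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("128Mi")},
                    },
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
            },
        }
    }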
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":114,"skipped":2055,"failed":0} SS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:11:24.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 26 00:11:24.147: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:11:28.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2974" for this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":288,"completed":115,"skipped":2057,"failed":0} SSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:11:28.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 26 00:11:36.473: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 26 00:11:36.483: INFO: Pod pod-with-poststart-http-hook still exists May 26 00:11:38.483: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 26 00:11:38.521: INFO: Pod pod-with-poststart-http-hook still exists May 26 00:11:40.483: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 26 00:11:40.486: INFO: Pod pod-with-poststart-http-hook still exists May 26 00:11:42.483: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 26 00:11:42.487: INFO: Pod pod-with-poststart-http-hook still exists May 26 00:11:44.483: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 26 00:11:44.488: INFO: Pod pod-with-poststart-http-hook still exists May 26 00:11:46.483: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 26 00:11:46.488: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:11:46.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2342" for this suite. • [SLOW TEST:18.263 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":288,"completed":116,"skipped":2061,"failed":0} SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:11:46.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 26 00:11:46.615: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 26 00:11:46.637: INFO: Waiting for terminating namespaces to be deleted... 
May 26 00:11:46.640: INFO: Logging pods the apiserver thinks are on node latest-worker before test May 26 00:11:46.646: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 26 00:11:46.646: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 26 00:11:46.646: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 26 00:11:46.646: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 26 00:11:46.646: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 26 00:11:46.646: INFO: Container kindnet-cni ready: true, restart count 0 May 26 00:11:46.646: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 26 00:11:46.646: INFO: Container kube-proxy ready: true, restart count 0 May 26 00:11:46.646: INFO: pod-logs-websocket-82e2e2ed-083f-4c02-a877-9e48237ce1de from pods-2974 started at 2020-05-26 00:11:24 +0000 UTC (1 container statuses recorded) May 26 00:11:46.646: INFO: Container main ready: true, restart count 0 May 26 00:11:46.646: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test May 26 00:11:46.652: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 26 00:11:46.652: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 26 00:11:46.652: INFO: pod-handle-http-request from container-lifecycle-hook-2342 started at 2020-05-26 00:11:28 +0000 UTC (1 container statuses recorded) May 26 00:11:46.652: INFO: Container pod-handle-http-request ready: true, restart count 0 May 26 00:11:46.652: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) May 26 00:11:46.652: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 26 00:11:46.652: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 26 00:11:46.652: INFO: Container kindnet-cni ready: true, restart count 0 May 26 00:11:46.652: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 26 00:11:46.652: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-652f52f0-158a-47a6-9586-5f583cf6a918 95 STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expect it to be scheduled STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect it not to be scheduled STEP: removing the label kubernetes.io/e2e-652f52f0-158a-47a6-9586-5f583cf6a918 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-652f52f0-158a-47a6-9586-5f583cf6a918 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:16:54.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7495" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.370 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":288,"completed":117,"skipped":2066,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:16:54.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 26 00:16:55.457: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 26 00:16:57.467: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726049015, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726049015, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726049015, loc:(*time.Location)(0x7c342a0)}},
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726049015, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 00:16:59.472: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726049015, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726049015, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726049015, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726049015, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 00:17:02.499: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 26 00:17:06.645: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config attach --namespace=webhook-2580 to-be-attached-pod -i -c=container1' May 26 00:17:06.781: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:17:06.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2580" for this suite. STEP: Destroying namespace "webhook-2580-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.103 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":288,"completed":118,"skipped":2069,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:17:06.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:17:23.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3396" for this suite. • [SLOW TEST:16.445 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":288,"completed":119,"skipped":2072,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:17:23.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-1082 STEP: creating replication controller nodeport-test in namespace services-1082 I0526 00:17:23.624487 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-1082, replica count: 2 I0526 00:17:26.674881 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0526 00:17:29.675099 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 26 00:17:29.675: INFO: Creating new exec pod May 26 00:17:34.701: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1082 execpod4bcdj -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 26 00:17:34.987: INFO: stderr: "I0526 00:17:34.826144 1328 log.go:172] (0xc000b4b550) (0xc000b428c0) Create stream\nI0526 00:17:34.826181 1328 log.go:172] (0xc000b4b550) (0xc000b428c0) Stream added, broadcasting: 1\nI0526 00:17:34.828525 1328 log.go:172] (0xc000b4b550) Reply frame received for 1\nI0526 00:17:34.828566 1328 log.go:172] (0xc000b4b550) (0xc000250f00) Create stream\nI0526 00:17:34.828588 1328 log.go:172] (0xc000b4b550) (0xc000250f00) Stream added, broadcasting: 3\nI0526 00:17:34.829952 1328 log.go:172] (0xc000b4b550) Reply frame received for 3\nI0526 00:17:34.830008 1328 log.go:172] (0xc000b4b550) (0xc000384e60) Create stream\nI0526 00:17:34.830030 1328 log.go:172] (0xc000b4b550) (0xc000384e60) Stream added, broadcasting: 5\nI0526 00:17:34.830961 1328 log.go:172] (0xc000b4b550) Reply frame received for 5\nI0526 00:17:34.966293 1328 log.go:172] (0xc000b4b550) Data frame received for 5\nI0526 00:17:34.966317 1328 log.go:172] (0xc000384e60) (5) Data frame handling\nI0526 00:17:34.966332 1328 log.go:172] (0xc000384e60) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0526 00:17:34.980379 1328 log.go:172] (0xc000b4b550) Data frame received for 5\nI0526 00:17:34.980398 1328 log.go:172] (0xc000384e60) (5) Data frame handling\nI0526 00:17:34.980408 1328 log.go:172] (0xc000384e60) (5) Data frame sent\nI0526 00:17:34.980414 1328 log.go:172] (0xc000b4b550) Data frame received for 5\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0526 00:17:34.980422 1328 log.go:172] (0xc000384e60) (5) Data frame handling\nI0526 00:17:34.980454 1328 log.go:172] (0xc000b4b550) Data frame received for 3\nI0526 00:17:34.980461 1328 
log.go:172] (0xc000250f00) (3) Data frame handling\nI0526 00:17:34.982825 1328 log.go:172] (0xc000b4b550) Data frame received for 1\nI0526 00:17:34.982848 1328 log.go:172] (0xc000b428c0) (1) Data frame handling\nI0526 00:17:34.982874 1328 log.go:172] (0xc000b428c0) (1) Data frame sent\nI0526 00:17:34.982888 1328 log.go:172] (0xc000b4b550) (0xc000b428c0) Stream removed, broadcasting: 1\nI0526 00:17:34.982903 1328 log.go:172] (0xc000b4b550) Go away received\nI0526 00:17:34.983317 1328 log.go:172] (0xc000b4b550) (0xc000b428c0) Stream removed, broadcasting: 1\nI0526 00:17:34.983344 1328 log.go:172] (0xc000b4b550) (0xc000250f00) Stream removed, broadcasting: 3\nI0526 00:17:34.983359 1328 log.go:172] (0xc000b4b550) (0xc000384e60) Stream removed, broadcasting: 5\n" May 26 00:17:34.987: INFO: stdout: "" May 26 00:17:34.988: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1082 execpod4bcdj -- /bin/sh -x -c nc -zv -t -w 2 10.111.138.244 80' May 26 00:17:35.180: INFO: stderr: "I0526 00:17:35.099548 1348 log.go:172] (0xc000abf3f0) (0xc000a14460) Create stream\nI0526 00:17:35.099599 1348 log.go:172] (0xc000abf3f0) (0xc000a14460) Stream added, broadcasting: 1\nI0526 00:17:35.104441 1348 log.go:172] (0xc000abf3f0) Reply frame received for 1\nI0526 00:17:35.104480 1348 log.go:172] (0xc000abf3f0) (0xc0005b2280) Create stream\nI0526 00:17:35.104491 1348 log.go:172] (0xc000abf3f0) (0xc0005b2280) Stream added, broadcasting: 3\nI0526 00:17:35.105809 1348 log.go:172] (0xc000abf3f0) Reply frame received for 3\nI0526 00:17:35.105839 1348 log.go:172] (0xc000abf3f0) (0xc00052adc0) Create stream\nI0526 00:17:35.105850 1348 log.go:172] (0xc000abf3f0) (0xc00052adc0) Stream added, broadcasting: 5\nI0526 00:17:35.106778 1348 log.go:172] (0xc000abf3f0) Reply frame received for 5\nI0526 00:17:35.172726 1348 log.go:172] (0xc000abf3f0) Data frame received for 3\nI0526 00:17:35.172775 1348 log.go:172] (0xc0005b2280) (3) Data frame handling\nI0526 00:17:35.172813 1348 log.go:172] (0xc000abf3f0) Data frame received for 5\nI0526 00:17:35.172835 1348 log.go:172] (0xc00052adc0) (5) Data frame handling\nI0526 00:17:35.172850 1348 log.go:172] (0xc00052adc0) (5) Data frame sent\nI0526 00:17:35.172862 1348 log.go:172] (0xc000abf3f0) Data frame received for 5\nI0526 00:17:35.172872 1348 log.go:172] (0xc00052adc0) (5) Data frame handling\n+ nc -zv -t -w 2 10.111.138.244 80\nConnection to 10.111.138.244 80 port [tcp/http] succeeded!\nI0526 00:17:35.174957 1348 log.go:172] (0xc000abf3f0) Data frame received for 1\nI0526 00:17:35.174981 1348 log.go:172] (0xc000a14460) (1) Data frame handling\nI0526 00:17:35.174994 1348 log.go:172] (0xc000a14460) (1) Data frame sent\nI0526 00:17:35.175008 1348 log.go:172] (0xc000abf3f0) (0xc000a14460) Stream removed, broadcasting: 1\nI0526 00:17:35.175025 1348 log.go:172] (0xc000abf3f0) Go away received\nI0526 00:17:35.175484 1348 log.go:172] (0xc000abf3f0) (0xc000a14460) Stream removed, broadcasting: 1\nI0526 00:17:35.175508 1348 log.go:172] (0xc000abf3f0) (0xc0005b2280) Stream removed, broadcasting: 3\nI0526 00:17:35.175531 1348 log.go:172] (0xc000abf3f0) (0xc00052adc0) Stream removed, broadcasting: 5\n" May 26 00:17:35.180: INFO: stdout: "" May 26 00:17:35.180: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1082 execpod4bcdj -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31419' May 26 00:17:35.409: INFO: stderr: "I0526 00:17:35.322607 1368 
log.go:172] (0xc0009d73f0) (0xc000b90320) Create stream\nI0526 00:17:35.322659 1368 log.go:172] (0xc0009d73f0) (0xc000b90320) Stream added, broadcasting: 1\nI0526 00:17:35.328091 1368 log.go:172] (0xc0009d73f0) Reply frame received for 1\nI0526 00:17:35.328140 1368 log.go:172] (0xc0009d73f0) (0xc000870780) Create stream\nI0526 00:17:35.328153 1368 log.go:172] (0xc0009d73f0) (0xc000870780) Stream added, broadcasting: 3\nI0526 00:17:35.329320 1368 log.go:172] (0xc0009d73f0) Reply frame received for 3\nI0526 00:17:35.329378 1368 log.go:172] (0xc0009d73f0) (0xc0008710e0) Create stream\nI0526 00:17:35.329398 1368 log.go:172] (0xc0009d73f0) (0xc0008710e0) Stream added, broadcasting: 5\nI0526 00:17:35.330435 1368 log.go:172] (0xc0009d73f0) Reply frame received for 5\nI0526 00:17:35.402053 1368 log.go:172] (0xc0009d73f0) Data frame received for 5\nI0526 00:17:35.402103 1368 log.go:172] (0xc0008710e0) (5) Data frame handling\nI0526 00:17:35.402124 1368 log.go:172] (0xc0008710e0) (5) Data frame sent\nI0526 00:17:35.402138 1368 log.go:172] (0xc0009d73f0) Data frame received for 5\nI0526 00:17:35.402150 1368 log.go:172] (0xc0008710e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31419\nConnection to 172.17.0.13 31419 port [tcp/31419] succeeded!\nI0526 00:17:35.402212 1368 log.go:172] (0xc0009d73f0) Data frame received for 3\nI0526 00:17:35.402316 1368 log.go:172] (0xc000870780) (3) Data frame handling\nI0526 00:17:35.403505 1368 log.go:172] (0xc0009d73f0) Data frame received for 1\nI0526 00:17:35.403525 1368 log.go:172] (0xc000b90320) (1) Data frame handling\nI0526 00:17:35.403535 1368 log.go:172] (0xc000b90320) (1) Data frame sent\nI0526 00:17:35.403552 1368 log.go:172] (0xc0009d73f0) (0xc000b90320) Stream removed, broadcasting: 1\nI0526 00:17:35.403650 1368 log.go:172] (0xc0009d73f0) Go away received\nI0526 00:17:35.403857 1368 log.go:172] (0xc0009d73f0) (0xc000b90320) Stream removed, broadcasting: 1\nI0526 00:17:35.403921 1368 log.go:172] (0xc0009d73f0) (0xc000870780) Stream removed, broadcasting: 3\nI0526 00:17:35.403936 1368 log.go:172] (0xc0009d73f0) (0xc0008710e0) Stream removed, broadcasting: 5\n" May 26 00:17:35.409: INFO: stdout: "" May 26 00:17:35.409: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1082 execpod4bcdj -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31419' May 26 00:17:35.607: INFO: stderr: "I0526 00:17:35.539526 1388 log.go:172] (0xc0006fc160) (0xc0004ecf00) Create stream\nI0526 00:17:35.539586 1388 log.go:172] (0xc0006fc160) (0xc0004ecf00) Stream added, broadcasting: 1\nI0526 00:17:35.542661 1388 log.go:172] (0xc0006fc160) Reply frame received for 1\nI0526 00:17:35.542699 1388 log.go:172] (0xc0006fc160) (0xc0000dd180) Create stream\nI0526 00:17:35.542712 1388 log.go:172] (0xc0006fc160) (0xc0000dd180) Stream added, broadcasting: 3\nI0526 00:17:35.543970 1388 log.go:172] (0xc0006fc160) Reply frame received for 3\nI0526 00:17:35.544032 1388 log.go:172] (0xc0006fc160) (0xc00013b860) Create stream\nI0526 00:17:35.544047 1388 log.go:172] (0xc0006fc160) (0xc00013b860) Stream added, broadcasting: 5\nI0526 00:17:35.545318 1388 log.go:172] (0xc0006fc160) Reply frame received for 5\nI0526 00:17:35.599421 1388 log.go:172] (0xc0006fc160) Data frame received for 3\nI0526 00:17:35.599444 1388 log.go:172] (0xc0000dd180) (3) Data frame handling\nI0526 00:17:35.599463 1388 log.go:172] (0xc0006fc160) Data frame received for 5\nI0526 00:17:35.599470 1388 log.go:172] (0xc00013b860) (5) Data frame 
handling\nI0526 00:17:35.599479 1388 log.go:172] (0xc00013b860) (5) Data frame sent\nI0526 00:17:35.599487 1388 log.go:172] (0xc0006fc160) Data frame received for 5\nI0526 00:17:35.599493 1388 log.go:172] (0xc00013b860) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31419\nConnection to 172.17.0.12 31419 port [tcp/31419] succeeded!\nI0526 00:17:35.601259 1388 log.go:172] (0xc0006fc160) Data frame received for 1\nI0526 00:17:35.601290 1388 log.go:172] (0xc0004ecf00) (1) Data frame handling\nI0526 00:17:35.601313 1388 log.go:172] (0xc0004ecf00) (1) Data frame sent\nI0526 00:17:35.601502 1388 log.go:172] (0xc0006fc160) (0xc0004ecf00) Stream removed, broadcasting: 1\nI0526 00:17:35.601622 1388 log.go:172] (0xc0006fc160) Go away received\nI0526 00:17:35.601867 1388 log.go:172] (0xc0006fc160) (0xc0004ecf00) Stream removed, broadcasting: 1\nI0526 00:17:35.601881 1388 log.go:172] (0xc0006fc160) (0xc0000dd180) Stream removed, broadcasting: 3\nI0526 00:17:35.601889 1388 log.go:172] (0xc0006fc160) (0xc00013b860) Stream removed, broadcasting: 5\n" May 26 00:17:35.607: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:17:35.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1082" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:12.198 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":288,"completed":120,"skipped":2079,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:17:35.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-h24r STEP: Creating a pod to test atomic-volume-subpath May 26 00:17:35.770: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-h24r" in namespace "subpath-6445" to be "Succeeded or Failed" May 26 00:17:35.774: INFO: Pod "pod-subpath-test-configmap-h24r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081446ms May 26 00:17:37.815: INFO: Pod "pod-subpath-test-configmap-h24r": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.045579751s May 26 00:17:39.819: INFO: Pod "pod-subpath-test-configmap-h24r": Phase="Running", Reason="", readiness=true. Elapsed: 4.049559403s May 26 00:17:41.828: INFO: Pod "pod-subpath-test-configmap-h24r": Phase="Running", Reason="", readiness=true. Elapsed: 6.05859483s May 26 00:17:43.832: INFO: Pod "pod-subpath-test-configmap-h24r": Phase="Running", Reason="", readiness=true. Elapsed: 8.062836931s May 26 00:17:45.837: INFO: Pod "pod-subpath-test-configmap-h24r": Phase="Running", Reason="", readiness=true. Elapsed: 10.067727396s May 26 00:17:47.842: INFO: Pod "pod-subpath-test-configmap-h24r": Phase="Running", Reason="", readiness=true. Elapsed: 12.072662765s May 26 00:17:49.846: INFO: Pod "pod-subpath-test-configmap-h24r": Phase="Running", Reason="", readiness=true. Elapsed: 14.076373365s May 26 00:17:51.850: INFO: Pod "pod-subpath-test-configmap-h24r": Phase="Running", Reason="", readiness=true. Elapsed: 16.080814725s May 26 00:17:53.855: INFO: Pod "pod-subpath-test-configmap-h24r": Phase="Running", Reason="", readiness=true. Elapsed: 18.085137557s May 26 00:17:55.859: INFO: Pod "pod-subpath-test-configmap-h24r": Phase="Running", Reason="", readiness=true. Elapsed: 20.089566593s May 26 00:17:57.863: INFO: Pod "pod-subpath-test-configmap-h24r": Phase="Running", Reason="", readiness=true. Elapsed: 22.093705382s May 26 00:17:59.867: INFO: Pod "pod-subpath-test-configmap-h24r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.097306347s STEP: Saw pod success May 26 00:17:59.867: INFO: Pod "pod-subpath-test-configmap-h24r" satisfied condition "Succeeded or Failed" May 26 00:17:59.869: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-h24r container test-container-subpath-configmap-h24r: STEP: delete the pod May 26 00:17:59.916: INFO: Waiting for pod pod-subpath-test-configmap-h24r to disappear May 26 00:17:59.931: INFO: Pod pod-subpath-test-configmap-h24r no longer exists STEP: Deleting pod pod-subpath-test-configmap-h24r May 26 00:17:59.931: INFO: Deleting pod "pod-subpath-test-configmap-h24r" in namespace "subpath-6445" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:17:59.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6445" for this suite. 
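The subpath pod above only reaches Succeeded if the kubelet can bind-mount a single ConfigMap key over a file that already exists in the container image, leaving the rest of the directory untouched. A minimal Go sketch of the volume wiring this exercises, using k8s.io/api/core/v1 types; the volume, ConfigMap, and file names are illustrative rather than the generated ones in the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A ConfigMap-backed volume; only one of its keys is consumed below.
	vol := corev1.Volume{
		Name: "config", // illustrative volume name
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"},
			},
		},
	}
	// subPath mounts a single key over one pre-existing file instead of
	// shadowing the whole mount directory.
	mount := corev1.VolumeMount{
		Name:      "config",
		MountPath: "/etc/hostname", // existing file being replaced
		SubPath:   "hostname",      // one key from the ConfigMap
	}
	fmt.Printf("volume: %+v\nmount: %+v\n", vol, mount)
}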
• [SLOW TEST:24.327 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":288,"completed":121,"skipped":2115,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:17:59.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 26 00:18:00.593: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 26 00:18:02.618: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726049080, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726049080, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726049080, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726049080, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 00:18:05.720: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 26 00:18:05.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5330-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource 
while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:18:07.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3518" for this suite. STEP: Destroying namespace "webhook-3518-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.146 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":288,"completed":122,"skipped":2124,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:18:07.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 26 00:18:07.174: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 26 00:18:07.200: INFO: Waiting for terminating namespaces to be deleted... 
May 26 00:18:07.203: INFO: Logging pods the apiserver thinks are on node latest-worker before test May 26 00:18:07.207: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container status recorded) May 26 00:18:07.207: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 26 00:18:07.207: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container status recorded) May 26 00:18:07.207: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 26 00:18:07.207: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 26 00:18:07.207: INFO: Container kindnet-cni ready: true, restart count 0 May 26 00:18:07.207: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 26 00:18:07.207: INFO: Container kube-proxy ready: true, restart count 0 May 26 00:18:07.207: INFO: sample-webhook-deployment-75dd644756-ctnh4 from webhook-3518 started at 2020-05-26 00:18:00 +0000 UTC (1 container status recorded) May 26 00:18:07.207: INFO: Container sample-webhook ready: true, restart count 0 May 26 00:18:07.207: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test May 26 00:18:07.212: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container status recorded) May 26 00:18:07.212: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 26 00:18:07.212: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container status recorded) May 26 00:18:07.212: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 26 00:18:07.212: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 26 00:18:07.212: INFO: Container kindnet-cni ready: true, restart count 0 May 26 00:18:07.212: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 26 00:18:07.212: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node.
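The pod1/pod2/pod3 steps that follow differ only in hostIP and protocol, which is the point of the test: the scheduler's port-conflict predicate keys on the (hostIP, hostPort, protocol) triple, not on hostPort alone. A sketch of the container-port declarations involved, assuming k8s.io/api/core/v1 types (the containerPort value is illustrative; the hostPort, hostIP, and protocol values are the ones in the steps below):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Three declarations that can coexist on one node: the hostPort is
	// shared, but no (hostIP, hostPort, protocol) triple collides.
	ports := []corev1.ContainerPort{
		{ContainerPort: 8080, HostPort: 54321, HostIP: "127.0.0.1", Protocol: corev1.ProtocolTCP}, // pod1
		{ContainerPort: 8080, HostPort: 54321, HostIP: "127.0.0.2", Protocol: corev1.ProtocolTCP}, // pod2
		{ContainerPort: 8080, HostPort: 54321, HostIP: "127.0.0.2", Protocol: corev1.ProtocolUDP}, // pod3
	}
	for _, p := range ports {
		fmt.Printf("%s %s:%d\n", p.Protocol, p.HostIP, p.HostPort)
	}
}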
STEP: verifying the node has the label kubernetes.io/e2e-fe18e962-abf9-4353-8aeb-e6da5cf36140 90 STEP: Trying to create a pod (pod1) with hostPort 54321 and hostIP 127.0.0.1 and expect it to be scheduled STEP: Trying to create another pod (pod2) with hostPort 54321 but hostIP 127.0.0.2 on the node where pod1 resides and expect it to be scheduled STEP: Trying to create a third pod (pod3) with hostPort 54321, hostIP 127.0.0.2 but using UDP protocol on the node where pod2 resides STEP: removing the label kubernetes.io/e2e-fe18e962-abf9-4353-8aeb-e6da5cf36140 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-fe18e962-abf9-4353-8aeb-e6da5cf36140 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:18:23.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6370" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.513 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":288,"completed":123,"skipped":2155,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:18:23.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 26 00:18:23.655: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 26 00:18:26.611: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5028 create -f -' May 26 00:18:33.597: INFO: stderr: "" May 26 00:18:33.597: INFO: stdout: "e2e-test-crd-publish-openapi-7415-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 26 00:18:33.597: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5028 delete e2e-test-crd-publish-openapi-7415-crds test-cr' May 26 00:18:33.748: INFO: stderr: "" May 26 00:18:33.748: INFO: stdout:
"e2e-test-crd-publish-openapi-7415-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" May 26 00:18:33.748: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5028 apply -f -' May 26 00:18:36.816: INFO: stderr: "" May 26 00:18:36.817: INFO: stdout: "e2e-test-crd-publish-openapi-7415-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 26 00:18:36.817: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5028 delete e2e-test-crd-publish-openapi-7415-crds test-cr' May 26 00:18:36.925: INFO: stderr: "" May 26 00:18:36.925: INFO: stdout: "e2e-test-crd-publish-openapi-7415-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 26 00:18:36.925: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7415-crds' May 26 00:18:38.107: INFO: stderr: "" May 26 00:18:38.107: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7415-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:18:41.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5028" for this suite. • [SLOW TEST:17.415 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":288,"completed":124,"skipped":2161,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:18:41.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 26 00:18:41.723: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 26 00:18:43.734: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726049121, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726049121, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726049121, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726049121, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 00:18:46.783: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 26 00:18:46.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-643-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:18:47.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2824" for this suite. STEP: Destroying namespace "webhook-2824-markers" for this suite. 
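Every AdmissionWebhook case in this run follows the same shape: deploy the webhook server, wait for its deployment and service (the status dumps above), then register it via the AdmissionRegistration API. A hedged sketch of such a registration using k8s.io/api/admissionregistration/v1; the names, namespace, path, and CA bundle below are placeholders, not the framework's generated values:

package main

import (
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	sideEffects := admissionregistrationv1.SideEffectClassNone
	failurePolicy := admissionregistrationv1.Fail
	path := "/mutate" // illustrative endpoint on the webhook service
	cfg := admissionregistrationv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "example-mutating-webhook"},
		Webhooks: []admissionregistrationv1.MutatingWebhook{{
			Name: "crd-mutate.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-ns",       // placeholder namespace
					Name:      "e2e-test-webhook", // service name used in the log
					Path:      &path,
				},
				CABundle: []byte("<pem>"), // cert from the "Setting up server cert" step
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{"webhook.example.com"},
					APIVersions: []string{"v1"},
					Resources:   []string{"e2e-test-webhook-crds"}, // illustrative CRD plural
				},
			}},
			SideEffects:             &sideEffects,
			FailurePolicy:           &failurePolicy,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	fmt.Println(cfg.Name)
}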
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.141 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":288,"completed":125,"skipped":2184,"failed":0} [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:18:48.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server May 26 00:18:48.280: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:18:48.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4186" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":288,"completed":126,"skipped":2184,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:18:48.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod May 26 00:18:52.929: INFO: Pod pod-hostip-bd41e9a2-9ae1-4e27-90a1-318bde05ec8d has hostIP: 172.17.0.13 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:18:52.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6308" for this suite. 
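The host-IP assertion above reduces to fetching the pod and checking that status.hostIP is populated once it is running. A rough client-go equivalent, assuming a client-go release with context-taking getters (v0.18 or later); the namespace and pod name are copied from the log purely for illustration:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite logs above.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	// Read back the node IP the kubelet reported for the pod; the test
	// asserts this field is set once the pod is running.
	pod, err := clientset.CoreV1().Pods("pods-6308").Get(context.TODO(),
		"pod-hostip-bd41e9a2-9ae1-4e27-90a1-318bde05ec8d", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("hostIP:", pod.Status.HostIP)
}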
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":288,"completed":127,"skipped":2202,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:18:52.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 26 00:18:53.091: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:18:54.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1510" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":288,"completed":128,"skipped":2206,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:18:54.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-4112 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-4112 STEP: Creating statefulset with conflicting port in namespace statefulset-4112 STEP: Waiting until pod test-pod will start running in namespace statefulset-4112 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-4112 May 26 00:19:00.393: INFO: Observed stateful pod in namespace: statefulset-4112, name: ss-0, uid: e516944e-a31c-41bc-bde1-51b763cde562, status phase: Pending. Waiting for statefulset controller to delete. 
May 26 00:19:00.944: INFO: Observed stateful pod in namespace: statefulset-4112, name: ss-0, uid: e516944e-a31c-41bc-bde1-51b763cde562, status phase: Failed. Waiting for statefulset controller to delete. May 26 00:19:00.971: INFO: Observed stateful pod in namespace: statefulset-4112, name: ss-0, uid: e516944e-a31c-41bc-bde1-51b763cde562, status phase: Failed. Waiting for statefulset controller to delete. May 26 00:19:00.986: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4112 STEP: Removing pod with conflicting port in namespace statefulset-4112 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-4112 and is in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 26 00:19:05.119: INFO: Deleting all statefulsets in ns statefulset-4112 May 26 00:19:05.123: INFO: Scaling statefulset ss to 0 May 26 00:19:15.415: INFO: Waiting for statefulset status.replicas updated to 0 May 26 00:19:15.418: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:19:15.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4112" for this suite. • [SLOW TEST:21.358 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":288,"completed":129,"skipped":2226,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:19:15.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 26 00:19:16.568: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created May 26 00:19:18.580: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726049156,
loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726049156, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726049156, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726049156, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 00:19:21.626: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:19:21.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1591" for this suite. STEP: Destroying namespace "webhook-1591-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.451 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":288,"completed":130,"skipped":2229,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:19:21.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-7d310062-c224-43f3-a07e-79b48b587a6a STEP: Creating a pod to test consume configMaps May 26 00:19:22.099: INFO: Waiting up to 5m0s for pod 
"pod-configmaps-0c9ef399-8142-4366-9e74-1108ace064c2" in namespace "configmap-6222" to be "Succeeded or Failed" May 26 00:19:22.117: INFO: Pod "pod-configmaps-0c9ef399-8142-4366-9e74-1108ace064c2": Phase="Pending", Reason="", readiness=false. Elapsed: 18.051584ms May 26 00:19:24.121: INFO: Pod "pod-configmaps-0c9ef399-8142-4366-9e74-1108ace064c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021631842s May 26 00:19:26.125: INFO: Pod "pod-configmaps-0c9ef399-8142-4366-9e74-1108ace064c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026189842s STEP: Saw pod success May 26 00:19:26.125: INFO: Pod "pod-configmaps-0c9ef399-8142-4366-9e74-1108ace064c2" satisfied condition "Succeeded or Failed" May 26 00:19:26.128: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-0c9ef399-8142-4366-9e74-1108ace064c2 container configmap-volume-test: STEP: delete the pod May 26 00:19:26.199: INFO: Waiting for pod pod-configmaps-0c9ef399-8142-4366-9e74-1108ace064c2 to disappear May 26 00:19:26.207: INFO: Pod pod-configmaps-0c9ef399-8142-4366-9e74-1108ace064c2 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:19:26.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6222" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":131,"skipped":2247,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:19:26.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command May 26 00:19:26.340: INFO: Waiting up to 5m0s for pod "client-containers-26f0c9d8-5f15-43ad-b11e-e52aa729febc" in namespace "containers-7944" to be "Succeeded or Failed" May 26 00:19:26.368: INFO: Pod "client-containers-26f0c9d8-5f15-43ad-b11e-e52aa729febc": Phase="Pending", Reason="", readiness=false. Elapsed: 28.06589ms May 26 00:19:28.373: INFO: Pod "client-containers-26f0c9d8-5f15-43ad-b11e-e52aa729febc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032879404s May 26 00:19:30.378: INFO: Pod "client-containers-26f0c9d8-5f15-43ad-b11e-e52aa729febc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.037635575s STEP: Saw pod success May 26 00:19:30.378: INFO: Pod "client-containers-26f0c9d8-5f15-43ad-b11e-e52aa729febc" satisfied condition "Succeeded or Failed" May 26 00:19:30.381: INFO: Trying to get logs from node latest-worker pod client-containers-26f0c9d8-5f15-43ad-b11e-e52aa729febc container test-container: STEP: delete the pod May 26 00:19:30.419: INFO: Waiting for pod client-containers-26f0c9d8-5f15-43ad-b11e-e52aa729febc to disappear May 26 00:19:30.423: INFO: Pod client-containers-26f0c9d8-5f15-43ad-b11e-e52aa729febc no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:19:30.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7944" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":288,"completed":132,"skipped":2282,"failed":0} SSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:19:30.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 26 00:19:30.530: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. 
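Registering the sample API server means creating an apiregistration.k8s.io/v1 APIService that tells the aggregation layer to proxy one group/version to an in-cluster service. A sketch assuming the 1.17 sample-apiserver's wardle.example.com/v1alpha1 group, as that sample serves; the service name and CA bundle are placeholders:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	apiregistrationv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
)

func main() {
	port := int32(443)
	apiService := apiregistrationv1.APIService{
		// The object name must be "<version>.<group>" for the delegated API.
		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"},
		Spec: apiregistrationv1.APIServiceSpec{
			Group:   "wardle.example.com",
			Version: "v1alpha1",
			Service: &apiregistrationv1.ServiceReference{
				Namespace: "aggregator-4060", // the suite's generated namespace
				Name:      "sample-api",      // placeholder service name
				Port:      &port,
			},
			CABundle:             []byte("<pem>"), // cert the sample server presents
			GroupPriorityMinimum: 2000,
			VersionPriority:      200,
		},
	}
	fmt.Println(apiService.Name)
}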
May 26 00:19:31.233: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 26 00:19:33.539: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726049171, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726049171, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726049171, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726049171, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 00:19:35.589: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726049171, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726049171, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726049171, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726049171, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 00:19:38.169: INFO: Waited 621.205889ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:19:38.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-4060" for this suite. 
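The two v1.DeploymentStatus dumps above are successive polls of the sample-apiserver deployment; Available=False with reason MinimumReplicasUnavailable just means the single replica has not passed readiness yet. A simplified Go rendering of the kind of check these dumps feed, an approximation rather than the e2e framework's exact logic:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
)

// ready reports whether a rollout is done: the controller has observed the
// latest spec, the Available condition is not False, and nothing is
// unavailable.
func ready(d *appsv1.Deployment) bool {
	if d.Status.ObservedGeneration < d.Generation {
		return false // controller hasn't seen the latest spec yet
	}
	for _, c := range d.Status.Conditions {
		if c.Type == appsv1.DeploymentAvailable && c.Status != corev1.ConditionTrue {
			return false
		}
	}
	return d.Status.UnavailableReplicas == 0
}

func main() {
	// Reconstruct the situation in the dumps above: one updated replica,
	// not yet available.
	d := &appsv1.Deployment{}
	d.Generation = 1
	d.Status = appsv1.DeploymentStatus{
		ObservedGeneration:  1,
		Replicas:            1,
		UpdatedReplicas:     1,
		UnavailableReplicas: 1,
		Conditions: []appsv1.DeploymentCondition{{
			Type:   appsv1.DeploymentAvailable,
			Status: corev1.ConditionFalse,
			Reason: "MinimumReplicasUnavailable",
		}},
	}
	fmt.Println("ready:", ready(d)) // false until the replica reports Ready
}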
• [SLOW TEST:8.313 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":288,"completed":133,"skipped":2286,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:19:38.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-8568eb61-e850-494e-bbb1-dd1128bca8d2 STEP: Creating a pod to test consume secrets May 26 00:19:39.297: INFO: Waiting up to 5m0s for pod "pod-secrets-8312a7b7-8b6a-454c-a034-24ca07e0e21a" in namespace "secrets-2588" to be "Succeeded or Failed" May 26 00:19:39.302: INFO: Pod "pod-secrets-8312a7b7-8b6a-454c-a034-24ca07e0e21a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.078774ms May 26 00:19:41.313: INFO: Pod "pod-secrets-8312a7b7-8b6a-454c-a034-24ca07e0e21a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015868865s May 26 00:19:43.329: INFO: Pod "pod-secrets-8312a7b7-8b6a-454c-a034-24ca07e0e21a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032275379s STEP: Saw pod success May 26 00:19:43.329: INFO: Pod "pod-secrets-8312a7b7-8b6a-454c-a034-24ca07e0e21a" satisfied condition "Succeeded or Failed" May 26 00:19:43.332: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-8312a7b7-8b6a-454c-a034-24ca07e0e21a container secret-volume-test: STEP: delete the pod May 26 00:19:43.373: INFO: Waiting for pod pod-secrets-8312a7b7-8b6a-454c-a034-24ca07e0e21a to disappear May 26 00:19:43.386: INFO: Pod pod-secrets-8312a7b7-8b6a-454c-a034-24ca07e0e21a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:19:43.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2588" for this suite. 
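[Note] The defaultMode variant above boils down to mounting a secret volume with an explicit file mode and checking the content and permissions from inside the pod. A minimal client-go sketch of that shape; the names, image, and mode are illustrative assumptions (the suite uses its own mounttest image and generated names):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, ns := context.TODO(), "default" // assumed; the suite creates a fresh namespace per test
	mode := int32(0400)                  // the defaultMode under test
	if _, err := cs.CoreV1().Secrets(ns).Create(ctx, &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test"},
		StringData: map[string]string{"data-1": "value-1"},
	}, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					// DefaultMode applies to every projected file in the volume.
					Secret: &corev1.SecretVolumeSource{SecretName: "secret-test", DefaultMode: &mode},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox:1.29", // assumed
				Command:      []string{"sh", "-c", "stat -c %a /etc/secret-volume/data-1 && cat /etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}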
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":134,"skipped":2303,"failed":0} ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:19:43.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-b420c236-042d-48c5-9bc4-47cd7638d904 STEP: Creating secret with name secret-projected-all-test-volume-714018d9-beee-4a2e-9b1c-71aed9cf0691 STEP: Creating a pod to test Check all projections for projected volume plugin May 26 00:19:43.496: INFO: Waiting up to 5m0s for pod "projected-volume-70776c90-8896-4efe-8e6c-69d23bc0d3f7" in namespace "projected-1435" to be "Succeeded or Failed" May 26 00:19:43.515: INFO: Pod "projected-volume-70776c90-8896-4efe-8e6c-69d23bc0d3f7": Phase="Pending", Reason="", readiness=false. Elapsed: 19.023419ms May 26 00:19:45.522: INFO: Pod "projected-volume-70776c90-8896-4efe-8e6c-69d23bc0d3f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025605478s May 26 00:19:47.525: INFO: Pod "projected-volume-70776c90-8896-4efe-8e6c-69d23bc0d3f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029034149s STEP: Saw pod success May 26 00:19:47.525: INFO: Pod "projected-volume-70776c90-8896-4efe-8e6c-69d23bc0d3f7" satisfied condition "Succeeded or Failed" May 26 00:19:47.529: INFO: Trying to get logs from node latest-worker pod projected-volume-70776c90-8896-4efe-8e6c-69d23bc0d3f7 container projected-all-volume-test: STEP: delete the pod May 26 00:19:47.753: INFO: Waiting for pod projected-volume-70776c90-8896-4efe-8e6c-69d23bc0d3f7 to disappear May 26 00:19:47.775: INFO: Pod projected-volume-70776c90-8896-4efe-8e6c-69d23bc0d3f7 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:19:47.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1435" for this suite. 
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":288,"completed":135,"skipped":2303,"failed":0} SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:19:47.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-7844 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 26 00:19:48.008: INFO: Found 0 stateful pods, waiting for 3 May 26 00:19:58.014: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 26 00:19:58.014: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 26 00:19:58.014: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 26 00:20:08.014: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 26 00:20:08.014: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 26 00:20:08.014: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 26 00:20:08.025: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7844 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 26 00:20:08.279: INFO: stderr: "I0526 00:20:08.168678 1538 log.go:172] (0xc0009bd6b0) (0xc000ade140) Create stream\nI0526 00:20:08.168760 1538 log.go:172] (0xc0009bd6b0) (0xc000ade140) Stream added, broadcasting: 1\nI0526 00:20:08.174611 1538 log.go:172] (0xc0009bd6b0) Reply frame received for 1\nI0526 00:20:08.174764 1538 log.go:172] (0xc0009bd6b0) (0xc0006dbea0) Create stream\nI0526 00:20:08.174795 1538 log.go:172] (0xc0009bd6b0) (0xc0006dbea0) Stream added, broadcasting: 3\nI0526 00:20:08.175888 1538 log.go:172] (0xc0009bd6b0) Reply frame received for 3\nI0526 00:20:08.175951 1538 log.go:172] (0xc0009bd6b0) (0xc000548500) Create stream\nI0526 00:20:08.175967 1538 log.go:172] (0xc0009bd6b0) (0xc000548500) Stream added, broadcasting: 5\nI0526 00:20:08.177003 1538 log.go:172] (0xc0009bd6b0) Reply frame received for 5\nI0526 00:20:08.238356 1538 log.go:172] (0xc0009bd6b0) Data frame received for 5\nI0526 00:20:08.238386 1538 log.go:172] (0xc000548500) (5) Data frame handling\nI0526 00:20:08.238401 1538 log.go:172] (0xc000548500) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html 
/tmp/\nI0526 00:20:08.269773 1538 log.go:172] (0xc0009bd6b0) Data frame received for 3\nI0526 00:20:08.269799 1538 log.go:172] (0xc0006dbea0) (3) Data frame handling\nI0526 00:20:08.269818 1538 log.go:172] (0xc0006dbea0) (3) Data frame sent\nI0526 00:20:08.270048 1538 log.go:172] (0xc0009bd6b0) Data frame received for 3\nI0526 00:20:08.270073 1538 log.go:172] (0xc0006dbea0) (3) Data frame handling\nI0526 00:20:08.270207 1538 log.go:172] (0xc0009bd6b0) Data frame received for 5\nI0526 00:20:08.270228 1538 log.go:172] (0xc000548500) (5) Data frame handling\nI0526 00:20:08.272080 1538 log.go:172] (0xc0009bd6b0) Data frame received for 1\nI0526 00:20:08.272098 1538 log.go:172] (0xc000ade140) (1) Data frame handling\nI0526 00:20:08.272112 1538 log.go:172] (0xc000ade140) (1) Data frame sent\nI0526 00:20:08.272171 1538 log.go:172] (0xc0009bd6b0) (0xc000ade140) Stream removed, broadcasting: 1\nI0526 00:20:08.272263 1538 log.go:172] (0xc0009bd6b0) Go away received\nI0526 00:20:08.272581 1538 log.go:172] (0xc0009bd6b0) (0xc000ade140) Stream removed, broadcasting: 1\nI0526 00:20:08.272605 1538 log.go:172] (0xc0009bd6b0) (0xc0006dbea0) Stream removed, broadcasting: 3\nI0526 00:20:08.272619 1538 log.go:172] (0xc0009bd6b0) (0xc000548500) Stream removed, broadcasting: 5\n" May 26 00:20:08.279: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 26 00:20:08.279: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 26 00:20:18.314: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 26 00:20:28.349: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7844 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 26 00:20:28.570: INFO: stderr: "I0526 00:20:28.485334 1558 log.go:172] (0xc00003abb0) (0xc0006b35e0) Create stream\nI0526 00:20:28.485398 1558 log.go:172] (0xc00003abb0) (0xc0006b35e0) Stream added, broadcasting: 1\nI0526 00:20:28.488008 1558 log.go:172] (0xc00003abb0) Reply frame received for 1\nI0526 00:20:28.488039 1558 log.go:172] (0xc00003abb0) (0xc000664640) Create stream\nI0526 00:20:28.488055 1558 log.go:172] (0xc00003abb0) (0xc000664640) Stream added, broadcasting: 3\nI0526 00:20:28.488886 1558 log.go:172] (0xc00003abb0) Reply frame received for 3\nI0526 00:20:28.488916 1558 log.go:172] (0xc00003abb0) (0xc000664f00) Create stream\nI0526 00:20:28.488936 1558 log.go:172] (0xc00003abb0) (0xc000664f00) Stream added, broadcasting: 5\nI0526 00:20:28.489888 1558 log.go:172] (0xc00003abb0) Reply frame received for 5\nI0526 00:20:28.562776 1558 log.go:172] (0xc00003abb0) Data frame received for 5\nI0526 00:20:28.562829 1558 log.go:172] (0xc000664f00) (5) Data frame handling\nI0526 00:20:28.562857 1558 log.go:172] (0xc000664f00) (5) Data frame sent\nI0526 00:20:28.562878 1558 log.go:172] (0xc00003abb0) Data frame received for 3\nI0526 00:20:28.562897 1558 log.go:172] (0xc000664640) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0526 00:20:28.562933 1558 log.go:172] (0xc000664640) (3) Data frame sent\nI0526 00:20:28.563012 1558 log.go:172] (0xc00003abb0) Data frame received for 3\nI0526 00:20:28.563049 1558 log.go:172] (0xc000664640) (3) Data frame 
handling\nI0526 00:20:28.563080 1558 log.go:172] (0xc00003abb0) Data frame received for 5\nI0526 00:20:28.563097 1558 log.go:172] (0xc000664f00) (5) Data frame handling\nI0526 00:20:28.564503 1558 log.go:172] (0xc00003abb0) Data frame received for 1\nI0526 00:20:28.564532 1558 log.go:172] (0xc0006b35e0) (1) Data frame handling\nI0526 00:20:28.564548 1558 log.go:172] (0xc0006b35e0) (1) Data frame sent\nI0526 00:20:28.564567 1558 log.go:172] (0xc00003abb0) (0xc0006b35e0) Stream removed, broadcasting: 1\nI0526 00:20:28.564629 1558 log.go:172] (0xc00003abb0) Go away received\nI0526 00:20:28.564909 1558 log.go:172] (0xc00003abb0) (0xc0006b35e0) Stream removed, broadcasting: 1\nI0526 00:20:28.564925 1558 log.go:172] (0xc00003abb0) (0xc000664640) Stream removed, broadcasting: 3\nI0526 00:20:28.564933 1558 log.go:172] (0xc00003abb0) (0xc000664f00) Stream removed, broadcasting: 5\n" May 26 00:20:28.570: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 26 00:20:28.570: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' STEP: Rolling back to a previous revision May 26 00:20:58.593: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7844 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 26 00:20:58.882: INFO: stderr: "I0526 00:20:58.738695 1580 log.go:172] (0xc0006fa8f0) (0xc00061ef00) Create stream\nI0526 00:20:58.738752 1580 log.go:172] (0xc0006fa8f0) (0xc00061ef00) Stream added, broadcasting: 1\nI0526 00:20:58.741532 1580 log.go:172] (0xc0006fa8f0) Reply frame received for 1\nI0526 00:20:58.741592 1580 log.go:172] (0xc0006fa8f0) (0xc00061f220) Create stream\nI0526 00:20:58.741615 1580 log.go:172] (0xc0006fa8f0) (0xc00061f220) Stream added, broadcasting: 3\nI0526 00:20:58.742964 1580 log.go:172] (0xc0006fa8f0) Reply frame received for 3\nI0526 00:20:58.743026 1580 log.go:172] (0xc0006fa8f0) (0xc0004e4c80) Create stream\nI0526 00:20:58.743052 1580 log.go:172] (0xc0006fa8f0) (0xc0004e4c80) Stream added, broadcasting: 5\nI0526 00:20:58.744007 1580 log.go:172] (0xc0006fa8f0) Reply frame received for 5\nI0526 00:20:58.829933 1580 log.go:172] (0xc0006fa8f0) Data frame received for 5\nI0526 00:20:58.829986 1580 log.go:172] (0xc0004e4c80) (5) Data frame handling\nI0526 00:20:58.830018 1580 log.go:172] (0xc0004e4c80) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0526 00:20:58.873994 1580 log.go:172] (0xc0006fa8f0) Data frame received for 3\nI0526 00:20:58.874081 1580 log.go:172] (0xc00061f220) (3) Data frame handling\nI0526 00:20:58.874130 1580 log.go:172] (0xc00061f220) (3) Data frame sent\nI0526 00:20:58.874358 1580 log.go:172] (0xc0006fa8f0) Data frame received for 3\nI0526 00:20:58.874380 1580 log.go:172] (0xc00061f220) (3) Data frame handling\nI0526 00:20:58.874409 1580 log.go:172] (0xc0006fa8f0) Data frame received for 5\nI0526 00:20:58.874420 1580 log.go:172] (0xc0004e4c80) (5) Data frame handling\nI0526 00:20:58.876415 1580 log.go:172] (0xc0006fa8f0) Data frame received for 1\nI0526 00:20:58.876438 1580 log.go:172] (0xc00061ef00) (1) Data frame handling\nI0526 00:20:58.876463 1580 log.go:172] (0xc00061ef00) (1) Data frame sent\nI0526 00:20:58.876482 1580 log.go:172] (0xc0006fa8f0) (0xc00061ef00) Stream removed, broadcasting: 1\nI0526 00:20:58.876524 1580 log.go:172] (0xc0006fa8f0) Go away received\nI0526 00:20:58.876978 1580 
log.go:172] (0xc0006fa8f0) (0xc00061ef00) Stream removed, broadcasting: 1\nI0526 00:20:58.876996 1580 log.go:172] (0xc0006fa8f0) (0xc00061f220) Stream removed, broadcasting: 3\nI0526 00:20:58.877006 1580 log.go:172] (0xc0006fa8f0) (0xc0004e4c80) Stream removed, broadcasting: 5\n" May 26 00:20:58.882: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 26 00:20:58.882: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 26 00:21:08.915: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 26 00:21:18.967: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7844 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 26 00:21:19.190: INFO: stderr: "I0526 00:21:19.107556 1602 log.go:172] (0xc00003a840) (0xc00015dd60) Create stream\nI0526 00:21:19.107630 1602 log.go:172] (0xc00003a840) (0xc00015dd60) Stream added, broadcasting: 1\nI0526 00:21:19.110554 1602 log.go:172] (0xc00003a840) Reply frame received for 1\nI0526 00:21:19.110585 1602 log.go:172] (0xc00003a840) (0xc000379b80) Create stream\nI0526 00:21:19.110593 1602 log.go:172] (0xc00003a840) (0xc000379b80) Stream added, broadcasting: 3\nI0526 00:21:19.111652 1602 log.go:172] (0xc00003a840) Reply frame received for 3\nI0526 00:21:19.111693 1602 log.go:172] (0xc00003a840) (0xc000652000) Create stream\nI0526 00:21:19.111707 1602 log.go:172] (0xc00003a840) (0xc000652000) Stream added, broadcasting: 5\nI0526 00:21:19.112612 1602 log.go:172] (0xc00003a840) Reply frame received for 5\nI0526 00:21:19.182707 1602 log.go:172] (0xc00003a840) Data frame received for 3\nI0526 00:21:19.182753 1602 log.go:172] (0xc000379b80) (3) Data frame handling\nI0526 00:21:19.182776 1602 log.go:172] (0xc000379b80) (3) Data frame sent\nI0526 00:21:19.182799 1602 log.go:172] (0xc00003a840) Data frame received for 3\nI0526 00:21:19.182814 1602 log.go:172] (0xc000379b80) (3) Data frame handling\nI0526 00:21:19.182858 1602 log.go:172] (0xc00003a840) Data frame received for 5\nI0526 00:21:19.182908 1602 log.go:172] (0xc000652000) (5) Data frame handling\nI0526 00:21:19.182938 1602 log.go:172] (0xc000652000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0526 00:21:19.183085 1602 log.go:172] (0xc00003a840) Data frame received for 5\nI0526 00:21:19.183108 1602 log.go:172] (0xc000652000) (5) Data frame handling\nI0526 00:21:19.184592 1602 log.go:172] (0xc00003a840) Data frame received for 1\nI0526 00:21:19.184622 1602 log.go:172] (0xc00015dd60) (1) Data frame handling\nI0526 00:21:19.184646 1602 log.go:172] (0xc00015dd60) (1) Data frame sent\nI0526 00:21:19.184677 1602 log.go:172] (0xc00003a840) (0xc00015dd60) Stream removed, broadcasting: 1\nI0526 00:21:19.184701 1602 log.go:172] (0xc00003a840) Go away received\nI0526 00:21:19.185346 1602 log.go:172] (0xc00003a840) (0xc00015dd60) Stream removed, broadcasting: 1\nI0526 00:21:19.185375 1602 log.go:172] (0xc00003a840) (0xc000379b80) Stream removed, broadcasting: 3\nI0526 00:21:19.185395 1602 log.go:172] (0xc00003a840) (0xc000652000) Stream removed, broadcasting: 5\n" May 26 00:21:19.191: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 26 00:21:19.191: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' 
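[Note] Both the update and the rollback above are just edits to the StatefulSet's pod template; the controller then replaces pods in reverse ordinal order, and each template edit creates or reuses a ControllerRevision. (The mv dance parks index.html in /tmp so the pod's readiness probe fails and the rolling update can be observed mid-flight.) A sketch of the template edit with client-go; namespace and images mirror this run, and the conflict-retry wrapper is standard practice for read-modify-write updates:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func setImage(cs *kubernetes.Clientset, ns, name, image string) error {
	// RetryOnConflict re-reads and re-applies the edit if another writer
	// updated the object between our Get and Update.
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ss, err := cs.AppsV1().StatefulSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		ss.Spec.Template.Spec.Containers[0].Image = image
		_, err = cs.AppsV1().StatefulSets(ns).Update(context.TODO(), ss, metav1.UpdateOptions{})
		return err
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "statefulset-7844"
	// Roll forward, then roll back by restoring the previous image.
	if err := setImage(cs, ns, "ss2", "docker.io/library/httpd:2.4.39-alpine"); err != nil {
		panic(err)
	}
	if err := setImage(cs, ns, "ss2", "docker.io/library/httpd:2.4.38-alpine"); err != nil {
		panic(err)
	}
}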
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 26 00:21:49.213: INFO: Deleting all statefulset in ns statefulset-7844 May 26 00:21:49.216: INFO: Scaling statefulset ss2 to 0 May 26 00:22:09.252: INFO: Waiting for statefulset status.replicas updated to 0 May 26 00:22:09.254: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:22:09.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7844" for this suite. • [SLOW TEST:141.487 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":288,"completed":136,"skipped":2307,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:22:09.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs May 26 00:22:09.368: INFO: Waiting up to 5m0s for pod "pod-8d872b98-2f3e-4e69-a49b-2b4b9479280d" in namespace "emptydir-6506" to be "Succeeded or Failed" May 26 00:22:09.375: INFO: Pod "pod-8d872b98-2f3e-4e69-a49b-2b4b9479280d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.4217ms May 26 00:22:11.483: INFO: Pod "pod-8d872b98-2f3e-4e69-a49b-2b4b9479280d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115096785s May 26 00:22:13.487: INFO: Pod "pod-8d872b98-2f3e-4e69-a49b-2b4b9479280d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.118783203s STEP: Saw pod success May 26 00:22:13.487: INFO: Pod "pod-8d872b98-2f3e-4e69-a49b-2b4b9479280d" satisfied condition "Succeeded or Failed" May 26 00:22:13.488: INFO: Trying to get logs from node latest-worker2 pod pod-8d872b98-2f3e-4e69-a49b-2b4b9479280d container test-container: STEP: delete the pod May 26 00:22:13.533: INFO: Waiting for pod pod-8d872b98-2f3e-4e69-a49b-2b4b9479280d to disappear May 26 00:22:13.537: INFO: Pod pod-8d872b98-2f3e-4e69-a49b-2b4b9479280d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:22:13.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6506" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":137,"skipped":2327,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:22:13.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 26 00:22:20.068: INFO: 10 pods remaining May 26 00:22:20.068: INFO: 10 pods has nil DeletionTimestamp May 26 00:22:20.068: INFO: May 26 00:22:21.711: INFO: 2 pods remaining May 26 00:22:21.711: INFO: 0 pods has nil DeletionTimestamp May 26 00:22:21.711: INFO: May 26 00:22:23.512: INFO: 0 pods remaining May 26 00:22:23.512: INFO: 0 pods has nil DeletionTimestamp May 26 00:22:23.512: INFO: May 26 00:22:24.811: INFO: 0 pods remaining May 26 00:22:24.811: INFO: 0 pods has nil DeletionTimestamp May 26 00:22:24.811: INFO: STEP: Gathering metrics W0526 00:22:26.060799 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 26 00:22:26.060: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:22:26.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6780" for this suite. • [SLOW TEST:12.524 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":288,"completed":138,"skipped":2357,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:22:26.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 26 00:22:26.612: INFO: Waiting up to 5m0s for pod "downwardapi-volume-381d8cd1-e84c-4495-a79a-dccf9cd40080" in namespace "projected-8792" to be "Succeeded or Failed" May 26 00:22:26.640: INFO: Pod "downwardapi-volume-381d8cd1-e84c-4495-a79a-dccf9cd40080": Phase="Pending", Reason="", readiness=false. Elapsed: 27.74305ms May 26 00:22:28.645: INFO: Pod "downwardapi-volume-381d8cd1-e84c-4495-a79a-dccf9cd40080": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032601515s May 26 00:22:30.649: INFO: Pod "downwardapi-volume-381d8cd1-e84c-4495-a79a-dccf9cd40080": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.036882375s STEP: Saw pod success May 26 00:22:30.649: INFO: Pod "downwardapi-volume-381d8cd1-e84c-4495-a79a-dccf9cd40080" satisfied condition "Succeeded or Failed" May 26 00:22:30.652: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-381d8cd1-e84c-4495-a79a-dccf9cd40080 container client-container: STEP: delete the pod May 26 00:22:30.704: INFO: Waiting for pod downwardapi-volume-381d8cd1-e84c-4495-a79a-dccf9cd40080 to disappear May 26 00:22:30.730: INFO: Pod downwardapi-volume-381d8cd1-e84c-4495-a79a-dccf9cd40080 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:22:30.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8792" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":139,"skipped":2380,"failed":0} S ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:22:30.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 26 00:22:30.835: INFO: Waiting up to 5m0s for pod "downward-api-f901a38b-fde8-4fe5-952b-d2dd7d52f850" in namespace "downward-api-8972" to be "Succeeded or Failed" May 26 00:22:30.843: INFO: Pod "downward-api-f901a38b-fde8-4fe5-952b-d2dd7d52f850": Phase="Pending", Reason="", readiness=false. Elapsed: 7.903236ms May 26 00:22:32.847: INFO: Pod "downward-api-f901a38b-fde8-4fe5-952b-d2dd7d52f850": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011946517s May 26 00:22:34.852: INFO: Pod "downward-api-f901a38b-fde8-4fe5-952b-d2dd7d52f850": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016486403s STEP: Saw pod success May 26 00:22:34.852: INFO: Pod "downward-api-f901a38b-fde8-4fe5-952b-d2dd7d52f850" satisfied condition "Succeeded or Failed" May 26 00:22:34.855: INFO: Trying to get logs from node latest-worker2 pod downward-api-f901a38b-fde8-4fe5-952b-d2dd7d52f850 container dapi-container: STEP: delete the pod May 26 00:22:34.920: INFO: Waiting for pod downward-api-f901a38b-fde8-4fe5-952b-d2dd7d52f850 to disappear May 26 00:22:34.929: INFO: Pod downward-api-f901a38b-fde8-4fe5-952b-d2dd7d52f850 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:22:34.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8972" for this suite. 
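[Note] The env-var flavour of the downward API needs no volume at all: each value comes from a fieldRef resolved when the container starts. A minimal pod sketch; names and image are illustrative assumptions:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// fieldEnv builds an env var whose value is projected from a pod field.
func fieldEnv(name, path string) corev1.EnvVar {
	return corev1.EnvVar{
		Name:      name,
		ValueFrom: &corev1.EnvVarSource{FieldRef: &corev1.ObjectFieldSelector{FieldPath: path}},
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.29", // assumed
				Command: []string{"sh", "-c", "env | grep ^POD_"},
				Env: []corev1.EnvVar{
					fieldEnv("POD_NAME", "metadata.name"),
					fieldEnv("POD_NAMESPACE", "metadata.namespace"),
					fieldEnv("POD_IP", "status.podIP"),
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}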
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":288,"completed":140,"skipped":2381,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:22:34.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-7023, will wait for the garbage collector to delete the pods May 26 00:22:41.066: INFO: Deleting Job.batch foo took: 6.630346ms May 26 00:22:41.366: INFO: Terminating Job.batch foo pods took: 300.282251ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:23:25.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7023" for this suite. • [SLOW TEST:50.342 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":288,"completed":141,"skipped":2394,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:23:25.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 26 00:23:25.359: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config version' May 26 00:23:25.504: INFO: stderr: "" May 26 00:23:25.504: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.3.35+3416442e4b7eeb\", GitCommit:\"3416442e4b7eebfce360f5b7468c6818d3e882f8\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:24:24Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-04-28T05:35:31Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", 
Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:23:25.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8646" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":288,"completed":142,"skipped":2411,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:23:25.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:23:57.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1325" for this suite. 
• [SLOW TEST:31.613 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":288,"completed":143,"skipped":2433,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:23:57.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-ac9a2f6e-92b2-424e-a543-fa0677d06961 STEP: Creating configMap with name cm-test-opt-upd-9a9b8de2-038c-4991-b1fd-b6a77371ea18 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-ac9a2f6e-92b2-424e-a543-fa0677d06961 STEP: Updating configmap cm-test-opt-upd-9a9b8de2-038c-4991-b1fd-b6a77371ea18 STEP: Creating configMap with name cm-test-opt-create-82cdeb3a-ca7a-4636-a588-7b63bfd0e6c3 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:24:07.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7137" for this suite. 
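[Note] The 'optional updates' sequence above mounts three configMap volumes marked optional, then deletes one, updates one, and creates the third; the kubelet is expected to converge the mounted files without restarting the pod. The volume shape and the three mutations look roughly like this (names shortened; all identifiers are illustrative assumptions):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, ns := context.TODO(), "default" // assumed namespace
	optional := true
	// An optional configMap volume mounts cleanly even if the configMap is
	// absent or deleted later; files appear and disappear as the source changes.
	vol := corev1.Volume{
		Name: "cm-del",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
				Optional:             &optional,
			},
		},
	}
	_ = vol // mounted by the test pod alongside the cm-upd and cm-create volumes
	// The three mutations performed while the pod keeps running:
	if err := cs.CoreV1().ConfigMaps(ns).Delete(ctx, "cm-test-opt-del", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	upd, err := cs.CoreV1().ConfigMaps(ns).Get(ctx, "cm-test-opt-upd", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	upd.Data = map[string]string{"data-3": "value-3"}
	if _, err := cs.CoreV1().ConfigMaps(ns).Update(ctx, upd, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	if _, err := cs.CoreV1().ConfigMaps(ns).Create(ctx, &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "cm-test-opt-create"},
		Data:       map[string]string{"data-1": "value-1"},
	}, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}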
• [SLOW TEST:10.252 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":144,"skipped":2441,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:24:07.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-fxxp STEP: Creating a pod to test atomic-volume-subpath May 26 00:24:07.551: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-fxxp" in namespace "subpath-5011" to be "Succeeded or Failed" May 26 00:24:07.558: INFO: Pod "pod-subpath-test-configmap-fxxp": Phase="Pending", Reason="", readiness=false. Elapsed: 7.064684ms May 26 00:24:09.563: INFO: Pod "pod-subpath-test-configmap-fxxp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011504415s May 26 00:24:11.567: INFO: Pod "pod-subpath-test-configmap-fxxp": Phase="Running", Reason="", readiness=true. Elapsed: 4.015928369s May 26 00:24:13.572: INFO: Pod "pod-subpath-test-configmap-fxxp": Phase="Running", Reason="", readiness=true. Elapsed: 6.020365392s May 26 00:24:15.576: INFO: Pod "pod-subpath-test-configmap-fxxp": Phase="Running", Reason="", readiness=true. Elapsed: 8.024742319s May 26 00:24:17.580: INFO: Pod "pod-subpath-test-configmap-fxxp": Phase="Running", Reason="", readiness=true. Elapsed: 10.028263608s May 26 00:24:19.584: INFO: Pod "pod-subpath-test-configmap-fxxp": Phase="Running", Reason="", readiness=true. Elapsed: 12.032824422s May 26 00:24:21.589: INFO: Pod "pod-subpath-test-configmap-fxxp": Phase="Running", Reason="", readiness=true. Elapsed: 14.037489779s May 26 00:24:23.593: INFO: Pod "pod-subpath-test-configmap-fxxp": Phase="Running", Reason="", readiness=true. Elapsed: 16.041933998s May 26 00:24:25.598: INFO: Pod "pod-subpath-test-configmap-fxxp": Phase="Running", Reason="", readiness=true. Elapsed: 18.046330057s May 26 00:24:27.601: INFO: Pod "pod-subpath-test-configmap-fxxp": Phase="Running", Reason="", readiness=true. Elapsed: 20.049840313s May 26 00:24:29.605: INFO: Pod "pod-subpath-test-configmap-fxxp": Phase="Running", Reason="", readiness=true. Elapsed: 22.053786876s May 26 00:24:31.610: INFO: Pod "pod-subpath-test-configmap-fxxp": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.058243708s STEP: Saw pod success May 26 00:24:31.610: INFO: Pod "pod-subpath-test-configmap-fxxp" satisfied condition "Succeeded or Failed" May 26 00:24:31.613: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-fxxp container test-container-subpath-configmap-fxxp: STEP: delete the pod May 26 00:24:31.773: INFO: Waiting for pod pod-subpath-test-configmap-fxxp to disappear May 26 00:24:31.853: INFO: Pod pod-subpath-test-configmap-fxxp no longer exists STEP: Deleting pod pod-subpath-test-configmap-fxxp May 26 00:24:31.853: INFO: Deleting pod "pod-subpath-test-configmap-fxxp" in namespace "subpath-5011" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:24:31.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5011" for this suite. • [SLOW TEST:24.524 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":288,"completed":145,"skipped":2457,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:24:31.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-2406 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-2406 I0526 00:24:32.101731 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-2406, replica count: 2 I0526 00:24:35.152617 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0526 00:24:38.152910 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 26 00:24:38.152: INFO: Creating new exec pod May 26 00:24:43.185: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2406 execpodbbwl8 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 26 00:24:43.445: 
INFO: stderr: "I0526 00:24:43.324246 1642 log.go:172] (0xc000aef340) (0xc000ab65a0) Create stream\nI0526 00:24:43.324315 1642 log.go:172] (0xc000aef340) (0xc000ab65a0) Stream added, broadcasting: 1\nI0526 00:24:43.328264 1642 log.go:172] (0xc000aef340) Reply frame received for 1\nI0526 00:24:43.328304 1642 log.go:172] (0xc000aef340) (0xc00053a0a0) Create stream\nI0526 00:24:43.328314 1642 log.go:172] (0xc000aef340) (0xc00053a0a0) Stream added, broadcasting: 3\nI0526 00:24:43.329327 1642 log.go:172] (0xc000aef340) Reply frame received for 3\nI0526 00:24:43.329356 1642 log.go:172] (0xc000aef340) (0xc000508be0) Create stream\nI0526 00:24:43.329365 1642 log.go:172] (0xc000aef340) (0xc000508be0) Stream added, broadcasting: 5\nI0526 00:24:43.330147 1642 log.go:172] (0xc000aef340) Reply frame received for 5\nI0526 00:24:43.435027 1642 log.go:172] (0xc000aef340) Data frame received for 5\nI0526 00:24:43.435050 1642 log.go:172] (0xc000508be0) (5) Data frame handling\nI0526 00:24:43.435062 1642 log.go:172] (0xc000508be0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0526 00:24:43.437540 1642 log.go:172] (0xc000aef340) Data frame received for 5\nI0526 00:24:43.437566 1642 log.go:172] (0xc000508be0) (5) Data frame handling\nI0526 00:24:43.437580 1642 log.go:172] (0xc000508be0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0526 00:24:43.437794 1642 log.go:172] (0xc000aef340) Data frame received for 5\nI0526 00:24:43.437827 1642 log.go:172] (0xc000aef340) Data frame received for 3\nI0526 00:24:43.437863 1642 log.go:172] (0xc00053a0a0) (3) Data frame handling\nI0526 00:24:43.437894 1642 log.go:172] (0xc000508be0) (5) Data frame handling\nI0526 00:24:43.439633 1642 log.go:172] (0xc000aef340) Data frame received for 1\nI0526 00:24:43.439660 1642 log.go:172] (0xc000ab65a0) (1) Data frame handling\nI0526 00:24:43.439679 1642 log.go:172] (0xc000ab65a0) (1) Data frame sent\nI0526 00:24:43.439698 1642 log.go:172] (0xc000aef340) (0xc000ab65a0) Stream removed, broadcasting: 1\nI0526 00:24:43.439737 1642 log.go:172] (0xc000aef340) Go away received\nI0526 00:24:43.440141 1642 log.go:172] (0xc000aef340) (0xc000ab65a0) Stream removed, broadcasting: 1\nI0526 00:24:43.440160 1642 log.go:172] (0xc000aef340) (0xc00053a0a0) Stream removed, broadcasting: 3\nI0526 00:24:43.440169 1642 log.go:172] (0xc000aef340) (0xc000508be0) Stream removed, broadcasting: 5\n" May 26 00:24:43.445: INFO: stdout: "" May 26 00:24:43.446: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2406 execpodbbwl8 -- /bin/sh -x -c nc -zv -t -w 2 10.105.164.36 80' May 26 00:24:43.644: INFO: stderr: "I0526 00:24:43.564941 1662 log.go:172] (0xc000ab2dc0) (0xc0003a77c0) Create stream\nI0526 00:24:43.565002 1662 log.go:172] (0xc000ab2dc0) (0xc0003a77c0) Stream added, broadcasting: 1\nI0526 00:24:43.568219 1662 log.go:172] (0xc000ab2dc0) Reply frame received for 1\nI0526 00:24:43.568270 1662 log.go:172] (0xc000ab2dc0) (0xc00015e140) Create stream\nI0526 00:24:43.568295 1662 log.go:172] (0xc000ab2dc0) (0xc00015e140) Stream added, broadcasting: 3\nI0526 00:24:43.569458 1662 log.go:172] (0xc000ab2dc0) Reply frame received for 3\nI0526 00:24:43.569509 1662 log.go:172] (0xc000ab2dc0) (0xc0003a7e00) Create stream\nI0526 00:24:43.569526 1662 log.go:172] (0xc000ab2dc0) (0xc0003a7e00) Stream added, broadcasting: 5\nI0526 00:24:43.570433 1662 log.go:172] (0xc000ab2dc0) Reply frame received for 5\nI0526 00:24:43.637584 1662 
log.go:172] (0xc000ab2dc0) Data frame received for 3\nI0526 00:24:43.637630 1662 log.go:172] (0xc00015e140) (3) Data frame handling\nI0526 00:24:43.637660 1662 log.go:172] (0xc000ab2dc0) Data frame received for 5\nI0526 00:24:43.637672 1662 log.go:172] (0xc0003a7e00) (5) Data frame handling\nI0526 00:24:43.637685 1662 log.go:172] (0xc0003a7e00) (5) Data frame sent\nI0526 00:24:43.637708 1662 log.go:172] (0xc000ab2dc0) Data frame received for 5\nI0526 00:24:43.637724 1662 log.go:172] (0xc0003a7e00) (5) Data frame handling\n+ nc -zv -t -w 2 10.105.164.36 80\nConnection to 10.105.164.36 80 port [tcp/http] succeeded!\nI0526 00:24:43.638910 1662 log.go:172] (0xc000ab2dc0) Data frame received for 1\nI0526 00:24:43.638925 1662 log.go:172] (0xc0003a77c0) (1) Data frame handling\nI0526 00:24:43.638931 1662 log.go:172] (0xc0003a77c0) (1) Data frame sent\nI0526 00:24:43.638947 1662 log.go:172] (0xc000ab2dc0) (0xc0003a77c0) Stream removed, broadcasting: 1\nI0526 00:24:43.639001 1662 log.go:172] (0xc000ab2dc0) Go away received\nI0526 00:24:43.639229 1662 log.go:172] (0xc000ab2dc0) (0xc0003a77c0) Stream removed, broadcasting: 1\nI0526 00:24:43.639240 1662 log.go:172] (0xc000ab2dc0) (0xc00015e140) Stream removed, broadcasting: 3\nI0526 00:24:43.639245 1662 log.go:172] (0xc000ab2dc0) (0xc0003a7e00) Stream removed, broadcasting: 5\n" May 26 00:24:43.644: INFO: stdout: "" May 26 00:24:43.644: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2406 execpodbbwl8 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 32735' May 26 00:24:43.848: INFO: stderr: "I0526 00:24:43.767318 1682 log.go:172] (0xc00094b1e0) (0xc000aec500) Create stream\nI0526 00:24:43.767364 1682 log.go:172] (0xc00094b1e0) (0xc000aec500) Stream added, broadcasting: 1\nI0526 00:24:43.773335 1682 log.go:172] (0xc00094b1e0) Reply frame received for 1\nI0526 00:24:43.773374 1682 log.go:172] (0xc00094b1e0) (0xc000694780) Create stream\nI0526 00:24:43.773389 1682 log.go:172] (0xc00094b1e0) (0xc000694780) Stream added, broadcasting: 3\nI0526 00:24:43.774242 1682 log.go:172] (0xc00094b1e0) Reply frame received for 3\nI0526 00:24:43.774281 1682 log.go:172] (0xc00094b1e0) (0xc00055c780) Create stream\nI0526 00:24:43.774292 1682 log.go:172] (0xc00094b1e0) (0xc00055c780) Stream added, broadcasting: 5\nI0526 00:24:43.775188 1682 log.go:172] (0xc00094b1e0) Reply frame received for 5\nI0526 00:24:43.842711 1682 log.go:172] (0xc00094b1e0) Data frame received for 3\nI0526 00:24:43.842742 1682 log.go:172] (0xc000694780) (3) Data frame handling\nI0526 00:24:43.842760 1682 log.go:172] (0xc00094b1e0) Data frame received for 5\nI0526 00:24:43.842766 1682 log.go:172] (0xc00055c780) (5) Data frame handling\nI0526 00:24:43.842772 1682 log.go:172] (0xc00055c780) (5) Data frame sent\nI0526 00:24:43.842777 1682 log.go:172] (0xc00094b1e0) Data frame received for 5\nI0526 00:24:43.842783 1682 log.go:172] (0xc00055c780) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 32735\nConnection to 172.17.0.13 32735 port [tcp/32735] succeeded!\nI0526 00:24:43.843693 1682 log.go:172] (0xc00094b1e0) Data frame received for 1\nI0526 00:24:43.843739 1682 log.go:172] (0xc000aec500) (1) Data frame handling\nI0526 00:24:43.843774 1682 log.go:172] (0xc000aec500) (1) Data frame sent\nI0526 00:24:43.843805 1682 log.go:172] (0xc00094b1e0) (0xc000aec500) Stream removed, broadcasting: 1\nI0526 00:24:43.843834 1682 log.go:172] (0xc00094b1e0) Go away received\nI0526 00:24:43.844140 1682 log.go:172] (0xc00094b1e0) 
(0xc000aec500) Stream removed, broadcasting: 1\nI0526 00:24:43.844153 1682 log.go:172] (0xc00094b1e0) (0xc000694780) Stream removed, broadcasting: 3\nI0526 00:24:43.844160 1682 log.go:172] (0xc00094b1e0) (0xc00055c780) Stream removed, broadcasting: 5\n" May 26 00:24:43.849: INFO: stdout: "" May 26 00:24:43.849: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2406 execpodbbwl8 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 32735' May 26 00:24:44.081: INFO: stderr: "I0526 00:24:43.998203 1703 log.go:172] (0xc000b9d4a0) (0xc000b46500) Create stream\nI0526 00:24:43.998265 1703 log.go:172] (0xc000b9d4a0) (0xc000b46500) Stream added, broadcasting: 1\nI0526 00:24:44.003735 1703 log.go:172] (0xc000b9d4a0) Reply frame received for 1\nI0526 00:24:44.003794 1703 log.go:172] (0xc000b9d4a0) (0xc00084cbe0) Create stream\nI0526 00:24:44.003812 1703 log.go:172] (0xc000b9d4a0) (0xc00084cbe0) Stream added, broadcasting: 3\nI0526 00:24:44.004812 1703 log.go:172] (0xc000b9d4a0) Reply frame received for 3\nI0526 00:24:44.004846 1703 log.go:172] (0xc000b9d4a0) (0xc00084db80) Create stream\nI0526 00:24:44.004854 1703 log.go:172] (0xc000b9d4a0) (0xc00084db80) Stream added, broadcasting: 5\nI0526 00:24:44.006022 1703 log.go:172] (0xc000b9d4a0) Reply frame received for 5\nI0526 00:24:44.074520 1703 log.go:172] (0xc000b9d4a0) Data frame received for 5\nI0526 00:24:44.074581 1703 log.go:172] (0xc00084db80) (5) Data frame handling\nI0526 00:24:44.074608 1703 log.go:172] (0xc00084db80) (5) Data frame sent\nI0526 00:24:44.074627 1703 log.go:172] (0xc000b9d4a0) Data frame received for 5\nI0526 00:24:44.074646 1703 log.go:172] (0xc00084db80) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 32735\nConnection to 172.17.0.12 32735 port [tcp/32735] succeeded!\nI0526 00:24:44.074718 1703 log.go:172] (0xc000b9d4a0) Data frame received for 3\nI0526 00:24:44.074776 1703 log.go:172] (0xc00084cbe0) (3) Data frame handling\nI0526 00:24:44.075934 1703 log.go:172] (0xc000b9d4a0) Data frame received for 1\nI0526 00:24:44.075946 1703 log.go:172] (0xc000b46500) (1) Data frame handling\nI0526 00:24:44.075953 1703 log.go:172] (0xc000b46500) (1) Data frame sent\nI0526 00:24:44.075961 1703 log.go:172] (0xc000b9d4a0) (0xc000b46500) Stream removed, broadcasting: 1\nI0526 00:24:44.076205 1703 log.go:172] (0xc000b9d4a0) (0xc000b46500) Stream removed, broadcasting: 1\nI0526 00:24:44.076218 1703 log.go:172] (0xc000b9d4a0) (0xc00084cbe0) Stream removed, broadcasting: 3\nI0526 00:24:44.076290 1703 log.go:172] (0xc000b9d4a0) Go away received\nI0526 00:24:44.076355 1703 log.go:172] (0xc000b9d4a0) (0xc00084db80) Stream removed, broadcasting: 5\n" May 26 00:24:44.082: INFO: stdout: "" May 26 00:24:44.082: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:24:44.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2406" for this suite. 
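[Note] The type flip above is a single Spec edit: ExternalName services have no selector or ports, so converting to NodePort also means clearing externalName and adding both. kube-proxy then programs the allocated node port, which the nc probes exercise against the cluster IP and both node IPs. A sketch; the selector and port values are illustrative assumptions:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "services-2406"
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		svc, err := cs.CoreV1().Services(ns).Get(context.TODO(), "externalname-service", metav1.GetOptions{})
		if err != nil {
			return err
		}
		svc.Spec.Type = corev1.ServiceTypeNodePort
		svc.Spec.ExternalName = ""                                            // no longer a CNAME-style service
		svc.Spec.Selector = map[string]string{"name": "externalname-service"} // assumed label
		svc.Spec.Ports = []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(80)}}
		_, err = cs.CoreV1().Services(ns).Update(context.TODO(), svc, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
}

After the update the API server populates svc.Spec.Ports[0].NodePort (32735 in this run), which is what the per-node nc checks connect to.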
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:12.226 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":288,"completed":146,"skipped":2488,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:24:44.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 26 00:24:44.928: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 26 00:24:46.945: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726049484, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726049484, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726049485, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726049484, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 00:24:49.982: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:24:50.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1685" for this suite. STEP: Destroying namespace "webhook-1685-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.849 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":288,"completed":147,"skipped":2536,"failed":0} SSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:24:50.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-139b93f4-1971-4777-82a6-d8dca705fd49 STEP: Creating secret with name s-test-opt-upd-15e14035-33d8-4e69-81cd-1f0662b66b43 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-139b93f4-1971-4777-82a6-d8dca705fd49 STEP: Updating secret s-test-opt-upd-15e14035-33d8-4e69-81cd-1f0662b66b43 STEP: Creating secret with name s-test-opt-create-9e9461cb-3c55-46e6-a045-79fd0177d1b0 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:24:59.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7498" for this suite. 
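The steps above exercise the kubelet's live refresh of secret volumes: one mounted secret is deleted, one is updated in place, and one marked optional is created only after the pod has started; the pod's files must track all three changes. A minimal sketch of the mechanism under test, with all names and values illustrative:

# Create a secret and a pod that mounts it as an *optional* volume.
kubectl create secret generic s-opt --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-watcher
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/secret/data-1 2>/dev/null; echo; sleep 5; done"]
    volumeMounts:
    - name: sec
      mountPath: /etc/secret
  volumes:
  - name: sec
    secret:
      secretName: s-opt
      optional: true
EOF
# Change the secret; the kubelet rewrites the mounted file on its next sync.
kubectl create secret generic s-opt --from-literal=data-1=value-2 \
  --dry-run=client -o yaml | kubectl apply -f -
kubectl logs -f secret-watcher   # output flips from value-1 to value-2

Because the volume is optional, deleting s-opt would not fail the pod; the file simply disappears, which is what the s-test-opt-del secret above verifies.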
• [SLOW TEST:8.474 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":148,"skipped":2542,"failed":0} SSSS ------------------------------ [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:24:59.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod var-expansion-7bf2687a-73a4-4ae6-a9dd-3f08252a0dc6 STEP: updating the pod May 26 00:25:08.109: INFO: Successfully updated pod "var-expansion-7bf2687a-73a4-4ae6-a9dd-3f08252a0dc6" STEP: waiting for pod and container restart STEP: Failing liveness probe May 26 00:25:08.154: INFO: ExecWithOptions {Command:[/bin/sh -c rm /volume_mount/foo/test.log] Namespace:var-expansion-3942 PodName:var-expansion-7bf2687a-73a4-4ae6-a9dd-3f08252a0dc6 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 00:25:08.154: INFO: >>> kubeConfig: /root/.kube/config I0526 00:25:08.179867 7 log.go:172] (0xc002a71290) (0xc0020246e0) Create stream I0526 00:25:08.179897 7 log.go:172] (0xc002a71290) (0xc0020246e0) Stream added, broadcasting: 1 I0526 00:25:08.181670 7 log.go:172] (0xc002a71290) Reply frame received for 1 I0526 00:25:08.181716 7 log.go:172] (0xc002a71290) (0xc0025a5680) Create stream I0526 00:25:08.181732 7 log.go:172] (0xc002a71290) (0xc0025a5680) Stream added, broadcasting: 3 I0526 00:25:08.182586 7 log.go:172] (0xc002a71290) Reply frame received for 3 I0526 00:25:08.182635 7 log.go:172] (0xc002a71290) (0xc0025a5720) Create stream I0526 00:25:08.182649 7 log.go:172] (0xc002a71290) (0xc0025a5720) Stream added, broadcasting: 5 I0526 00:25:08.183496 7 log.go:172] (0xc002a71290) Reply frame received for 5 I0526 00:25:08.258885 7 log.go:172] (0xc002a71290) Data frame received for 5 I0526 00:25:08.258939 7 log.go:172] (0xc0025a5720) (5) Data frame handling I0526 00:25:08.258979 7 log.go:172] (0xc002a71290) Data frame received for 3 I0526 00:25:08.259012 7 log.go:172] (0xc0025a5680) (3) Data frame handling I0526 00:25:08.260436 7 log.go:172] (0xc002a71290) Data frame received for 1 I0526 00:25:08.260454 7 log.go:172] (0xc0020246e0) (1) Data frame handling I0526 00:25:08.260468 7 log.go:172] (0xc0020246e0) (1) Data frame sent I0526 00:25:08.260482 7 log.go:172] (0xc002a71290) (0xc0020246e0) Stream removed, broadcasting: 1 I0526 00:25:08.260497 7 log.go:172] (0xc002a71290) Go away received I0526 00:25:08.260589 7 log.go:172] (0xc002a71290) 
(0xc0020246e0) Stream removed, broadcasting: 1 I0526 00:25:08.260613 7 log.go:172] (0xc002a71290) (0xc0025a5680) Stream removed, broadcasting: 3 I0526 00:25:08.260630 7 log.go:172] (0xc002a71290) (0xc0025a5720) Stream removed, broadcasting: 5 May 26 00:25:08.260: INFO: Pod exec output: / STEP: Waiting for container to restart May 26 00:25:08.287: INFO: Container dapi-container, restarts: 0 May 26 00:25:18.290: INFO: Container dapi-container, restarts: 0 May 26 00:25:28.292: INFO: Container dapi-container, restarts: 0 May 26 00:25:38.292: INFO: Container dapi-container, restarts: 0 May 26 00:25:48.292: INFO: Container dapi-container, restarts: 1 May 26 00:25:48.292: INFO: Container has restart count: 1 STEP: Rewriting the file May 26 00:25:48.292: INFO: ExecWithOptions {Command:[/bin/sh -c echo test-after > /volume_mount/foo/test.log] Namespace:var-expansion-3942 PodName:var-expansion-7bf2687a-73a4-4ae6-a9dd-3f08252a0dc6 ContainerName:side-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 00:25:48.292: INFO: >>> kubeConfig: /root/.kube/config I0526 00:25:48.329863 7 log.go:172] (0xc002c2c000) (0xc001baeaa0) Create stream I0526 00:25:48.329900 7 log.go:172] (0xc002c2c000) (0xc001baeaa0) Stream added, broadcasting: 1 I0526 00:25:48.331905 7 log.go:172] (0xc002c2c000) Reply frame received for 1 I0526 00:25:48.331960 7 log.go:172] (0xc002c2c000) (0xc001baebe0) Create stream I0526 00:25:48.331977 7 log.go:172] (0xc002c2c000) (0xc001baebe0) Stream added, broadcasting: 3 I0526 00:25:48.333647 7 log.go:172] (0xc002c2c000) Reply frame received for 3 I0526 00:25:48.333721 7 log.go:172] (0xc002c2c000) (0xc00176a000) Create stream I0526 00:25:48.333753 7 log.go:172] (0xc002c2c000) (0xc00176a000) Stream added, broadcasting: 5 I0526 00:25:48.334917 7 log.go:172] (0xc002c2c000) Reply frame received for 5 I0526 00:25:48.394922 7 log.go:172] (0xc002c2c000) Data frame received for 3 I0526 00:25:48.394955 7 log.go:172] (0xc001baebe0) (3) Data frame handling I0526 00:25:48.395056 7 log.go:172] (0xc002c2c000) Data frame received for 5 I0526 00:25:48.395073 7 log.go:172] (0xc00176a000) (5) Data frame handling I0526 00:25:48.396610 7 log.go:172] (0xc002c2c000) Data frame received for 1 I0526 00:25:48.396634 7 log.go:172] (0xc001baeaa0) (1) Data frame handling I0526 00:25:48.396651 7 log.go:172] (0xc001baeaa0) (1) Data frame sent I0526 00:25:48.396674 7 log.go:172] (0xc002c2c000) (0xc001baeaa0) Stream removed, broadcasting: 1 I0526 00:25:48.396802 7 log.go:172] (0xc002c2c000) (0xc001baeaa0) Stream removed, broadcasting: 1 I0526 00:25:48.396824 7 log.go:172] (0xc002c2c000) (0xc001baebe0) Stream removed, broadcasting: 3 I0526 00:25:48.396943 7 log.go:172] (0xc002c2c000) Go away received I0526 00:25:48.397000 7 log.go:172] (0xc002c2c000) (0xc00176a000) Stream removed, broadcasting: 5 May 26 00:25:48.397: INFO: Exec stderr: "" May 26 00:25:48.397: INFO: Pod exec output: STEP: Waiting for container to stop restarting May 26 00:26:16.418: INFO: Container has restart count: 2 May 26 00:27:18.425: INFO: Container restart has stabilized STEP: test for subpath mounted with old value May 26 00:27:18.428: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /volume_mount/foo/test.log] Namespace:var-expansion-3942 PodName:var-expansion-7bf2687a-73a4-4ae6-a9dd-3f08252a0dc6 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 00:27:18.428: INFO: >>> kubeConfig: /root/.kube/config I0526 00:27:18.464446 7 log.go:172] (0xc002a71130) 
(0xc002587040) Create stream I0526 00:27:18.464482 7 log.go:172] (0xc002a71130) (0xc002587040) Stream added, broadcasting: 1 I0526 00:27:18.466387 7 log.go:172] (0xc002a71130) Reply frame received for 1 I0526 00:27:18.466443 7 log.go:172] (0xc002a71130) (0xc0018099a0) Create stream I0526 00:27:18.466458 7 log.go:172] (0xc002a71130) (0xc0018099a0) Stream added, broadcasting: 3 I0526 00:27:18.467252 7 log.go:172] (0xc002a71130) Reply frame received for 3 I0526 00:27:18.467289 7 log.go:172] (0xc002a71130) (0xc001809a40) Create stream I0526 00:27:18.467305 7 log.go:172] (0xc002a71130) (0xc001809a40) Stream added, broadcasting: 5 I0526 00:27:18.468150 7 log.go:172] (0xc002a71130) Reply frame received for 5 I0526 00:27:18.519091 7 log.go:172] (0xc002a71130) Data frame received for 5 I0526 00:27:18.519138 7 log.go:172] (0xc002a71130) Data frame received for 3 I0526 00:27:18.519183 7 log.go:172] (0xc0018099a0) (3) Data frame handling I0526 00:27:18.519214 7 log.go:172] (0xc001809a40) (5) Data frame handling I0526 00:27:18.520580 7 log.go:172] (0xc002a71130) Data frame received for 1 I0526 00:27:18.520613 7 log.go:172] (0xc002587040) (1) Data frame handling I0526 00:27:18.520639 7 log.go:172] (0xc002587040) (1) Data frame sent I0526 00:27:18.520675 7 log.go:172] (0xc002a71130) (0xc002587040) Stream removed, broadcasting: 1 I0526 00:27:18.520813 7 log.go:172] (0xc002a71130) (0xc002587040) Stream removed, broadcasting: 1 I0526 00:27:18.520842 7 log.go:172] (0xc002a71130) (0xc0018099a0) Stream removed, broadcasting: 3 I0526 00:27:18.520854 7 log.go:172] (0xc002a71130) (0xc001809a40) Stream removed, broadcasting: 5 I0526 00:27:18.520992 7 log.go:172] (0xc002a71130) Go away received May 26 00:27:18.524: INFO: ExecWithOptions {Command:[/bin/sh -c test ! -f /volume_mount/newsubpath/test.log] Namespace:var-expansion-3942 PodName:var-expansion-7bf2687a-73a4-4ae6-a9dd-3f08252a0dc6 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 00:27:18.525: INFO: >>> kubeConfig: /root/.kube/config I0526 00:27:18.559792 7 log.go:172] (0xc002fe2370) (0xc002994820) Create stream I0526 00:27:18.559819 7 log.go:172] (0xc002fe2370) (0xc002994820) Stream added, broadcasting: 1 I0526 00:27:18.561798 7 log.go:172] (0xc002fe2370) Reply frame received for 1 I0526 00:27:18.561843 7 log.go:172] (0xc002fe2370) (0xc002a025a0) Create stream I0526 00:27:18.561864 7 log.go:172] (0xc002fe2370) (0xc002a025a0) Stream added, broadcasting: 3 I0526 00:27:18.563001 7 log.go:172] (0xc002fe2370) Reply frame received for 3 I0526 00:27:18.563084 7 log.go:172] (0xc002fe2370) (0xc0029948c0) Create stream I0526 00:27:18.563099 7 log.go:172] (0xc002fe2370) (0xc0029948c0) Stream added, broadcasting: 5 I0526 00:27:18.563904 7 log.go:172] (0xc002fe2370) Reply frame received for 5 I0526 00:27:18.626346 7 log.go:172] (0xc002fe2370) Data frame received for 5 I0526 00:27:18.626383 7 log.go:172] (0xc0029948c0) (5) Data frame handling I0526 00:27:18.626408 7 log.go:172] (0xc002fe2370) Data frame received for 3 I0526 00:27:18.626434 7 log.go:172] (0xc002a025a0) (3) Data frame handling I0526 00:27:18.628084 7 log.go:172] (0xc002fe2370) Data frame received for 1 I0526 00:27:18.628116 7 log.go:172] (0xc002994820) (1) Data frame handling I0526 00:27:18.628138 7 log.go:172] (0xc002994820) (1) Data frame sent I0526 00:27:18.628167 7 log.go:172] (0xc002fe2370) (0xc002994820) Stream removed, broadcasting: 1 I0526 00:27:18.628206 7 log.go:172] (0xc002fe2370) Go away received I0526 00:27:18.628355 7 log.go:172] 
(0xc002fe2370) (0xc002994820) Stream removed, broadcasting: 1 I0526 00:27:18.628440 7 log.go:172] (0xc002fe2370) (0xc002a025a0) Stream removed, broadcasting: 3 I0526 00:27:18.628472 7 log.go:172] (0xc002fe2370) (0xc0029948c0) Stream removed, broadcasting: 5 May 26 00:27:18.628: INFO: Deleting pod "var-expansion-7bf2687a-73a4-4ae6-a9dd-3f08252a0dc6" in namespace "var-expansion-3942" May 26 00:27:18.634: INFO: Wait up to 5m0s for pod "var-expansion-7bf2687a-73a4-4ae6-a9dd-3f08252a0dc6" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:27:56.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3942" for this suite. • [SLOW TEST:177.210 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]","total":288,"completed":149,"skipped":2546,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:27:56.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:27:56.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-7534" for this suite. 
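The table-transformation checks hinge on content negotiation: a client may ask the API server to render any list as a server-side Table (the format behind kubectl's column output), and a backend that cannot produce Table metadata must answer 406 Not Acceptable. The request is easy to reproduce by hand; the sketch below assumes kubectl proxy access and uses a built-in resource for illustration:

kubectl proxy --port=8001 &
# Request the Table representation via the Accept header.
curl -s 'http://127.0.0.1:8001/api/v1/namespaces/default/pods' \
  -H 'Accept: application/json;as=Table;v=v1;g=meta.k8s.io'
# Built-in resources answer with {"kind":"Table",...}; the test aims the same
# request at an aggregated backend without Table support and asserts HTTP 406.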
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":288,"completed":150,"skipped":2558,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:27:56.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 26 00:27:56.824: INFO: >>> kubeConfig: /root/.kube/config May 26 00:27:58.781: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:28:10.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8323" for this suite. • [SLOW TEST:13.754 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":288,"completed":151,"skipped":2582,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:28:10.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 26 00:28:18.665: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 00:28:18.671: INFO: Pod pod-with-prestop-http-hook still exists May 26 00:28:20.671: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 00:28:20.676: INFO: Pod pod-with-prestop-http-hook still exists May 26 00:28:22.671: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 00:28:22.675: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:28:22.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2930" for this suite. • [SLOW TEST:12.215 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":288,"completed":152,"skipped":2626,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:28:22.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1523 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 26 00:28:22.772: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-8753' May 26 00:28:22.874: INFO: stderr: "" May 26 00:28:22.874: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1528 May 26 00:28:22.882: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods 
e2e-test-httpd-pod --namespace=kubectl-8753' May 26 00:28:35.247: INFO: stderr: "" May 26 00:28:35.247: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:28:35.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8753" for this suite. • [SLOW TEST:12.547 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1519 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":288,"completed":153,"skipped":2642,"failed":0} SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:28:35.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 26 00:28:35.321: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 26 00:28:35.331: INFO: Waiting for terminating namespaces to be deleted... 
May 26 00:28:35.334: INFO: Logging pods the apiserver thinks are on node latest-worker before test May 26 00:28:35.339: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container status recorded) May 26 00:28:35.339: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 26 00:28:35.339: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container status recorded) May 26 00:28:35.339: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 26 00:28:35.339: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 26 00:28:35.339: INFO: Container kindnet-cni ready: true, restart count 0 May 26 00:28:35.339: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 26 00:28:35.339: INFO: Container kube-proxy ready: true, restart count 0 May 26 00:28:35.339: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test May 26 00:28:35.344: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container status recorded) May 26 00:28:35.344: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 26 00:28:35.344: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container status recorded) May 26 00:28:35.344: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 26 00:28:35.344: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 26 00:28:35.344: INFO: Container kindnet-cni ready: true, restart count 0 May 26 00:28:35.344: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 26 00:28:35.344: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 May 26 00:28:35.403: INFO: Pod rally-c184502e-30nwopzm requesting resource cpu=0m on Node latest-worker May 26 00:28:35.403: INFO: Pod terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 requesting resource cpu=0m on Node latest-worker2 May 26 00:28:35.403: INFO: Pod kindnet-hg2tf requesting resource cpu=100m on Node latest-worker May 26 00:28:35.403: INFO: Pod kindnet-jl4dn requesting resource cpu=100m on Node latest-worker2 May 26 00:28:35.403: INFO: Pod kube-proxy-c8n27 requesting resource cpu=0m on Node latest-worker May 26 00:28:35.403: INFO: Pod kube-proxy-pcmmp requesting resource cpu=0m on Node latest-worker2 STEP: Starting Pods to consume most of the cluster CPU. May 26 00:28:35.403: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker May 26 00:28:35.410: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires an unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-1c24858e-29c5-4c1e-bdf5-c84685786b8d.16126c80011bfb01], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1688/filler-pod-1c24858e-29c5-4c1e-bdf5-c84685786b8d to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-1c24858e-29c5-4c1e-bdf5-c84685786b8d.16126c804d953ba9], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-1c24858e-29c5-4c1e-bdf5-c84685786b8d.16126c80b6ff8855], Reason = [Created], Message = [Created container filler-pod-1c24858e-29c5-4c1e-bdf5-c84685786b8d] STEP: Considering event: Type = [Normal], Name = [filler-pod-1c24858e-29c5-4c1e-bdf5-c84685786b8d.16126c80d1e35829], Reason = [Started], Message = [Started container filler-pod-1c24858e-29c5-4c1e-bdf5-c84685786b8d] STEP: Considering event: Type = [Normal], Name = [filler-pod-f802f1f9-72d7-4647-8b4c-a5039b6687f4.16126c800115d349], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1688/filler-pod-f802f1f9-72d7-4647-8b4c-a5039b6687f4 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-f802f1f9-72d7-4647-8b4c-a5039b6687f4.16126c808d8759ad], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-f802f1f9-72d7-4647-8b4c-a5039b6687f4.16126c80da127566], Reason = [Created], Message = [Created container filler-pod-f802f1f9-72d7-4647-8b4c-a5039b6687f4] STEP: Considering event: Type = [Normal], Name = [filler-pod-f802f1f9-72d7-4647-8b4c-a5039b6687f4.16126c80e97f49da], Reason = [Started], Message = [Started container filler-pod-f802f1f9-72d7-4647-8b4c-a5039b6687f4] STEP: Considering event: Type = [Warning], Name = [additional-pod.16126c816a567e77], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.16126c816c7b28a5], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:28:42.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1688" for this suite. 
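The predicate validated here is simple to state: once the filler pods have claimed nearly all allocatable CPU, any pod whose request cannot fit on some node stays Pending with a FailedScheduling event, exactly as in the Warning events logged above. A stand-alone reproduction that skips the filler step by requesting an absurd amount outright (the name and request value are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: too-big
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
    resources:
      requests:
        cpu: "1000"    # far beyond any node's allocatable CPU
EOF
kubectl get events --field-selector involvedObject.name=too-big
# Expect: FailedScheduling ... 0/N nodes are available: N Insufficient cpu.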
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:7.334 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":288,"completed":154,"skipped":2651,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:28:42.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:28:53.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9566" for this suite. • [SLOW TEST:11.125 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":288,"completed":155,"skipped":2672,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:28:53.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 26 00:28:53.789: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:28:54.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7917" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":288,"completed":156,"skipped":2699,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:28:54.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-384f26e7-b025-4a5e-9717-d3be578037ac STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:29:00.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2165" for this suite. 
• [SLOW TEST:6.234 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":157,"skipped":2707,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:29:00.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD May 26 00:29:00.722: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:29:16.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2989" for this suite. 
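The renamed-version check that follows works because every served version of a CRD is merged into the cluster's single published OpenAPI document; renaming a version swaps its definition in that document. Both sides of the check can be run by hand (the group and kind names below are illustrative):

# Dump definition names for a group from the published OpenAPI v2 document...
kubectl get --raw /openapi/v2 | grep -o '"com\.example\.[^"]*"' | sort -u
# ...or read the same published schema the way kubectl itself does:
kubectl explain e2e-test-foos --recursive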
• [SLOW TEST:16.157 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":288,"completed":158,"skipped":2727,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:29:16.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0526 00:29:17.989527 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 26 00:29:17.989: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:29:17.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4456" for this suite. •{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":288,"completed":159,"skipped":2740,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:29:18.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:29:18.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4553" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":288,"completed":160,"skipped":2759,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:29:18.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 26 00:29:18.687: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e58e1250-e999-4f85-8a02-f33c645991b3" in namespace "downward-api-8262" to be "Succeeded or Failed" May 26 00:29:18.870: INFO: Pod "downwardapi-volume-e58e1250-e999-4f85-8a02-f33c645991b3": Phase="Pending", Reason="", readiness=false. Elapsed: 182.873094ms May 26 00:29:20.876: INFO: Pod "downwardapi-volume-e58e1250-e999-4f85-8a02-f33c645991b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.188804805s May 26 00:29:22.880: INFO: Pod "downwardapi-volume-e58e1250-e999-4f85-8a02-f33c645991b3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.193492072s May 26 00:29:24.884: INFO: Pod "downwardapi-volume-e58e1250-e999-4f85-8a02-f33c645991b3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.197536551s STEP: Saw pod success May 26 00:29:24.884: INFO: Pod "downwardapi-volume-e58e1250-e999-4f85-8a02-f33c645991b3" satisfied condition "Succeeded or Failed" May 26 00:29:24.887: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-e58e1250-e999-4f85-8a02-f33c645991b3 container client-container: STEP: delete the pod May 26 00:29:24.958: INFO: Waiting for pod downwardapi-volume-e58e1250-e999-4f85-8a02-f33c645991b3 to disappear May 26 00:29:24.968: INFO: Pod downwardapi-volume-e58e1250-e999-4f85-8a02-f33c645991b3 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:29:24.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8262" for this suite. • [SLOW TEST:6.575 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":161,"skipped":2761,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:29:24.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-5765 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-5765 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5765 May 26 00:29:25.124: INFO: Found 0 stateful pods, waiting for 1 May 26 00:29:35.128: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 26 00:29:35.132: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5765 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 26 00:29:38.079: INFO: stderr: "I0526 00:29:37.920224 1766 log.go:172] (0xc000d7e000) (0xc0006fcbe0) Create stream\nI0526 00:29:37.920283 1766 log.go:172] 
(0xc000d7e000) (0xc0006fcbe0) Stream added, broadcasting: 1\nI0526 00:29:37.924713 1766 log.go:172] (0xc000d7e000) Reply frame received for 1\nI0526 00:29:37.924759 1766 log.go:172] (0xc000d7e000) (0xc0006fdb80) Create stream\nI0526 00:29:37.924770 1766 log.go:172] (0xc000d7e000) (0xc0006fdb80) Stream added, broadcasting: 3\nI0526 00:29:37.930196 1766 log.go:172] (0xc000d7e000) Reply frame received for 3\nI0526 00:29:37.930247 1766 log.go:172] (0xc000d7e000) (0xc0006b2460) Create stream\nI0526 00:29:37.930257 1766 log.go:172] (0xc000d7e000) (0xc0006b2460) Stream added, broadcasting: 5\nI0526 00:29:37.931109 1766 log.go:172] (0xc000d7e000) Reply frame received for 5\nI0526 00:29:38.013424 1766 log.go:172] (0xc000d7e000) Data frame received for 5\nI0526 00:29:38.013451 1766 log.go:172] (0xc0006b2460) (5) Data frame handling\nI0526 00:29:38.013466 1766 log.go:172] (0xc0006b2460) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0526 00:29:38.069670 1766 log.go:172] (0xc000d7e000) Data frame received for 3\nI0526 00:29:38.069711 1766 log.go:172] (0xc0006fdb80) (3) Data frame handling\nI0526 00:29:38.069752 1766 log.go:172] (0xc0006fdb80) (3) Data frame sent\nI0526 00:29:38.069963 1766 log.go:172] (0xc000d7e000) Data frame received for 3\nI0526 00:29:38.070002 1766 log.go:172] (0xc0006fdb80) (3) Data frame handling\nI0526 00:29:38.070164 1766 log.go:172] (0xc000d7e000) Data frame received for 5\nI0526 00:29:38.070203 1766 log.go:172] (0xc0006b2460) (5) Data frame handling\nI0526 00:29:38.072401 1766 log.go:172] (0xc000d7e000) Data frame received for 1\nI0526 00:29:38.072431 1766 log.go:172] (0xc0006fcbe0) (1) Data frame handling\nI0526 00:29:38.072632 1766 log.go:172] (0xc0006fcbe0) (1) Data frame sent\nI0526 00:29:38.072670 1766 log.go:172] (0xc000d7e000) (0xc0006fcbe0) Stream removed, broadcasting: 1\nI0526 00:29:38.072705 1766 log.go:172] (0xc000d7e000) Go away received\nI0526 00:29:38.073415 1766 log.go:172] (0xc000d7e000) (0xc0006fcbe0) Stream removed, broadcasting: 1\nI0526 00:29:38.073448 1766 log.go:172] (0xc000d7e000) (0xc0006fdb80) Stream removed, broadcasting: 3\nI0526 00:29:38.073469 1766 log.go:172] (0xc000d7e000) (0xc0006b2460) Stream removed, broadcasting: 5\n" May 26 00:29:38.079: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 26 00:29:38.079: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 26 00:29:38.084: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 26 00:29:48.089: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 26 00:29:48.089: INFO: Waiting for statefulset status.replicas updated to 0 May 26 00:29:48.121: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999645s May 26 00:29:49.126: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.976901169s May 26 00:29:50.131: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.971799607s May 26 00:29:51.136: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.967157285s May 26 00:29:52.140: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.962487823s May 26 00:29:53.145: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.957520961s May 26 00:29:54.149: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.953228978s May 26 00:29:55.153: INFO: Verifying statefulset ss doesn't scale 
past 1 for another 2.949130922s May 26 00:29:56.161: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.94477126s May 26 00:29:57.165: INFO: Verifying statefulset ss doesn't scale past 1 for another 937.156543ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5765 May 26 00:29:58.172: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5765 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 26 00:29:58.416: INFO: stderr: "I0526 00:29:58.329568 1799 log.go:172] (0xc00003a4d0) (0xc00042cdc0) Create stream\nI0526 00:29:58.329633 1799 log.go:172] (0xc00003a4d0) (0xc00042cdc0) Stream added, broadcasting: 1\nI0526 00:29:58.331470 1799 log.go:172] (0xc00003a4d0) Reply frame received for 1\nI0526 00:29:58.331561 1799 log.go:172] (0xc00003a4d0) (0xc0001517c0) Create stream\nI0526 00:29:58.331593 1799 log.go:172] (0xc00003a4d0) (0xc0001517c0) Stream added, broadcasting: 3\nI0526 00:29:58.332673 1799 log.go:172] (0xc00003a4d0) Reply frame received for 3\nI0526 00:29:58.332732 1799 log.go:172] (0xc00003a4d0) (0xc00024e0a0) Create stream\nI0526 00:29:58.332750 1799 log.go:172] (0xc00003a4d0) (0xc00024e0a0) Stream added, broadcasting: 5\nI0526 00:29:58.333990 1799 log.go:172] (0xc00003a4d0) Reply frame received for 5\nI0526 00:29:58.408899 1799 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0526 00:29:58.408959 1799 log.go:172] (0xc0001517c0) (3) Data frame handling\nI0526 00:29:58.408991 1799 log.go:172] (0xc0001517c0) (3) Data frame sent\nI0526 00:29:58.409020 1799 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0526 00:29:58.409045 1799 log.go:172] (0xc0001517c0) (3) Data frame handling\nI0526 00:29:58.409368 1799 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0526 00:29:58.409405 1799 log.go:172] (0xc00024e0a0) (5) Data frame handling\nI0526 00:29:58.409419 1799 log.go:172] (0xc00024e0a0) (5) Data frame sent\nI0526 00:29:58.409431 1799 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0526 00:29:58.409442 1799 log.go:172] (0xc00024e0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0526 00:29:58.411530 1799 log.go:172] (0xc00003a4d0) Data frame received for 1\nI0526 00:29:58.411565 1799 log.go:172] (0xc00042cdc0) (1) Data frame handling\nI0526 00:29:58.411583 1799 log.go:172] (0xc00042cdc0) (1) Data frame sent\nI0526 00:29:58.411615 1799 log.go:172] (0xc00003a4d0) (0xc00042cdc0) Stream removed, broadcasting: 1\nI0526 00:29:58.411666 1799 log.go:172] (0xc00003a4d0) Go away received\nI0526 00:29:58.411989 1799 log.go:172] (0xc00003a4d0) (0xc00042cdc0) Stream removed, broadcasting: 1\nI0526 00:29:58.412011 1799 log.go:172] (0xc00003a4d0) (0xc0001517c0) Stream removed, broadcasting: 3\nI0526 00:29:58.412021 1799 log.go:172] (0xc00003a4d0) (0xc00024e0a0) Stream removed, broadcasting: 5\n" May 26 00:29:58.417: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 26 00:29:58.417: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 26 00:29:58.438: INFO: Found 1 stateful pods, waiting for 3 May 26 00:30:08.444: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 26 00:30:08.444: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 26 00:30:08.444: 
INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 26 00:30:08.456: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5765 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 26 00:30:08.688: INFO: stderr: "I0526 00:30:08.593444 1820 log.go:172] (0xc00003a420) (0xc0004a2e60) Create stream\nI0526 00:30:08.593529 1820 log.go:172] (0xc00003a420) (0xc0004a2e60) Stream added, broadcasting: 1\nI0526 00:30:08.595388 1820 log.go:172] (0xc00003a420) Reply frame received for 1\nI0526 00:30:08.595439 1820 log.go:172] (0xc00003a420) (0xc00026f540) Create stream\nI0526 00:30:08.595455 1820 log.go:172] (0xc00003a420) (0xc00026f540) Stream added, broadcasting: 3\nI0526 00:30:08.596378 1820 log.go:172] (0xc00003a420) Reply frame received for 3\nI0526 00:30:08.596415 1820 log.go:172] (0xc00003a420) (0xc0007246e0) Create stream\nI0526 00:30:08.596425 1820 log.go:172] (0xc00003a420) (0xc0007246e0) Stream added, broadcasting: 5\nI0526 00:30:08.597552 1820 log.go:172] (0xc00003a420) Reply frame received for 5\nI0526 00:30:08.682334 1820 log.go:172] (0xc00003a420) Data frame received for 3\nI0526 00:30:08.682358 1820 log.go:172] (0xc00026f540) (3) Data frame handling\nI0526 00:30:08.682365 1820 log.go:172] (0xc00026f540) (3) Data frame sent\nI0526 00:30:08.682370 1820 log.go:172] (0xc00003a420) Data frame received for 3\nI0526 00:30:08.682375 1820 log.go:172] (0xc00026f540) (3) Data frame handling\nI0526 00:30:08.682414 1820 log.go:172] (0xc00003a420) Data frame received for 5\nI0526 00:30:08.682440 1820 log.go:172] (0xc0007246e0) (5) Data frame handling\nI0526 00:30:08.682516 1820 log.go:172] (0xc0007246e0) (5) Data frame sent\nI0526 00:30:08.682538 1820 log.go:172] (0xc00003a420) Data frame received for 5\nI0526 00:30:08.682556 1820 log.go:172] (0xc0007246e0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0526 00:30:08.683890 1820 log.go:172] (0xc00003a420) Data frame received for 1\nI0526 00:30:08.683938 1820 log.go:172] (0xc0004a2e60) (1) Data frame handling\nI0526 00:30:08.683967 1820 log.go:172] (0xc0004a2e60) (1) Data frame sent\nI0526 00:30:08.683995 1820 log.go:172] (0xc00003a420) (0xc0004a2e60) Stream removed, broadcasting: 1\nI0526 00:30:08.684064 1820 log.go:172] (0xc00003a420) Go away received\nI0526 00:30:08.684521 1820 log.go:172] (0xc00003a420) (0xc0004a2e60) Stream removed, broadcasting: 1\nI0526 00:30:08.684552 1820 log.go:172] (0xc00003a420) (0xc00026f540) Stream removed, broadcasting: 3\nI0526 00:30:08.684569 1820 log.go:172] (0xc00003a420) (0xc0007246e0) Stream removed, broadcasting: 5\n" May 26 00:30:08.689: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 26 00:30:08.689: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 26 00:30:08.689: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5765 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 26 00:30:08.937: INFO: stderr: "I0526 00:30:08.823761 1841 log.go:172] (0xc000a97600) (0xc00096a500) Create stream\nI0526 00:30:08.823963 1841 log.go:172] (0xc000a97600) (0xc00096a500) Stream added, 
broadcasting: 1\nI0526 00:30:08.827309 1841 log.go:172] (0xc000a97600) Reply frame received for 1\nI0526 00:30:08.827347 1841 log.go:172] (0xc000a97600) (0xc000252a00) Create stream\nI0526 00:30:08.827355 1841 log.go:172] (0xc000a97600) (0xc000252a00) Stream added, broadcasting: 3\nI0526 00:30:08.828304 1841 log.go:172] (0xc000a97600) Reply frame received for 3\nI0526 00:30:08.828335 1841 log.go:172] (0xc000a97600) (0xc00054cc80) Create stream\nI0526 00:30:08.828353 1841 log.go:172] (0xc000a97600) (0xc00054cc80) Stream added, broadcasting: 5\nI0526 00:30:08.829398 1841 log.go:172] (0xc000a97600) Reply frame received for 5\nI0526 00:30:08.899508 1841 log.go:172] (0xc000a97600) Data frame received for 5\nI0526 00:30:08.899533 1841 log.go:172] (0xc00054cc80) (5) Data frame handling\nI0526 00:30:08.899548 1841 log.go:172] (0xc00054cc80) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0526 00:30:08.928165 1841 log.go:172] (0xc000a97600) Data frame received for 3\nI0526 00:30:08.928204 1841 log.go:172] (0xc000252a00) (3) Data frame handling\nI0526 00:30:08.928224 1841 log.go:172] (0xc000252a00) (3) Data frame sent\nI0526 00:30:08.928872 1841 log.go:172] (0xc000a97600) Data frame received for 5\nI0526 00:30:08.928916 1841 log.go:172] (0xc00054cc80) (5) Data frame handling\nI0526 00:30:08.929364 1841 log.go:172] (0xc000a97600) Data frame received for 3\nI0526 00:30:08.929389 1841 log.go:172] (0xc000252a00) (3) Data frame handling\nI0526 00:30:08.930947 1841 log.go:172] (0xc000a97600) Data frame received for 1\nI0526 00:30:08.930971 1841 log.go:172] (0xc00096a500) (1) Data frame handling\nI0526 00:30:08.930982 1841 log.go:172] (0xc00096a500) (1) Data frame sent\nI0526 00:30:08.930996 1841 log.go:172] (0xc000a97600) (0xc00096a500) Stream removed, broadcasting: 1\nI0526 00:30:08.931081 1841 log.go:172] (0xc000a97600) Go away received\nI0526 00:30:08.931307 1841 log.go:172] (0xc000a97600) (0xc00096a500) Stream removed, broadcasting: 1\nI0526 00:30:08.931325 1841 log.go:172] (0xc000a97600) (0xc000252a00) Stream removed, broadcasting: 3\nI0526 00:30:08.931334 1841 log.go:172] (0xc000a97600) (0xc00054cc80) Stream removed, broadcasting: 5\n" May 26 00:30:08.937: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 26 00:30:08.937: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 26 00:30:08.937: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5765 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 26 00:30:09.202: INFO: stderr: "I0526 00:30:09.080336 1862 log.go:172] (0xc000aeb340) (0xc00099e320) Create stream\nI0526 00:30:09.080390 1862 log.go:172] (0xc000aeb340) (0xc00099e320) Stream added, broadcasting: 1\nI0526 00:30:09.086173 1862 log.go:172] (0xc000aeb340) Reply frame received for 1\nI0526 00:30:09.086252 1862 log.go:172] (0xc000aeb340) (0xc000852f00) Create stream\nI0526 00:30:09.086277 1862 log.go:172] (0xc000aeb340) (0xc000852f00) Stream added, broadcasting: 3\nI0526 00:30:09.087317 1862 log.go:172] (0xc000aeb340) Reply frame received for 3\nI0526 00:30:09.087362 1862 log.go:172] (0xc000aeb340) (0xc000738d20) Create stream\nI0526 00:30:09.087375 1862 log.go:172] (0xc000aeb340) (0xc000738d20) Stream added, broadcasting: 5\nI0526 00:30:09.088533 1862 log.go:172] (0xc000aeb340) Reply frame received for 5\nI0526 00:30:09.156184 1862 
log.go:172] (0xc000aeb340) Data frame received for 5\nI0526 00:30:09.156318 1862 log.go:172] (0xc000738d20) (5) Data frame handling\nI0526 00:30:09.156379 1862 log.go:172] (0xc000738d20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0526 00:30:09.192132 1862 log.go:172] (0xc000aeb340) Data frame received for 3\nI0526 00:30:09.192170 1862 log.go:172] (0xc000852f00) (3) Data frame handling\nI0526 00:30:09.192204 1862 log.go:172] (0xc000852f00) (3) Data frame sent\nI0526 00:30:09.192223 1862 log.go:172] (0xc000aeb340) Data frame received for 3\nI0526 00:30:09.192249 1862 log.go:172] (0xc000852f00) (3) Data frame handling\nI0526 00:30:09.192357 1862 log.go:172] (0xc000aeb340) Data frame received for 5\nI0526 00:30:09.192387 1862 log.go:172] (0xc000738d20) (5) Data frame handling\nI0526 00:30:09.195045 1862 log.go:172] (0xc000aeb340) Data frame received for 1\nI0526 00:30:09.195069 1862 log.go:172] (0xc00099e320) (1) Data frame handling\nI0526 00:30:09.195083 1862 log.go:172] (0xc00099e320) (1) Data frame sent\nI0526 00:30:09.195276 1862 log.go:172] (0xc000aeb340) (0xc00099e320) Stream removed, broadcasting: 1\nI0526 00:30:09.195565 1862 log.go:172] (0xc000aeb340) Go away received\nI0526 00:30:09.195773 1862 log.go:172] (0xc000aeb340) (0xc00099e320) Stream removed, broadcasting: 1\nI0526 00:30:09.195804 1862 log.go:172] (0xc000aeb340) (0xc000852f00) Stream removed, broadcasting: 3\nI0526 00:30:09.195824 1862 log.go:172] (0xc000aeb340) (0xc000738d20) Stream removed, broadcasting: 5\n" May 26 00:30:09.202: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 26 00:30:09.202: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 26 00:30:09.202: INFO: Waiting for statefulset status.replicas updated to 0 May 26 00:30:09.219: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 26 00:30:19.227: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 26 00:30:19.227: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 26 00:30:19.227: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 26 00:30:19.239: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999436s May 26 00:30:20.245: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994082684s May 26 00:30:21.250: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.988759703s May 26 00:30:22.256: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.983073463s May 26 00:30:23.261: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.977435065s May 26 00:30:24.278: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.972516614s May 26 00:30:25.283: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.955748452s May 26 00:30:26.301: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.95072379s May 26 00:30:27.307: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.931944872s May 26 00:30:28.313: INFO: Verifying statefulset ss doesn't scale past 3 for another 926.244741ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-5765 May 26 00:30:29.319: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec
--namespace=statefulset-5765 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 26 00:30:29.554: INFO: stderr: "I0526 00:30:29.458494 1883 log.go:172] (0xc0000e0840) (0xc0000eadc0) Create stream\nI0526 00:30:29.458546 1883 log.go:172] (0xc0000e0840) (0xc0000eadc0) Stream added, broadcasting: 1\nI0526 00:30:29.461435 1883 log.go:172] (0xc0000e0840) Reply frame received for 1\nI0526 00:30:29.461474 1883 log.go:172] (0xc0000e0840) (0xc00014f7c0) Create stream\nI0526 00:30:29.461486 1883 log.go:172] (0xc0000e0840) (0xc00014f7c0) Stream added, broadcasting: 3\nI0526 00:30:29.463040 1883 log.go:172] (0xc0000e0840) Reply frame received for 3\nI0526 00:30:29.463113 1883 log.go:172] (0xc0000e0840) (0xc0005541e0) Create stream\nI0526 00:30:29.463141 1883 log.go:172] (0xc0000e0840) (0xc0005541e0) Stream added, broadcasting: 5\nI0526 00:30:29.464309 1883 log.go:172] (0xc0000e0840) Reply frame received for 5\nI0526 00:30:29.546486 1883 log.go:172] (0xc0000e0840) Data frame received for 3\nI0526 00:30:29.546528 1883 log.go:172] (0xc00014f7c0) (3) Data frame handling\nI0526 00:30:29.546542 1883 log.go:172] (0xc00014f7c0) (3) Data frame sent\nI0526 00:30:29.546560 1883 log.go:172] (0xc0000e0840) Data frame received for 3\nI0526 00:30:29.546570 1883 log.go:172] (0xc00014f7c0) (3) Data frame handling\nI0526 00:30:29.546620 1883 log.go:172] (0xc0000e0840) Data frame received for 5\nI0526 00:30:29.546674 1883 log.go:172] (0xc0005541e0) (5) Data frame handling\nI0526 00:30:29.546693 1883 log.go:172] (0xc0005541e0) (5) Data frame sent\nI0526 00:30:29.546709 1883 log.go:172] (0xc0000e0840) Data frame received for 5\nI0526 00:30:29.546723 1883 log.go:172] (0xc0005541e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0526 00:30:29.548481 1883 log.go:172] (0xc0000e0840) Data frame received for 1\nI0526 00:30:29.548503 1883 log.go:172] (0xc0000eadc0) (1) Data frame handling\nI0526 00:30:29.548520 1883 log.go:172] (0xc0000eadc0) (1) Data frame sent\nI0526 00:30:29.548534 1883 log.go:172] (0xc0000e0840) (0xc0000eadc0) Stream removed, broadcasting: 1\nI0526 00:30:29.548550 1883 log.go:172] (0xc0000e0840) Go away received\nI0526 00:30:29.549054 1883 log.go:172] (0xc0000e0840) (0xc0000eadc0) Stream removed, broadcasting: 1\nI0526 00:30:29.549074 1883 log.go:172] (0xc0000e0840) (0xc00014f7c0) Stream removed, broadcasting: 3\nI0526 00:30:29.549086 1883 log.go:172] (0xc0000e0840) (0xc0005541e0) Stream removed, broadcasting: 5\n" May 26 00:30:29.554: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 26 00:30:29.554: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 26 00:30:29.554: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5765 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 26 00:30:29.757: INFO: stderr: "I0526 00:30:29.697811 1904 log.go:172] (0xc000c5ee70) (0xc00033c820) Create stream\nI0526 00:30:29.697877 1904 log.go:172] (0xc000c5ee70) (0xc00033c820) Stream added, broadcasting: 1\nI0526 00:30:29.700628 1904 log.go:172] (0xc000c5ee70) Reply frame received for 1\nI0526 00:30:29.700673 1904 log.go:172] (0xc000c5ee70) (0xc00012e0a0) Create stream\nI0526 00:30:29.700688 1904 log.go:172] (0xc000c5ee70) (0xc00012e0a0) Stream added, broadcasting: 3\nI0526 00:30:29.702204 1904 log.go:172] (0xc000c5ee70) Reply 
frame received for 3\nI0526 00:30:29.702259 1904 log.go:172] (0xc000c5ee70) (0xc00033ce60) Create stream\nI0526 00:30:29.702281 1904 log.go:172] (0xc000c5ee70) (0xc00033ce60) Stream added, broadcasting: 5\nI0526 00:30:29.703414 1904 log.go:172] (0xc000c5ee70) Reply frame received for 5\nI0526 00:30:29.750687 1904 log.go:172] (0xc000c5ee70) Data frame received for 3\nI0526 00:30:29.750901 1904 log.go:172] (0xc00012e0a0) (3) Data frame handling\nI0526 00:30:29.750923 1904 log.go:172] (0xc00012e0a0) (3) Data frame sent\nI0526 00:30:29.750933 1904 log.go:172] (0xc000c5ee70) Data frame received for 3\nI0526 00:30:29.750941 1904 log.go:172] (0xc00012e0a0) (3) Data frame handling\nI0526 00:30:29.750970 1904 log.go:172] (0xc000c5ee70) Data frame received for 5\nI0526 00:30:29.750980 1904 log.go:172] (0xc00033ce60) (5) Data frame handling\nI0526 00:30:29.750990 1904 log.go:172] (0xc00033ce60) (5) Data frame sent\nI0526 00:30:29.750999 1904 log.go:172] (0xc000c5ee70) Data frame received for 5\nI0526 00:30:29.751007 1904 log.go:172] (0xc00033ce60) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0526 00:30:29.752438 1904 log.go:172] (0xc000c5ee70) Data frame received for 1\nI0526 00:30:29.752468 1904 log.go:172] (0xc00033c820) (1) Data frame handling\nI0526 00:30:29.752490 1904 log.go:172] (0xc00033c820) (1) Data frame sent\nI0526 00:30:29.752508 1904 log.go:172] (0xc000c5ee70) (0xc00033c820) Stream removed, broadcasting: 1\nI0526 00:30:29.752529 1904 log.go:172] (0xc000c5ee70) Go away received\nI0526 00:30:29.753024 1904 log.go:172] (0xc000c5ee70) (0xc00033c820) Stream removed, broadcasting: 1\nI0526 00:30:29.753049 1904 log.go:172] (0xc000c5ee70) (0xc00012e0a0) Stream removed, broadcasting: 3\nI0526 00:30:29.753070 1904 log.go:172] (0xc000c5ee70) (0xc00033ce60) Stream removed, broadcasting: 5\n" May 26 00:30:29.758: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 26 00:30:29.758: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 26 00:30:29.758: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5765 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 26 00:30:29.981: INFO: stderr: "I0526 00:30:29.906786 1924 log.go:172] (0xc0006a48f0) (0xc000371a40) Create stream\nI0526 00:30:29.906881 1924 log.go:172] (0xc0006a48f0) (0xc000371a40) Stream added, broadcasting: 1\nI0526 00:30:29.909595 1924 log.go:172] (0xc0006a48f0) Reply frame received for 1\nI0526 00:30:29.909641 1924 log.go:172] (0xc0006a48f0) (0xc0000dce60) Create stream\nI0526 00:30:29.909651 1924 log.go:172] (0xc0006a48f0) (0xc0000dce60) Stream added, broadcasting: 3\nI0526 00:30:29.910733 1924 log.go:172] (0xc0006a48f0) Reply frame received for 3\nI0526 00:30:29.910767 1924 log.go:172] (0xc0006a48f0) (0xc00013b720) Create stream\nI0526 00:30:29.910781 1924 log.go:172] (0xc0006a48f0) (0xc00013b720) Stream added, broadcasting: 5\nI0526 00:30:29.911653 1924 log.go:172] (0xc0006a48f0) Reply frame received for 5\nI0526 00:30:29.973458 1924 log.go:172] (0xc0006a48f0) Data frame received for 3\nI0526 00:30:29.973493 1924 log.go:172] (0xc0000dce60) (3) Data frame handling\nI0526 00:30:29.973522 1924 log.go:172] (0xc0000dce60) (3) Data frame sent\nI0526 00:30:29.973538 1924 log.go:172] (0xc0006a48f0) Data frame received for 3\nI0526 00:30:29.973548 1924 log.go:172] (0xc0000dce60) 
(3) Data frame handling\nI0526 00:30:29.973567 1924 log.go:172] (0xc0006a48f0) Data frame received for 5\nI0526 00:30:29.973577 1924 log.go:172] (0xc00013b720) (5) Data frame handling\nI0526 00:30:29.973588 1924 log.go:172] (0xc00013b720) (5) Data frame sent\nI0526 00:30:29.973603 1924 log.go:172] (0xc0006a48f0) Data frame received for 5\nI0526 00:30:29.973616 1924 log.go:172] (0xc00013b720) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0526 00:30:29.975423 1924 log.go:172] (0xc0006a48f0) Data frame received for 1\nI0526 00:30:29.975451 1924 log.go:172] (0xc000371a40) (1) Data frame handling\nI0526 00:30:29.975472 1924 log.go:172] (0xc000371a40) (1) Data frame sent\nI0526 00:30:29.975487 1924 log.go:172] (0xc0006a48f0) (0xc000371a40) Stream removed, broadcasting: 1\nI0526 00:30:29.975505 1924 log.go:172] (0xc0006a48f0) Go away received\nI0526 00:30:29.975814 1924 log.go:172] (0xc0006a48f0) (0xc000371a40) Stream removed, broadcasting: 1\nI0526 00:30:29.975831 1924 log.go:172] (0xc0006a48f0) (0xc0000dce60) Stream removed, broadcasting: 3\nI0526 00:30:29.975839 1924 log.go:172] (0xc0006a48f0) (0xc00013b720) Stream removed, broadcasting: 5\n" May 26 00:30:29.981: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 26 00:30:29.981: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 26 00:30:29.981: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 26 00:30:49.999: INFO: Deleting all statefulset in ns statefulset-5765 May 26 00:30:50.003: INFO: Scaling statefulset ss to 0 May 26 00:30:50.014: INFO: Waiting for statefulset status.replicas updated to 0 May 26 00:30:50.016: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:30:50.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5765" for this suite. 
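The mv commands running through this test are how it toggles pod health: each ss pod serves /usr/local/apache2/htdocs/index.html from httpd with an HTTP readiness probe pointed at that file, so moving the file into /tmp fails the probe and marks the pod un-ready, and moving it back restores readiness. A minimal sketch of a StatefulSet with that shape (name, image, and probe details are illustrative, not taken from this run):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test                  # assumes a headless Service of this name exists
  replicas: 3
  podManagementPolicy: OrderedReady  # the default: pods are created/deleted one at a time, in order
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: httpd:2.4             # document root is /usr/local/apache2/htdocs
        readinessProbe:
          httpGet:
            path: /index.html        # removing this file fails the probe
            port: 80

With OrderedReady management, scale-up proceeds ss-0, ss-1, ss-2 and scale-down runs in reverse, and neither direction advances past an un-ready pod; that is exactly what the repeated "doesn't scale past N" polling above verifies.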
• [SLOW TEST:85.158 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":288,"completed":162,"skipped":2783,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:30:50.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 26 00:30:54.333: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:30:54.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3449" for this suite. 
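The termination-message check that just passed is driven entirely by per-container fields in the pod spec. A hedged sketch of a pod that writes its termination message at a non-default path as a non-root user (name, image, and UID are assumptions, not from the log):

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.31
    securityContext:
      runAsUser: 1000                 # non-root, as in the [LinuxOnly] permutation
    # write the message and exit; the kubelet reads the file when the container terminates
    command: ["/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log   # non-default (default is /dev/termination-log)

After the container exits, the file's content surfaces in status.containerStatuses[].state.terminated.message, which is what the Expected: &{DONE} comparison above reads.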
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":288,"completed":163,"skipped":2834,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:30:54.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium May 26 00:30:54.679: INFO: Waiting up to 5m0s for pod "pod-4c293213-0377-4247-830e-3db219baf86d" in namespace "emptydir-7088" to be "Succeeded or Failed" May 26 00:30:54.683: INFO: Pod "pod-4c293213-0377-4247-830e-3db219baf86d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.787556ms May 26 00:30:56.750: INFO: Pod "pod-4c293213-0377-4247-830e-3db219baf86d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071601467s May 26 00:30:58.755: INFO: Pod "pod-4c293213-0377-4247-830e-3db219baf86d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076104158s STEP: Saw pod success May 26 00:30:58.755: INFO: Pod "pod-4c293213-0377-4247-830e-3db219baf86d" satisfied condition "Succeeded or Failed" May 26 00:30:58.759: INFO: Trying to get logs from node latest-worker2 pod pod-4c293213-0377-4247-830e-3db219baf86d container test-container: STEP: delete the pod May 26 00:30:58.820: INFO: Waiting for pod pod-4c293213-0377-4247-830e-3db219baf86d to disappear May 26 00:30:58.833: INFO: Pod pod-4c293213-0377-4247-830e-3db219baf86d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:30:58.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7088" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":164,"skipped":2847,"failed":0} SSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:30:58.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 26 00:30:58.933: INFO: Waiting up to 5m0s for pod "downward-api-ab303910-7e6b-4f6b-8205-616e4ba1d06b" in namespace "downward-api-7986" to be "Succeeded or Failed" May 26 00:30:58.996: INFO: Pod "downward-api-ab303910-7e6b-4f6b-8205-616e4ba1d06b": Phase="Pending", Reason="", readiness=false. Elapsed: 63.275542ms May 26 00:31:01.000: INFO: Pod "downward-api-ab303910-7e6b-4f6b-8205-616e4ba1d06b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067412067s May 26 00:31:03.004: INFO: Pod "downward-api-ab303910-7e6b-4f6b-8205-616e4ba1d06b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.071898161s STEP: Saw pod success May 26 00:31:03.005: INFO: Pod "downward-api-ab303910-7e6b-4f6b-8205-616e4ba1d06b" satisfied condition "Succeeded or Failed" May 26 00:31:03.007: INFO: Trying to get logs from node latest-worker pod downward-api-ab303910-7e6b-4f6b-8205-616e4ba1d06b container dapi-container: STEP: delete the pod May 26 00:31:03.136: INFO: Waiting for pod downward-api-ab303910-7e6b-4f6b-8205-616e4ba1d06b to disappear May 26 00:31:03.143: INFO: Pod downward-api-ab303910-7e6b-4f6b-8205-616e4ba1d06b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:31:03.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7986" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":288,"completed":165,"skipped":2850,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:31:03.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 26 00:31:03.263: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1053 /api/v1/namespaces/watch-1053/configmaps/e2e-watch-test-label-changed 286b6315-ced8-482a-af79-061d969911a5 7689069 0 2020-05-26 00:31:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-26 00:31:03 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 26 00:31:03.263: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1053 /api/v1/namespaces/watch-1053/configmaps/e2e-watch-test-label-changed 286b6315-ced8-482a-af79-061d969911a5 7689070 0 2020-05-26 00:31:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-26 00:31:03 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 26 00:31:03.263: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1053 /api/v1/namespaces/watch-1053/configmaps/e2e-watch-test-label-changed 286b6315-ced8-482a-af79-061d969911a5 7689071 0 2020-05-26 00:31:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-26 00:31:03 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 26 00:31:13.297: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1053 /api/v1/namespaces/watch-1053/configmaps/e2e-watch-test-label-changed 286b6315-ced8-482a-af79-061d969911a5 7689118 0 2020-05-26 00:31:03 +0000 
UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-26 00:31:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 26 00:31:13.298: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1053 /api/v1/namespaces/watch-1053/configmaps/e2e-watch-test-label-changed 286b6315-ced8-482a-af79-061d969911a5 7689119 0 2020-05-26 00:31:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-26 00:31:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} May 26 00:31:13.298: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1053 /api/v1/namespaces/watch-1053/configmaps/e2e-watch-test-label-changed 286b6315-ced8-482a-af79-061d969911a5 7689120 0 2020-05-26 00:31:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-26 00:31:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:31:13.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1053" for this suite. • [SLOW TEST:10.153 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":288,"completed":166,"skipped":2855,"failed":0} [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:31:13.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 26 00:31:13.371: INFO: Waiting up to 5m0s for pod "downward-api-d590723c-04d0-4116-bcac-15336ec506cd" in namespace "downward-api-7575" to be "Succeeded or Failed" May 26 00:31:13.408: INFO: Pod "downward-api-d590723c-04d0-4116-bcac-15336ec506cd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 37.507195ms May 26 00:31:15.412: INFO: Pod "downward-api-d590723c-04d0-4116-bcac-15336ec506cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041232511s May 26 00:31:17.416: INFO: Pod "downward-api-d590723c-04d0-4116-bcac-15336ec506cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044691147s STEP: Saw pod success May 26 00:31:17.416: INFO: Pod "downward-api-d590723c-04d0-4116-bcac-15336ec506cd" satisfied condition "Succeeded or Failed" May 26 00:31:17.419: INFO: Trying to get logs from node latest-worker2 pod downward-api-d590723c-04d0-4116-bcac-15336ec506cd container dapi-container: STEP: delete the pod May 26 00:31:17.462: INFO: Waiting for pod downward-api-d590723c-04d0-4116-bcac-15336ec506cd to disappear May 26 00:31:17.467: INFO: Pod downward-api-d590723c-04d0-4116-bcac-15336ec506cd no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:31:17.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7575" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":288,"completed":167,"skipped":2855,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:31:17.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-431f826c-8a44-4c4e-ab79-59bb15224cef STEP: Creating a pod to test consume secrets May 26 00:31:17.705: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e37d6773-56d7-47ab-b079-6efa85dc4a3c" in namespace "projected-7582" to be "Succeeded or Failed" May 26 00:31:17.865: INFO: Pod "pod-projected-secrets-e37d6773-56d7-47ab-b079-6efa85dc4a3c": Phase="Pending", Reason="", readiness=false. Elapsed: 159.305712ms May 26 00:31:19.869: INFO: Pod "pod-projected-secrets-e37d6773-56d7-47ab-b079-6efa85dc4a3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.163558529s May 26 00:31:21.873: INFO: Pod "pod-projected-secrets-e37d6773-56d7-47ab-b079-6efa85dc4a3c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.16750306s STEP: Saw pod success May 26 00:31:21.873: INFO: Pod "pod-projected-secrets-e37d6773-56d7-47ab-b079-6efa85dc4a3c" satisfied condition "Succeeded or Failed" May 26 00:31:21.876: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-e37d6773-56d7-47ab-b079-6efa85dc4a3c container projected-secret-volume-test: STEP: delete the pod May 26 00:31:21.906: INFO: Waiting for pod pod-projected-secrets-e37d6773-56d7-47ab-b079-6efa85dc4a3c to disappear May 26 00:31:21.910: INFO: Pod pod-projected-secrets-e37d6773-56d7-47ab-b079-6efa85dc4a3c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:31:21.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7582" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":168,"skipped":2873,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:31:21.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 26 00:31:22.015: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 26 00:31:22.100: INFO: Waiting for terminating namespaces to be deleted... 
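Before the scheduling test's node inventory below, note what the projected-secret test above (completed 168/288) actually exercised: a Secret mounted through a projected volume whose defaultMode sets the permission bits on every projected file. A hedged sketch (secret name, mode, and paths are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.31
    command: ["/bin/sh", "-c", "ls -l /etc/projected"]   # listed file modes should reflect defaultMode
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0440               # applied to each projected file unless overridden per item
      sources:
      - secret:
          name: projected-secret-test # assumed to already exist in the namespace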
May 26 00:31:22.102: INFO: Logging pods the apiserver thinks are on node latest-worker before test May 26 00:31:22.108: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container status recorded) May 26 00:31:22.108: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 26 00:31:22.108: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container status recorded) May 26 00:31:22.108: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 26 00:31:22.108: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 26 00:31:22.108: INFO: Container kindnet-cni ready: true, restart count 0 May 26 00:31:22.108: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 26 00:31:22.108: INFO: Container kube-proxy ready: true, restart count 0 May 26 00:31:22.108: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test May 26 00:31:22.112: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container status recorded) May 26 00:31:22.112: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 26 00:31:22.112: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container status recorded) May 26 00:31:22.112: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 26 00:31:22.112: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 26 00:31:22.112: INFO: Container kindnet-cni ready: true, restart count 0 May 26 00:31:22.112: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 26 00:31:22.112: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16126ca6d0675eb5], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.16126ca6d1dfd4f4], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:31:23.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5948" for this suite.
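The two FailedScheduling events above are produced by a pod whose nodeSelector names a label no node carries, so the scheduler can place it nowhere. Illustratively (label key/value and image are assumptions, not from the log):

apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  # no node in the cluster has this label, so scheduling fails with
  # "0/3 nodes are available: 3 node(s) didn't match node selector."
  nodeSelector:
    unknown-label: nonempty
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2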
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":288,"completed":169,"skipped":2899,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:31:23.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0526 00:31:33.281535 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 26 00:31:33.281: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:31:33.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9671" for this suite. 
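The garbage-collector test above relies on owner references: every pod a ReplicationController creates carries a metadata.ownerReferences entry pointing back at the RC, so deleting the RC without orphaning lets the garbage collector remove the dependent pods, which is the "wait for all pods to be garbage collected" step. A sketch of such an owner (names and image are illustrative):

apiVersion: v1
kind: ReplicationController
metadata:
  name: simpletest-rc
spec:
  replicas: 2
  selector:
    name: simpletest        # pods created from the template below are owned by this RC
  template:
    metadata:
      labels:
        name: simpletest
    spec:
      containers:
      - name: nginx
        image: nginx:1.17

Deleting the RC with cascading deletion (an API delete with propagationPolicy: Background or Foreground) removes the owned pods; propagationPolicy: Orphan would instead strip the ownerReferences and leave the pods running.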
• [SLOW TEST:10.146 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":288,"completed":170,"skipped":2941,"failed":0} SSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:31:33.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-24 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-24 to expose endpoints map[] May 26 00:31:33.445: INFO: Get endpoints failed (38.528673ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 26 00:31:34.449: INFO: successfully validated that service endpoint-test2 in namespace services-24 exposes endpoints map[] (1.042745643s elapsed) STEP: Creating pod pod1 in namespace services-24 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-24 to expose endpoints map[pod1:[80]] May 26 00:31:37.576: INFO: successfully validated that service endpoint-test2 in namespace services-24 exposes endpoints map[pod1:[80]] (3.118128311s elapsed) STEP: Creating pod pod2 in namespace services-24 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-24 to expose endpoints map[pod1:[80] pod2:[80]] May 26 00:31:40.717: INFO: successfully validated that service endpoint-test2 in namespace services-24 exposes endpoints map[pod1:[80] pod2:[80]] (3.137045999s elapsed) STEP: Deleting pod pod1 in namespace services-24 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-24 to expose endpoints map[pod2:[80]] May 26 00:31:41.804: INFO: successfully validated that service endpoint-test2 in namespace services-24 exposes endpoints map[pod2:[80]] (1.081829246s elapsed) STEP: Deleting pod pod2 in namespace services-24 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-24 to expose endpoints map[] May 26 00:31:42.856: INFO: successfully validated that service endpoint-test2 in namespace services-24 exposes endpoints map[] (1.047328936s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:31:42.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-24" for this suite. 
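The endpoint bookkeeping above is purely label-driven: the Service selects pods by label, and the endpoints controller adds or removes a pod's IP and port as matching pods are created, become ready, or are deleted, which is why each create/delete is followed by a new expected endpoints map. An illustrative pairing (names, labels, and port are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector:
    name: pod1             # endpoints track ready pods carrying this label
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    name: pod1             # matches the selector, so pod1:[80] appears in the endpoints map
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2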
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:9.614 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":288,"completed":171,"skipped":2944,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:31:42.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 26 00:31:43.864: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 26 00:31:46.177: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726049903, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726049903, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726049903, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726049903, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 00:31:49.205: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 26 00:31:49.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 
00:31:50.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-6984" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.620 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":288,"completed":172,"skipped":2952,"failed":0} [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:31:50.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod May 26 00:33:51.190: INFO: Successfully updated pod "var-expansion-2367fad9-f505-4d15-9180-a8de5c0b5059" STEP: waiting for pod running STEP: deleting the pod gracefully May 26 00:33:55.259: INFO: Deleting pod "var-expansion-2367fad9-f505-4d15-9180-a8de5c0b5059" in namespace "var-expansion-9874" May 26 00:33:55.264: INFO: Wait up to 5m0s for pod "var-expansion-2367fad9-f505-4d15-9180-a8de5c0b5059" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:34:35.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9874" for this suite. 
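The variable-expansion test above hinges on subPathExpr: the mount's subpath is expanded from a container environment variable at pod start, so an expression referencing an unresolvable variable leaves the pod stuck until the spec is modified, which is the two-minute gap visible between "creating the pod with failed condition" and "updating the pod". A hedged sketch of the working mechanism (names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.31
    command: ["/bin/sh", "-c", "ls /subpath-mount"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: workdir
      mountPath: /subpath-mount
      subPathExpr: $(POD_NAME)   # expanded from the container's env vars when the volume is mounted
  volumes:
  - name: workdir
    emptyDir: {}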
• [SLOW TEST:164.772 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":288,"completed":173,"skipped":2952,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:34:35.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating all guestbook components May 26 00:34:35.367: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend May 26 00:34:35.367: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-919' May 26 00:34:37.313: INFO: stderr: "" May 26 00:34:37.313: INFO: stdout: "service/agnhost-slave created\n" May 26 00:34:37.313: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend May 26 00:34:37.313: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-919' May 26 00:34:39.847: INFO: stderr: "" May 26 00:34:39.847: INFO: stdout: "service/agnhost-master created\n" May 26 00:34:39.847: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 26 00:34:39.847: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-919' May 26 00:34:41.441: INFO: stderr: "" May 26 00:34:41.441: INFO: stdout: "service/frontend created\n" May 26 00:34:41.442: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 May 26 00:34:41.442: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-919' May 26 00:34:43.104: INFO: stderr: "" May 26 00:34:43.104: INFO: stdout: "deployment.apps/frontend created\n" May 26 00:34:43.104: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 26 00:34:43.104: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-919' May 26 00:34:44.807: INFO: stderr: "" May 26 00:34:44.807: INFO: stdout: "deployment.apps/agnhost-master created\n" May 26 00:34:44.807: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 26 00:34:44.807: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-919' May 26 00:34:48.417: INFO: stderr: "" May 26 00:34:48.417: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app May 26 00:34:48.417: INFO: Waiting for all frontend pods to be Running. May 26 00:34:53.467: INFO: Waiting for frontend to serve content. May 26 00:34:54.507: INFO: Trying to add a new entry to the guestbook. May 26 00:34:54.520: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 26 00:34:54.529: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-919' May 26 00:34:54.665: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 26 00:34:54.665: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources May 26 00:34:54.665: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-919' May 26 00:34:54.869: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 26 00:34:54.869: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 26 00:34:54.869: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-919' May 26 00:34:55.084: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 26 00:34:55.084: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 26 00:34:55.084: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-919' May 26 00:34:55.211: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 26 00:34:55.211: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 26 00:34:55.211: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-919' May 26 00:34:55.866: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 26 00:34:55.866: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 26 00:34:55.866: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-919' May 26 00:34:56.088: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 26 00:34:56.088: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:34:56.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-919" for this suite. 
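The repeated warning above is expected: --grace-period=0 --force makes kubectl return before the API server confirms termination. The create/validate/delete cycle can be reproduced roughly as follows, assuming the manifests printed earlier are saved under a local guestbook/ directory (a hypothetical path):

# Create every guestbook component in a throwaway namespace.
kubectl create namespace guestbook-demo
kubectl create -f guestbook/ --namespace=guestbook-demo

# Block until the frontend deployment reports its replicas available.
kubectl rollout status deployment/frontend --namespace=guestbook-demo --timeout=120s

# Force-delete; skipping graceful termination triggers the same warning seen above.
kubectl delete -f guestbook/ --namespace=guestbook-demo --grace-period=0 --force
kubectl delete namespace guestbook-demo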
• [SLOW TEST:21.374 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":288,"completed":174,"skipped":2956,"failed":0} [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:34:56.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1393 STEP: creating a pod May 26 00:34:57.244: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 --namespace=kubectl-5824 -- logs-generator --log-lines-total 100 --run-duration 20s' May 26 00:34:57.500: INFO: stderr: "" May 26 00:34:57.500: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. May 26 00:34:57.500: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 26 00:34:57.500: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-5824" to be "running and ready, or succeeded" May 26 00:34:57.566: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 65.926478ms May 26 00:34:59.571: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070416253s May 26 00:35:01.574: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074242735s May 26 00:35:03.578: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 6.078171388s May 26 00:35:03.578: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 26 00:35:03.578: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true.
Pods: [logs-generator] STEP: checking for matching strings May 26 00:35:03.579: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5824' May 26 00:35:03.710: INFO: stderr: "" May 26 00:35:03.710: INFO: stdout: "I0526 00:35:00.736350 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/z5cr 203\nI0526 00:35:00.936580 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/nf5 365\nI0526 00:35:01.136478 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/f87 441\nI0526 00:35:01.339102 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/dwxq 216\nI0526 00:35:01.536542 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/jk8k 530\nI0526 00:35:01.736559 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/kgwb 348\nI0526 00:35:01.936576 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/tts 328\nI0526 00:35:02.136532 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/p98 485\nI0526 00:35:02.336525 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/pbz 414\nI0526 00:35:02.536522 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/qdl 513\nI0526 00:35:02.736543 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/8wn 486\nI0526 00:35:02.936635 1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/mvl 422\nI0526 00:35:03.136497 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/m5s 501\nI0526 00:35:03.336570 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/ns/pods/jqn 371\nI0526 00:35:03.536541 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/kube-system/pods/dplp 421\n" STEP: limiting log lines May 26 00:35:03.710: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5824 --tail=1' May 26 00:35:03.822: INFO: stderr: "" May 26 00:35:03.822: INFO: stdout: "I0526 00:35:03.736503 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/fcgr 324\n" May 26 00:35:03.823: INFO: got output "I0526 00:35:03.736503 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/fcgr 324\n" STEP: limiting log bytes May 26 00:35:03.823: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5824 --limit-bytes=1' May 26 00:35:03.937: INFO: stderr: "" May 26 00:35:03.937: INFO: stdout: "I" May 26 00:35:03.937: INFO: got output "I" STEP: exposing timestamps May 26 00:35:03.937: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5824 --tail=1 --timestamps' May 26 00:35:04.046: INFO: stderr: "" May 26 00:35:04.046: INFO: stdout: "2020-05-26T00:35:03.93666684Z I0526 00:35:03.936525 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/default/pods/cf2s 247\n" May 26 00:35:04.046: INFO: got output "2020-05-26T00:35:03.93666684Z I0526 00:35:03.936525 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/default/pods/cf2s 247\n" STEP: restricting to a time range May 26 00:35:06.547: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5824 --since=1s' May 26 00:35:06.646: INFO: stderr: "" May 26 00:35:06.646: INFO: stdout: "I0526 00:35:05.736465 1
logs_generator.go:76] 25 PUT /api/v1/namespaces/default/pods/fkvk 210\nI0526 00:35:05.936511 1 logs_generator.go:76] 26 GET /api/v1/namespaces/default/pods/9jj 535\nI0526 00:35:06.136578 1 logs_generator.go:76] 27 POST /api/v1/namespaces/ns/pods/h5b 307\nI0526 00:35:06.336523 1 logs_generator.go:76] 28 PUT /api/v1/namespaces/ns/pods/qrs2 511\nI0526 00:35:06.536533 1 logs_generator.go:76] 29 POST /api/v1/namespaces/ns/pods/dnr 420\n" May 26 00:35:06.647: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5824 --since=24h' May 26 00:35:06.749: INFO: stderr: "" May 26 00:35:06.749: INFO: stdout: "I0526 00:35:00.736350 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/z5cr 203\nI0526 00:35:00.936580 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/nf5 365\nI0526 00:35:01.136478 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/f87 441\nI0526 00:35:01.339102 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/dwxq 216\nI0526 00:35:01.536542 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/jk8k 530\nI0526 00:35:01.736559 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/kgwb 348\nI0526 00:35:01.936576 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/tts 328\nI0526 00:35:02.136532 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/p98 485\nI0526 00:35:02.336525 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/pbz 414\nI0526 00:35:02.536522 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/qdl 513\nI0526 00:35:02.736543 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/8wn 486\nI0526 00:35:02.936635 1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/mvl 422\nI0526 00:35:03.136497 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/m5s 501\nI0526 00:35:03.336570 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/ns/pods/jqn 371\nI0526 00:35:03.536541 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/kube-system/pods/dplp 421\nI0526 00:35:03.736503 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/fcgr 324\nI0526 00:35:03.936525 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/default/pods/cf2s 247\nI0526 00:35:04.136533 1 logs_generator.go:76] 17 POST /api/v1/namespaces/ns/pods/x6q 369\nI0526 00:35:04.336581 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/rx4 381\nI0526 00:35:04.536507 1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/8qt 565\nI0526 00:35:04.736622 1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/jfq 546\nI0526 00:35:04.936533 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/nqz 337\nI0526 00:35:05.136584 1 logs_generator.go:76] 22 GET /api/v1/namespaces/kube-system/pods/xbs 273\nI0526 00:35:05.336521 1 logs_generator.go:76] 23 POST /api/v1/namespaces/default/pods/4zfq 430\nI0526 00:35:05.536541 1 logs_generator.go:76] 24 GET /api/v1/namespaces/kube-system/pods/r2b6 461\nI0526 00:35:05.736465 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/default/pods/fkvk 210\nI0526 00:35:05.936511 1 logs_generator.go:76] 26 GET /api/v1/namespaces/default/pods/9jj 535\nI0526 00:35:06.136578 1 logs_generator.go:76] 27 POST /api/v1/namespaces/ns/pods/h5b 307\nI0526 00:35:06.336523 1 logs_generator.go:76] 28 PUT /api/v1/namespaces/ns/pods/qrs2 511\nI0526 00:35:06.536533 1 logs_generator.go:76] 29 POST /api/v1/namespaces/ns/pods/dnr 420\nI0526 
00:35:06.736547 1 logs_generator.go:76] 30 PUT /api/v1/namespaces/ns/pods/9cc 270\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 May 26 00:35:06.750: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-5824' May 26 00:35:09.697: INFO: stderr: "" May 26 00:35:09.697: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:35:09.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5824" for this suite. • [SLOW TEST:13.034 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":288,"completed":175,"skipped":2956,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:35:09.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1311 STEP: creating the pod May 26 00:35:09.764: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9656' May 26 00:35:11.558: INFO: stderr: "" May 26 00:35:11.558: INFO: stdout: "pod/pause created\n" May 26 00:35:11.558: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 26 00:35:11.558: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9656" to be "running and ready" May 26 00:35:11.566: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 7.415169ms May 26 00:35:13.570: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011443164s May 26 00:35:15.574: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.015663717s May 26 00:35:15.574: INFO: Pod "pause" satisfied condition "running and ready" May 26 00:35:15.574: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod May 26 00:35:15.574: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-9656' May 26 00:35:15.689: INFO: stderr: "" May 26 00:35:15.689: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 26 00:35:15.690: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9656' May 26 00:35:15.789: INFO: stderr: "" May 26 00:35:15.789: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod May 26 00:35:15.789: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-9656' May 26 00:35:15.909: INFO: stderr: "" May 26 00:35:15.909: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 26 00:35:15.909: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9656' May 26 00:35:16.059: INFO: stderr: "" May 26 00:35:16.059: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1318 STEP: using delete to clean up resources May 26 00:35:16.059: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9656' May 26 00:35:16.187: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 26 00:35:16.187: INFO: stdout: "pod \"pause\" force deleted\n" May 26 00:35:16.187: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-9656' May 26 00:35:16.294: INFO: stderr: "No resources found in kubectl-9656 namespace.\n" May 26 00:35:16.294: INFO: stdout: "" May 26 00:35:16.295: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-9656 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 26 00:35:16.400: INFO: stderr: "" May 26 00:35:16.400: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:35:16.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9656" for this suite. 
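The label round-trip above is the standard kubectl idiom: key=value adds or updates a label, a trailing dash removes it, and -L prints the label as an extra output column. Standalone, against the same hypothetical pod name:

kubectl label pods pause testing-label=testing-label-value   # add the label
kubectl get pod pause -L testing-label                       # TESTING-LABEL column shows the value
kubectl label pods pause testing-label-                      # trailing '-' removes it
kubectl get pod pause -L testing-label                       # column is now empty

(To change a label that already has a value, kubectl label requires --overwrite.)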
• [SLOW TEST:6.702 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":288,"completed":176,"skipped":2980,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:35:16.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name projected-secret-test-0baaec7e-2858-4ff8-9448-221f4bf7125e STEP: Creating a pod to test consume secrets May 26 00:35:16.736: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8665d642-6842-426c-8d04-a0d94d7fa054" in namespace "projected-6474" to be "Succeeded or Failed" May 26 00:35:16.750: INFO: Pod "pod-projected-secrets-8665d642-6842-426c-8d04-a0d94d7fa054": Phase="Pending", Reason="", readiness=false. Elapsed: 14.498635ms May 26 00:35:18.754: INFO: Pod "pod-projected-secrets-8665d642-6842-426c-8d04-a0d94d7fa054": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018663487s May 26 00:35:20.776: INFO: Pod "pod-projected-secrets-8665d642-6842-426c-8d04-a0d94d7fa054": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040276389s STEP: Saw pod success May 26 00:35:20.776: INFO: Pod "pod-projected-secrets-8665d642-6842-426c-8d04-a0d94d7fa054" satisfied condition "Succeeded or Failed" May 26 00:35:20.779: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-8665d642-6842-426c-8d04-a0d94d7fa054 container secret-volume-test: STEP: delete the pod May 26 00:35:20.795: INFO: Waiting for pod pod-projected-secrets-8665d642-6842-426c-8d04-a0d94d7fa054 to disappear May 26 00:35:20.826: INFO: Pod pod-projected-secrets-8665d642-6842-426c-8d04-a0d94d7fa054 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:35:20.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6474" for this suite. 
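A projected volume can surface the same secret (or a mix of secrets, configmaps, and downward-API items) at several mount points in one pod, which is the shape this test consumes. A minimal sketch with placeholder names, not taken from the test source:

kubectl create secret generic demo-secret --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: busybox:1.31
    # Reads the same secret key through two independent projected volumes.
    command: ["sh", "-c", "cat /etc/projected-1/data-1 /etc/projected-2/data-1"]
    volumeMounts:
    - name: vol-1
      mountPath: /etc/projected-1
    - name: vol-2
      mountPath: /etc/projected-2
  volumes:
  - name: vol-1
    projected:
      sources:
      - secret:
          name: demo-secret
  - name: vol-2
    projected:
      sources:
      - secret:
          name: demo-secret
EOF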
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":177,"skipped":2995,"failed":0} SSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:35:20.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7461.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-7461.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7461.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7461.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-7461.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7461.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 26 00:35:27.305: INFO: DNS probes using dns-7461/dns-test-8f8c0591-0f5a-4943-91c4-1452d4f83742 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:35:27.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7461" for this suite. 
• [SLOW TEST:6.630 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":288,"completed":178,"skipped":3000,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:35:27.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 26 00:35:27.567: INFO: Creating deployment "test-recreate-deployment" May 26 00:35:27.578: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1 May 26 00:35:27.590: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 26 00:35:29.599: INFO: Waiting for deployment "test-recreate-deployment" to complete May 26 00:35:29.601: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726050127, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726050127, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726050128, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726050127, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6d65b9f6d8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 00:35:31.605: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 26 00:35:31.613: INFO: Updating deployment test-recreate-deployment May 26 00:35:31.613: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 26 00:35:32.304: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-1567 /apis/apps/v1/namespaces/deployment-1567/deployments/test-recreate-deployment d40d543d-d823-4bd7-8cb9-6f005a0afcfe 7690539 2 2020-05-26 00:35:27 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1
2020-05-26 00:35:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-26 00:35:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0020e1c88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-26 00:35:31 +0000 UTC,LastTransitionTime:2020-05-26 00:35:31 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-05-26 00:35:32 +0000 UTC,LastTransitionTime:2020-05-26 00:35:27 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 26 00:35:32.309: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7 deployment-1567 /apis/apps/v1/namespaces/deployment-1567/replicasets/test-recreate-deployment-d5667d9c7 9b5dfc9d-ac8a-4fe2-8101-dcec082aff68 7690536 1 2020-05-26 00:35:31 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 
d40d543d-d823-4bd7-8cb9-6f005a0afcfe 0xc002c9c1a0 0xc002c9c1a1}] [] [{kube-controller-manager Update apps/v1 2020-05-26 00:35:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d40d543d-d823-4bd7-8cb9-6f005a0afcfe\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002c9c218 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 26 00:35:32.309: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 26 00:35:32.309: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6d65b9f6d8 deployment-1567 /apis/apps/v1/namespaces/deployment-1567/replicasets/test-recreate-deployment-6d65b9f6d8 e3d16e8e-48bf-43b5-9fea-b692f36f93f5 7690527 2 2020-05-26 00:35:27 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment d40d543d-d823-4bd7-8cb9-6f005a0afcfe 0xc002c9c0a7 0xc002c9c0a8}] [] [{kube-controller-manager Update apps/v1 2020-05-26 00:35:31 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d40d543d-d823-4bd7-8cb9-6f005a0afcfe\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6d65b9f6d8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002c9c138 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 26 00:35:32.328: INFO: Pod "test-recreate-deployment-d5667d9c7-4h656" is not available: &Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-4h656 test-recreate-deployment-d5667d9c7- deployment-1567 /api/v1/namespaces/deployment-1567/pods/test-recreate-deployment-d5667d9c7-4h656 90692c71-5007-4fb8-83e9-ce9d2f417a5f 7690540 0 2020-05-26 00:35:31 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 9b5dfc9d-ac8a-4fe2-8101-dcec082aff68 0xc002c9c700 0xc002c9c701}] [] [{kube-controller-manager Update v1 2020-05-26 00:35:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b5dfc9d-ac8a-4fe2-8101-dcec082aff68\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 00:35:32 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nqhjc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nqhjc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nqhjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 00:35:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 00:35:31 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 00:35:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 00:35:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-26 00:35:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:35:32.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1567" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":179,"skipped":3017,"failed":0} ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:35:32.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 26 00:35:32.509: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5cc4292d-1fd2-4b32-a512-0e822a0f0a28" in namespace "downward-api-8195" to be "Succeeded or Failed" May 26 00:35:32.520: INFO: Pod "downwardapi-volume-5cc4292d-1fd2-4b32-a512-0e822a0f0a28": Phase="Pending", Reason="", readiness=false. Elapsed: 11.173357ms May 26 00:35:34.837: INFO: Pod "downwardapi-volume-5cc4292d-1fd2-4b32-a512-0e822a0f0a28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.327733354s May 26 00:35:36.840: INFO: Pod "downwardapi-volume-5cc4292d-1fd2-4b32-a512-0e822a0f0a28": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.331487931s STEP: Saw pod success May 26 00:35:36.840: INFO: Pod "downwardapi-volume-5cc4292d-1fd2-4b32-a512-0e822a0f0a28" satisfied condition "Succeeded or Failed" May 26 00:35:36.843: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-5cc4292d-1fd2-4b32-a512-0e822a0f0a28 container client-container: STEP: delete the pod May 26 00:35:36.907: INFO: Waiting for pod downwardapi-volume-5cc4292d-1fd2-4b32-a512-0e822a0f0a28 to disappear May 26 00:35:36.920: INFO: Pod downwardapi-volume-5cc4292d-1fd2-4b32-a512-0e822a0f0a28 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:35:36.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8195" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":180,"skipped":3017,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:35:36.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args May 26 00:35:37.018: INFO: Waiting up to 5m0s for pod "var-expansion-2ec90249-744f-4bea-94c1-42b87ff8d9f2" in namespace "var-expansion-7447" to be "Succeeded or Failed" May 26 00:35:37.028: INFO: Pod "var-expansion-2ec90249-744f-4bea-94c1-42b87ff8d9f2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.193717ms May 26 00:35:39.094: INFO: Pod "var-expansion-2ec90249-744f-4bea-94c1-42b87ff8d9f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075852029s May 26 00:35:41.098: INFO: Pod "var-expansion-2ec90249-744f-4bea-94c1-42b87ff8d9f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079959755s STEP: Saw pod success May 26 00:35:41.098: INFO: Pod "var-expansion-2ec90249-744f-4bea-94c1-42b87ff8d9f2" satisfied condition "Succeeded or Failed" May 26 00:35:41.101: INFO: Trying to get logs from node latest-worker pod var-expansion-2ec90249-744f-4bea-94c1-42b87ff8d9f2 container dapi-container: STEP: delete the pod May 26 00:35:41.168: INFO: Waiting for pod var-expansion-2ec90249-744f-4bea-94c1-42b87ff8d9f2 to disappear May 26 00:35:41.172: INFO: Pod var-expansion-2ec90249-744f-4bea-94c1-42b87ff8d9f2 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:35:41.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7447" for this suite. 
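The substitution being verified is kubelet-side: $(VAR) references in command and args are expanded from the container's own env block before the process starts, and an unresolvable reference is left as literal text rather than failing the pod. A minimal sketch, with placeholder names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: args-demo
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: busybox:1.31
    command: ["sh", "-c"]
    # $(MESSAGE) is expanded by the kubelet, not by the shell.
    args: ["echo $(MESSAGE)"]
    env:
    - name: MESSAGE
      value: "hello from var expansion"
EOF

kubectl logs args-demo   # expected output: hello from var expansion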
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":288,"completed":181,"skipped":3037,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:35:41.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-e5bc5c44-bf66-45a4-a5d6-08abed1b3c42 STEP: Creating secret with name s-test-opt-upd-66c5c3ac-60b2-4f7b-bf2e-425c9741ce5c STEP: Creating the pod STEP: Deleting secret s-test-opt-del-e5bc5c44-bf66-45a4-a5d6-08abed1b3c42 STEP: Updating secret s-test-opt-upd-66c5c3ac-60b2-4f7b-bf2e-425c9741ce5c STEP: Creating secret with name s-test-opt-create-b96a16da-bdc9-409d-bb3b-d670bc24f912 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:35:49.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2115" for this suite. • [SLOW TEST:8.286 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":182,"skipped":3044,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:35:49.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-17a923c4-7d40-4f3a-b62a-03c00b9f4b97 STEP: Creating a pod to test consume secrets May 26 00:35:49.533: INFO: Waiting up to 5m0s for pod "pod-secrets-56d8164a-fadc-4a4a-aa1c-c695c07c2ed4" in namespace "secrets-6991" to be "Succeeded or Failed" May 26 00:35:49.537: INFO: Pod "pod-secrets-56d8164a-fadc-4a4a-aa1c-c695c07c2ed4": Phase="Pending", 
Reason="", readiness=false. Elapsed: 3.996246ms May 26 00:35:51.542: INFO: Pod "pod-secrets-56d8164a-fadc-4a4a-aa1c-c695c07c2ed4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008711547s May 26 00:35:53.547: INFO: Pod "pod-secrets-56d8164a-fadc-4a4a-aa1c-c695c07c2ed4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014111169s STEP: Saw pod success May 26 00:35:53.547: INFO: Pod "pod-secrets-56d8164a-fadc-4a4a-aa1c-c695c07c2ed4" satisfied condition "Succeeded or Failed" May 26 00:35:53.551: INFO: Trying to get logs from node latest-worker pod pod-secrets-56d8164a-fadc-4a4a-aa1c-c695c07c2ed4 container secret-volume-test: STEP: delete the pod May 26 00:35:53.640: INFO: Waiting for pod pod-secrets-56d8164a-fadc-4a4a-aa1c-c695c07c2ed4 to disappear May 26 00:35:53.657: INFO: Pod pod-secrets-56d8164a-fadc-4a4a-aa1c-c695c07c2ed4 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:35:53.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6991" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":183,"skipped":3045,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:35:53.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:36:04.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6805" for this suite. • [SLOW TEST:11.240 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":288,"completed":184,"skipped":3049,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:36:04.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 26 00:36:05.004: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 26 00:36:07.946: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2743 create -f -' May 26 00:36:11.628: INFO: stderr: "" May 26 00:36:11.628: INFO: stdout: "e2e-test-crd-publish-openapi-5665-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 26 00:36:11.628: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2743 delete e2e-test-crd-publish-openapi-5665-crds test-cr' May 26 00:36:11.766: INFO: stderr: "" May 26 00:36:11.766: INFO: stdout: "e2e-test-crd-publish-openapi-5665-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 26 00:36:11.766: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2743 apply -f -' May 26 00:36:13.087: INFO: stderr: "" May 26 00:36:13.087: INFO: stdout: "e2e-test-crd-publish-openapi-5665-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 26 00:36:13.087: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2743 delete e2e-test-crd-publish-openapi-5665-crds test-cr' May 26 00:36:13.195: INFO: stderr: "" May 26 00:36:13.195: INFO: stdout: "e2e-test-crd-publish-openapi-5665-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 26 00:36:13.195: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5665-crds' May 26 00:36:13.926: INFO: stderr: "" May 26 00:36:13.926: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5665-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:36:15.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2743" for this suite. • [SLOW TEST:10.960 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":288,"completed":185,"skipped":3053,"failed":0} SSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:36:15.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container May 26 00:36:20.466: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5525 pod-service-account-24800f27-1bb7-4b73-a64c-99ca7da9573b -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 26 00:36:20.700: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5525 pod-service-account-24800f27-1bb7-4b73-a64c-99ca7da9573b -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 26 00:36:20.914: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5525 pod-service-account-24800f27-1bb7-4b73-a64c-99ca7da9573b -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:36:21.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5525" for this suite. 
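Annotation: the ServiceAccounts test above verifies the API token mount by cat-ing the three files the kubelet projects into the pod: token, ca.crt, and namespace. A minimal Go sketch of the same check run from inside a pod follows; it assumes nothing beyond the standard mount path already visible in the kubectl exec commands above.

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Standard projection path for the pod's service account credentials.
        base := "/var/run/secrets/kubernetes.io/serviceaccount"
        for _, name := range []string{"token", "ca.crt", "namespace"} {
            data, err := os.ReadFile(base + "/" + name)
            if err != nil {
                fmt.Printf("%s: %v\n", name, err)
                continue
            }
            fmt.Printf("%s: %d bytes\n", name, len(data))
        }
    }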
• [SLOW TEST:5.261 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":288,"completed":186,"skipped":3056,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:36:21.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1559 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 26 00:36:21.227: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-4135' May 26 00:36:21.370: INFO: stderr: "" May 26 00:36:21.370: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 26 00:36:26.421: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-4135 -o json' May 26 00:36:26.537: INFO: stderr: "" May 26 00:36:26.537: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-26T00:36:21Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-26T00:36:21Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": 
{},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.2.199\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-26T00:36:24Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-4135\",\n \"resourceVersion\": \"7690920\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-4135/pods/e2e-test-httpd-pod\",\n \"uid\": \"df325ff5-6a91-4d93-b695-cba4aa35c4bd\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-kvqm9\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-kvqm9\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-kvqm9\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-26T00:36:21Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-26T00:36:24Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-26T00:36:24Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-26T00:36:21Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://c8e997818097d512a557a444d7eb9ea36d548816c2aaff86f30326458a30a9ff\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-26T00:36:24Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.12\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.199\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.199\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-26T00:36:21Z\"\n }\n}\n" STEP: replace the image in the pod May 26 00:36:26.538: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-4135' May 26 00:36:26.949: INFO: stderr: "" May 26 00:36:26.949: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1564 May 26 00:36:26.964: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-4135' May 26 00:36:29.897: INFO: stderr: "" May 26 00:36:29.897: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:36:29.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4135" for this suite. • [SLOW TEST:8.778 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1555 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":288,"completed":187,"skipped":3072,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:36:29.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 26 00:36:29.995: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 26 00:36:34.999: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 26 00:36:34.999: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 26 00:36:35.059: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-3670 /apis/apps/v1/namespaces/deployment-3670/deployments/test-cleanup-deployment ebaff6ae-2024-47a4-bcb5-24f3aa4336ef 7690994 1 2020-05-26 00:36:35 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-05-26 00:36:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003658a98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} May 26 00:36:35.155: INFO: New ReplicaSet "test-cleanup-deployment-6688745694" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-6688745694 deployment-3670 /apis/apps/v1/namespaces/deployment-3670/replicasets/test-cleanup-deployment-6688745694 d400b2c7-0363-42cf-95f1-4d9bdeb77ffb 7690996 1 2020-05-26 00:36:35 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment ebaff6ae-2024-47a4-bcb5-24f3aa4336ef 0xc00352b847 0xc00352b848}] [] [{kube-controller-manager Update apps/v1 2020-05-26 00:36:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ebaff6ae-2024-47a4-bcb5-24f3aa4336ef\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 6688745694,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00352b918 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 26 00:36:35.155: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 26 00:36:35.155: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-3670 /apis/apps/v1/namespaces/deployment-3670/replicasets/test-cleanup-controller 3e02becc-9dbf-42c3-a669-6724b9054ca7 7690995 1 2020-05-26 00:36:29 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment ebaff6ae-2024-47a4-bcb5-24f3aa4336ef 0xc00352b72f 0xc00352b740}] [] [{e2e.test Update apps/v1 2020-05-26 00:36:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-26 00:36:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"ebaff6ae-2024-47a4-bcb5-24f3aa4336ef\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00352b7d8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 26 00:36:35.232: INFO: Pod "test-cleanup-controller-q8fgk" is available: &Pod{ObjectMeta:{test-cleanup-controller-q8fgk test-cleanup-controller- deployment-3670 /api/v1/namespaces/deployment-3670/pods/test-cleanup-controller-q8fgk cb5862a0-bdbb-4587-b9e0-93e65a88a384 7690984 0 2020-05-26 00:36:29 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 3e02becc-9dbf-42c3-a669-6724b9054ca7 0xc003622f47 0xc003622f48}] [] [{kube-controller-manager Update v1 2020-05-26 00:36:29 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3e02becc-9dbf-42c3-a669-6724b9054ca7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 00:36:32 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.200\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j7s2m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j7s2m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j7s2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 00:36:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 00:36:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 00:36:32 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 00:36:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.200,StartTime:2020-05-26 00:36:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-26 00:36:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b4d4c33c519b088c50674c79d9776c9bb41fedf8dfebd6ae5b6ed9161f0ea501,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.200,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 00:36:35.233: INFO: Pod "test-cleanup-deployment-6688745694-68b4v" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-6688745694-68b4v test-cleanup-deployment-6688745694- deployment-3670 /api/v1/namespaces/deployment-3670/pods/test-cleanup-deployment-6688745694-68b4v 92656b7d-675b-4279-97ec-b61909c7e53a 7691001 0 2020-05-26 00:36:35 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-6688745694 d400b2c7-0363-42cf-95f1-4d9bdeb77ffb 0xc003623127 0xc003623128}] [] [{kube-controller-manager Update v1 2020-05-26 00:36:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d400b2c7-0363-42cf-95f1-4d9bdeb77ffb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j7s2m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j7s2m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j7s2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,All
owPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 00:36:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:36:35.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3670" for this suite. 
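Annotation: the cleanup behavior above is driven by the deployment's revisionHistoryLimit, which the struct dump shows set to *0, so the controller may delete superseded ReplicaSets as soon as the rollout completes. A small client-go sketch of setting that field on an existing deployment; the namespace and deployment name are placeholders, error handling is reduced to panics, and it assumes client-go v0.18+ (context-taking methods, matching the cluster version in this run).

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deploys := cs.AppsV1().Deployments("default") // placeholder namespace
        d, err := deploys.Get(context.TODO(), "my-deploy", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        limit := int32(0) // keep no old ReplicaSets after a rollout
        d.Spec.RevisionHistoryLimit = &limit
        if _, err := deploys.Update(context.TODO(), d, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }

Note that with a limit of 0 no rollout history is retained, so a later "kubectl rollout undo" has nothing to return to.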
• [SLOW TEST:5.368 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":288,"completed":188,"skipped":3080,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:36:35.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 26 00:36:35.548: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-021c4aa9-4a3a-42c5-9ee8-db76a70d1ef5" in namespace "security-context-test-5682" to be "Succeeded or Failed" May 26 00:36:35.624: INFO: Pod "alpine-nnp-false-021c4aa9-4a3a-42c5-9ee8-db76a70d1ef5": Phase="Pending", Reason="", readiness=false. Elapsed: 76.044974ms May 26 00:36:37.720: INFO: Pod "alpine-nnp-false-021c4aa9-4a3a-42c5-9ee8-db76a70d1ef5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.172273415s May 26 00:36:39.740: INFO: Pod "alpine-nnp-false-021c4aa9-4a3a-42c5-9ee8-db76a70d1ef5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.192310673s May 26 00:36:41.753: INFO: Pod "alpine-nnp-false-021c4aa9-4a3a-42c5-9ee8-db76a70d1ef5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.204836659s May 26 00:36:41.753: INFO: Pod "alpine-nnp-false-021c4aa9-4a3a-42c5-9ee8-db76a70d1ef5" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:36:41.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5682" for this suite. 
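Annotation: the Security Context test above runs a container with allowPrivilegeEscalation=false and checks the process cannot gain privileges; on Linux this field maps to the no_new_privs flag. A sketch of the pod shape involved, with placeholder names and image:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func noEscalationPod() *corev1.Pod {
        no := false
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "alpine-nnp-false"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:  "test",
                    Image: "alpine:3.12", // placeholder image
                    SecurityContext: &corev1.SecurityContext{
                        // setuid binaries and file capabilities cannot grant
                        // more privileges than the process started with
                        AllowPrivilegeEscalation: &no,
                    },
                }},
            },
        }
    }

    func main() { fmt.Println(noEscalationPod().Name) }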
• [SLOW TEST:6.660 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when creating containers with AllowPrivilegeEscalation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":189,"skipped":3105,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:36:41.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-0173eaf8-4ad1-49b7-a4ac-cf4cb8f59fef STEP: Creating a pod to test consume configMaps May 26 00:36:42.235: INFO: Waiting up to 5m0s for pod "pod-configmaps-1f3fda1d-5c15-4ab8-b929-8cc1c4d215ad" in namespace "configmap-5560" to be "Succeeded or Failed" May 26 00:36:42.276: INFO: Pod "pod-configmaps-1f3fda1d-5c15-4ab8-b929-8cc1c4d215ad": Phase="Pending", Reason="", readiness=false. Elapsed: 40.542185ms May 26 00:36:44.336: INFO: Pod "pod-configmaps-1f3fda1d-5c15-4ab8-b929-8cc1c4d215ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100710174s May 26 00:36:46.442: INFO: Pod "pod-configmaps-1f3fda1d-5c15-4ab8-b929-8cc1c4d215ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.206629971s STEP: Saw pod success May 26 00:36:46.442: INFO: Pod "pod-configmaps-1f3fda1d-5c15-4ab8-b929-8cc1c4d215ad" satisfied condition "Succeeded or Failed" May 26 00:36:46.450: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-1f3fda1d-5c15-4ab8-b929-8cc1c4d215ad container configmap-volume-test: STEP: delete the pod May 26 00:36:46.508: INFO: Waiting for pod pod-configmaps-1f3fda1d-5c15-4ab8-b929-8cc1c4d215ad to disappear May 26 00:36:46.540: INFO: Pod pod-configmaps-1f3fda1d-5c15-4ab8-b929-8cc1c4d215ad no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:36:46.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5560" for this suite. 
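Annotation: the ConfigMap test above exercises two knobs at once: a key-to-path mapping inside the volume, and reading the result as a non-root user. A sketch of that pod shape; the ConfigMap name, uid, image, key, and paths here are illustrative, not the exact values the suite uses.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func mappedConfigMapPod() *corev1.Pod {
        uid := int64(1000) // any non-root uid; placeholder
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps"},
            Spec: corev1.PodSpec{
                SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
                RestartPolicy:   corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "configmap-volume",
                    VolumeSource: corev1.VolumeSource{
                        ConfigMap: &corev1.ConfigMapVolumeSource{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
                            // remap key "data-1" to a chosen path in the volume
                            Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "configmap-volume-test",
                    Image:   "busybox:1.29",
                    Command: []string{"cat", "/etc/configmap-volume/path/to/data-2"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "configmap-volume",
                        MountPath: "/etc/configmap-volume",
                    }},
                }},
            },
        }
    }

    func main() { fmt.Println(mappedConfigMapPod().Name) }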
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":190,"skipped":3119,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:36:46.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-42ed279a-156a-4306-8207-bf3af73ca3a0 STEP: Creating a pod to test consume configMaps May 26 00:36:46.662: INFO: Waiting up to 5m0s for pod "pod-configmaps-bdb8d420-64ae-40ed-98bb-9decd0bff230" in namespace "configmap-1999" to be "Succeeded or Failed" May 26 00:36:46.693: INFO: Pod "pod-configmaps-bdb8d420-64ae-40ed-98bb-9decd0bff230": Phase="Pending", Reason="", readiness=false. Elapsed: 30.885179ms May 26 00:36:48.697: INFO: Pod "pod-configmaps-bdb8d420-64ae-40ed-98bb-9decd0bff230": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035272688s May 26 00:36:50.701: INFO: Pod "pod-configmaps-bdb8d420-64ae-40ed-98bb-9decd0bff230": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039302963s STEP: Saw pod success May 26 00:36:50.702: INFO: Pod "pod-configmaps-bdb8d420-64ae-40ed-98bb-9decd0bff230" satisfied condition "Succeeded or Failed" May 26 00:36:50.704: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-bdb8d420-64ae-40ed-98bb-9decd0bff230 container configmap-volume-test: STEP: delete the pod May 26 00:36:50.722: INFO: Waiting for pod pod-configmaps-bdb8d420-64ae-40ed-98bb-9decd0bff230 to disappear May 26 00:36:50.747: INFO: Pod pod-configmaps-bdb8d420-64ae-40ed-98bb-9decd0bff230 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:36:50.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1999" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":191,"skipped":3131,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:36:50.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 26 00:36:55.061: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:36:55.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7068" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":192,"skipped":3213,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:36:55.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test hostPath mode May 26 00:36:55.183: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-2842" to be "Succeeded or Failed" May 26 00:36:55.226: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 42.835577ms May 26 00:36:57.274: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.091001829s May 26 00:36:59.310: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126860064s May 26 00:37:01.315: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.131545242s STEP: Saw pod success May 26 00:37:01.315: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" May 26 00:37:01.318: INFO: Trying to get logs from node latest-worker pod pod-host-path-test container test-container-1: STEP: delete the pod May 26 00:37:01.381: INFO: Waiting for pod pod-host-path-test to disappear May 26 00:37:01.384: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:37:01.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-2842" for this suite. • [SLOW TEST:6.264 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":193,"skipped":3223,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:37:01.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 26 00:37:06.007: INFO: Successfully updated pod "pod-update-41d946a1-0388-4072-97b1-9ebd5c3044d2" STEP: verifying the updated pod is in kubernetes May 26 00:37:06.017: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:37:06.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4181" for this suite. 
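Annotation: the Pods update test above fetches the running pod, mutates it, and writes it back; on a running pod only metadata plus a few spec fields (container images, activeDeadlineSeconds, added tolerations) are mutable. A client-go sketch of an equivalent label update, with placeholder namespace and pod name, assuming client-go v0.18+:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods := cs.CoreV1().Pods("default") // placeholder namespace
        p, err := pods.Get(context.TODO(), "pod-update", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if p.Labels == nil {
            p.Labels = map[string]string{}
        }
        p.Labels["time"] = "updated" // read-modify-write of pod metadata
        if _, err := pods.Update(context.TODO(), p, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("pod updated:", p.Name)
    }

A merge patch via pods.Patch would avoid the read-modify-write conflict window that a full Update carries, at the cost of building the patch body by hand.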
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":288,"completed":194,"skipped":3232,"failed":0} ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:37:06.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3251.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3251.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3251.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3251.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3251.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3251.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3251.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3251.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3251.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3251.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3251.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 154.234.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.234.154_udp@PTR;check="$$(dig +tcp +noall +answer +search 154.234.99.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.99.234.154_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3251.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3251.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3251.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3251.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3251.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3251.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3251.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3251.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3251.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3251.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3251.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 154.234.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.234.154_udp@PTR;check="$$(dig +tcp +noall +answer +search 154.234.99.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.99.234.154_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 26 00:37:12.328: INFO: Unable to read wheezy_udp@dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:12.331: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:12.334: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:12.337: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:12.355: INFO: Unable to read jessie_udp@dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:12.357: INFO: Unable to read jessie_tcp@dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:12.360: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:12.362: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:12.376: INFO: Lookups using dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1 failed for: [wheezy_udp@dns-test-service.dns-3251.svc.cluster.local wheezy_tcp@dns-test-service.dns-3251.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local jessie_udp@dns-test-service.dns-3251.svc.cluster.local jessie_tcp@dns-test-service.dns-3251.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local] May 26 00:37:17.380: INFO: Unable to read wheezy_udp@dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:17.383: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods 
dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:17.387: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:17.390: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:17.410: INFO: Unable to read jessie_udp@dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:17.413: INFO: Unable to read jessie_tcp@dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:17.415: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:17.419: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:17.435: INFO: Lookups using dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1 failed for: [wheezy_udp@dns-test-service.dns-3251.svc.cluster.local wheezy_tcp@dns-test-service.dns-3251.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local jessie_udp@dns-test-service.dns-3251.svc.cluster.local jessie_tcp@dns-test-service.dns-3251.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local] May 26 00:37:22.382: INFO: Unable to read wheezy_udp@dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:22.385: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:22.388: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:22.391: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:22.412: INFO: Unable to read jessie_udp@dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the 
server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:22.416: INFO: Unable to read jessie_tcp@dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:22.419: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:22.422: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:22.442: INFO: Lookups using dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1 failed for: [wheezy_udp@dns-test-service.dns-3251.svc.cluster.local wheezy_tcp@dns-test-service.dns-3251.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local jessie_udp@dns-test-service.dns-3251.svc.cluster.local jessie_tcp@dns-test-service.dns-3251.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local] May 26 00:37:27.381: INFO: Unable to read wheezy_udp@dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:27.385: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:27.388: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:27.392: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:27.409: INFO: Unable to read jessie_udp@dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:27.411: INFO: Unable to read jessie_tcp@dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:27.414: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:27.416: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local from pod 
dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:27.437: INFO: Lookups using dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1 failed for: [wheezy_udp@dns-test-service.dns-3251.svc.cluster.local wheezy_tcp@dns-test-service.dns-3251.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local jessie_udp@dns-test-service.dns-3251.svc.cluster.local jessie_tcp@dns-test-service.dns-3251.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local] May 26 00:37:32.382: INFO: Unable to read wheezy_udp@dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:32.386: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:32.390: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:32.393: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:32.414: INFO: Unable to read jessie_udp@dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:32.417: INFO: Unable to read jessie_tcp@dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:32.419: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:32.422: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:32.440: INFO: Lookups using dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1 failed for: [wheezy_udp@dns-test-service.dns-3251.svc.cluster.local wheezy_tcp@dns-test-service.dns-3251.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local jessie_udp@dns-test-service.dns-3251.svc.cluster.local jessie_tcp@dns-test-service.dns-3251.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local] May 26 
00:37:37.382: INFO: Unable to read wheezy_udp@dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:37.386: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:37.389: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:37.392: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:37.415: INFO: Unable to read jessie_udp@dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:37.419: INFO: Unable to read jessie_tcp@dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:37.423: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:37.425: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local from pod dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1: the server could not find the requested resource (get pods dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1) May 26 00:37:37.440: INFO: Lookups using dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1 failed for: [wheezy_udp@dns-test-service.dns-3251.svc.cluster.local wheezy_tcp@dns-test-service.dns-3251.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local jessie_udp@dns-test-service.dns-3251.svc.cluster.local jessie_tcp@dns-test-service.dns-3251.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3251.svc.cluster.local] May 26 00:37:42.443: INFO: DNS probes using dns-3251/dns-test-aa887989-078b-454c-8a01-9bc46e6ee8b1 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:37:43.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3251" for this suite. 
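
The probe loop above can be reproduced by hand from any pod that has dig available (for example the jessie image the test uses). A minimal sketch of one-shot versions of the same checks, using the service name, namespace, and ClusterIP from this particular run (they differ per run):

    # service A record over UDP, then the same lookup over TCP
    dig +notcp +noall +answer +search dns-test-service.dns-3251.svc.cluster.local A
    dig +tcp +noall +answer +search dns-test-service.dns-3251.svc.cluster.local A
    # SRV record for the service's named http port
    dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3251.svc.cluster.local SRV
    # reverse (PTR) lookup of the service ClusterIP 10.99.234.154
    dig +notcp +noall +answer +search 154.234.99.10.in-addr.arpa. PTR

Each succeeding lookup makes the probe write an OK marker file under /results; the repeated "Unable to read ... the server could not find the requested resource" lines above are the test polling those marker files through the API server until every expected record resolves, which it eventually does at 00:37:42.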
• [SLOW TEST:37.362 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":288,"completed":195,"skipped":3232,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 26 00:37:43.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 26 00:37:43.468: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5aa3ac51-ae87-4a9c-bc77-d39ef241e1a4" in namespace "projected-3366" to be "Succeeded or Failed"
May 26 00:37:43.486: INFO: Pod "downwardapi-volume-5aa3ac51-ae87-4a9c-bc77-d39ef241e1a4": Phase="Pending", Reason="", readiness=false. Elapsed: 17.888458ms
May 26 00:37:45.490: INFO: Pod "downwardapi-volume-5aa3ac51-ae87-4a9c-bc77-d39ef241e1a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022165701s
May 26 00:37:47.494: INFO: Pod "downwardapi-volume-5aa3ac51-ae87-4a9c-bc77-d39ef241e1a4": Phase="Running", Reason="", readiness=true. Elapsed: 4.026162087s
May 26 00:37:49.499: INFO: Pod "downwardapi-volume-5aa3ac51-ae87-4a9c-bc77-d39ef241e1a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030686411s
STEP: Saw pod success
May 26 00:37:49.499: INFO: Pod "downwardapi-volume-5aa3ac51-ae87-4a9c-bc77-d39ef241e1a4" satisfied condition "Succeeded or Failed"
May 26 00:37:49.502: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-5aa3ac51-ae87-4a9c-bc77-d39ef241e1a4 container client-container:
STEP: delete the pod
May 26 00:37:49.534: INFO: Waiting for pod downwardapi-volume-5aa3ac51-ae87-4a9c-bc77-d39ef241e1a4 to disappear
May 26 00:37:49.543: INFO: Pod downwardapi-volume-5aa3ac51-ae87-4a9c-bc77-d39ef241e1a4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 26 00:37:49.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3366" for this suite.
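
This spec exercises the projected downwardAPI volume source: the pod's own name is written into a file inside the volume, and the container just prints it. A minimal sketch of an equivalent pod, not the test's actual fixture (image, names, and mount path are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-volume-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        # print the file the downward API projected into the volume
        command: ["sh", "-c", "cat /etc/podinfo/podname"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: podname
                fieldRef:
                  fieldPath: metadata.name
    EOF

kubectl logs downwardapi-volume-demo should then print the pod's own name, which is the same assertion the test makes against its client-container logs.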
• [SLOW TEST:6.188 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":288,"completed":196,"skipped":3243,"failed":0}
SSSSSS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 26 00:37:49.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
May 26 00:37:54.233: INFO: Successfully updated pod "annotationupdatead89d23d-1f59-48a8-833e-e2d5e6152390"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 26 00:37:56.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9526" for this suite.
• [SLOW TEST:6.761 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":197,"skipped":3249,"failed":0}
S
------------------------------
[k8s.io] Lease lease API should be available [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Lease
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 26 00:37:56.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Lease
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 26 00:37:56.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-5068" for this suite.
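
The Lease objects this spec exercises live in the coordination.k8s.io/v1 API group; besides test use, kubelets maintain one per node as a heartbeat. A quick way to poke at the same API by hand (latest-worker is a node name from this run):

    # list Lease objects across all namespaces
    kubectl get leases --all-namespaces
    # node heartbeat leases are kept in the kube-node-lease namespace
    kubectl -n kube-node-lease get lease latest-worker -o yaml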
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":288,"completed":198,"skipped":3250,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:37:56.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath May 26 00:37:56.659: INFO: Waiting up to 5m0s for pod "var-expansion-042e32e2-04ff-443a-949f-7d295eb30380" in namespace "var-expansion-2964" to be "Succeeded or Failed" May 26 00:37:56.673: INFO: Pod "var-expansion-042e32e2-04ff-443a-949f-7d295eb30380": Phase="Pending", Reason="", readiness=false. Elapsed: 14.602029ms May 26 00:37:58.723: INFO: Pod "var-expansion-042e32e2-04ff-443a-949f-7d295eb30380": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064715451s May 26 00:38:00.728: INFO: Pod "var-expansion-042e32e2-04ff-443a-949f-7d295eb30380": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069330843s STEP: Saw pod success May 26 00:38:00.728: INFO: Pod "var-expansion-042e32e2-04ff-443a-949f-7d295eb30380" satisfied condition "Succeeded or Failed" May 26 00:38:00.731: INFO: Trying to get logs from node latest-worker2 pod var-expansion-042e32e2-04ff-443a-949f-7d295eb30380 container dapi-container: STEP: delete the pod May 26 00:38:00.831: INFO: Waiting for pod var-expansion-042e32e2-04ff-443a-949f-7d295eb30380 to disappear May 26 00:38:00.835: INFO: Pod var-expansion-042e32e2-04ff-443a-949f-7d295eb30380 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:38:00.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2964" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":288,"completed":199,"skipped":3275,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:38:00.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-b3a42f0d-548e-4e85-8236-29ae042bf10d STEP: Creating a pod to test consume configMaps May 26 00:38:00.944: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-589e9574-b1d2-4e49-aebe-539cb1bcfd8f" in namespace "projected-328" to be "Succeeded or Failed" May 26 00:38:00.955: INFO: Pod "pod-projected-configmaps-589e9574-b1d2-4e49-aebe-539cb1bcfd8f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.28572ms May 26 00:38:03.047: INFO: Pod "pod-projected-configmaps-589e9574-b1d2-4e49-aebe-539cb1bcfd8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103084558s May 26 00:38:05.051: INFO: Pod "pod-projected-configmaps-589e9574-b1d2-4e49-aebe-539cb1bcfd8f": Phase="Running", Reason="", readiness=true. Elapsed: 4.107001256s May 26 00:38:07.055: INFO: Pod "pod-projected-configmaps-589e9574-b1d2-4e49-aebe-539cb1bcfd8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.111089059s STEP: Saw pod success May 26 00:38:07.055: INFO: Pod "pod-projected-configmaps-589e9574-b1d2-4e49-aebe-539cb1bcfd8f" satisfied condition "Succeeded or Failed" May 26 00:38:07.058: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-589e9574-b1d2-4e49-aebe-539cb1bcfd8f container projected-configmap-volume-test: STEP: delete the pod May 26 00:38:07.092: INFO: Waiting for pod pod-projected-configmaps-589e9574-b1d2-4e49-aebe-539cb1bcfd8f to disappear May 26 00:38:07.106: INFO: Pod pod-projected-configmaps-589e9574-b1d2-4e49-aebe-539cb1bcfd8f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:38:07.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-328" for this suite. 
• [SLOW TEST:6.269 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":200,"skipped":3334,"failed":0}
SSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 26 00:38:07.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test override arguments
May 26 00:38:07.193: INFO: Waiting up to 5m0s for pod "client-containers-4764cd23-776f-4cbb-a468-67ad2bbc6ed1" in namespace "containers-995" to be "Succeeded or Failed"
May 26 00:38:07.227: INFO: Pod "client-containers-4764cd23-776f-4cbb-a468-67ad2bbc6ed1": Phase="Pending", Reason="", readiness=false. Elapsed: 33.446206ms
May 26 00:38:09.230: INFO: Pod "client-containers-4764cd23-776f-4cbb-a468-67ad2bbc6ed1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037168199s
May 26 00:38:11.244: INFO: Pod "client-containers-4764cd23-776f-4cbb-a468-67ad2bbc6ed1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05134506s
STEP: Saw pod success
May 26 00:38:11.245: INFO: Pod "client-containers-4764cd23-776f-4cbb-a468-67ad2bbc6ed1" satisfied condition "Succeeded or Failed"
May 26 00:38:11.247: INFO: Trying to get logs from node latest-worker pod client-containers-4764cd23-776f-4cbb-a468-67ad2bbc6ed1 container test-container:
STEP: delete the pod
May 26 00:38:11.279: INFO: Waiting for pod client-containers-4764cd23-776f-4cbb-a468-67ad2bbc6ed1 to disappear
May 26 00:38:11.290: INFO: Pod client-containers-4764cd23-776f-4cbb-a468-67ad2bbc6ed1 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 26 00:38:11.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-995" for this suite.
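
Overriding the image's default arguments (Docker CMD) maps to the pod spec's args field; setting command instead would override the image's ENTRYPOINT. A minimal sketch (image and strings are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: args-override-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        # args replaces the image's CMD; with no command set, the image's ENTRYPOINT (none for busybox) still applies
        args: ["echo", "overridden args"]
    EOF
    kubectl logs args-override-demo   # prints "overridden args" once the container has run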
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":288,"completed":201,"skipped":3337,"failed":0} SSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:38:11.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-8111 STEP: creating service affinity-clusterip-transition in namespace services-8111 STEP: creating replication controller affinity-clusterip-transition in namespace services-8111 I0526 00:38:11.407308 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-8111, replica count: 3 I0526 00:38:14.457820 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0526 00:38:17.458095 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 26 00:38:17.464: INFO: Creating new exec pod May 26 00:38:22.498: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8111 execpod-affinityc9v7k -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' May 26 00:38:22.713: INFO: stderr: "I0526 00:38:22.630241 2777 log.go:172] (0xc0009ec0b0) (0xc0005321e0) Create stream\nI0526 00:38:22.630293 2777 log.go:172] (0xc0009ec0b0) (0xc0005321e0) Stream added, broadcasting: 1\nI0526 00:38:22.632306 2777 log.go:172] (0xc0009ec0b0) Reply frame received for 1\nI0526 00:38:22.632363 2777 log.go:172] (0xc0009ec0b0) (0xc00048ad20) Create stream\nI0526 00:38:22.632381 2777 log.go:172] (0xc0009ec0b0) (0xc00048ad20) Stream added, broadcasting: 3\nI0526 00:38:22.633665 2777 log.go:172] (0xc0009ec0b0) Reply frame received for 3\nI0526 00:38:22.633710 2777 log.go:172] (0xc0009ec0b0) (0xc0000dcdc0) Create stream\nI0526 00:38:22.633728 2777 log.go:172] (0xc0009ec0b0) (0xc0000dcdc0) Stream added, broadcasting: 5\nI0526 00:38:22.634596 2777 log.go:172] (0xc0009ec0b0) Reply frame received for 5\nI0526 00:38:22.704355 2777 log.go:172] (0xc0009ec0b0) Data frame received for 5\nI0526 00:38:22.704378 2777 log.go:172] (0xc0000dcdc0) (5) Data frame handling\nI0526 00:38:22.704387 2777 log.go:172] (0xc0000dcdc0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nI0526 00:38:22.705505 2777 log.go:172] (0xc0009ec0b0) Data frame received for 5\nI0526 00:38:22.705550 2777 log.go:172] (0xc0000dcdc0) (5) Data frame handling\nI0526 00:38:22.705592 2777 log.go:172] (0xc0000dcdc0) (5) Data 
frame sent\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0526 00:38:22.705751 2777 log.go:172] (0xc0009ec0b0) Data frame received for 3\nI0526 00:38:22.705775 2777 log.go:172] (0xc00048ad20) (3) Data frame handling\nI0526 00:38:22.705904 2777 log.go:172] (0xc0009ec0b0) Data frame received for 5\nI0526 00:38:22.705924 2777 log.go:172] (0xc0000dcdc0) (5) Data frame handling\nI0526 00:38:22.707650 2777 log.go:172] (0xc0009ec0b0) Data frame received for 1\nI0526 00:38:22.707678 2777 log.go:172] (0xc0005321e0) (1) Data frame handling\nI0526 00:38:22.707694 2777 log.go:172] (0xc0005321e0) (1) Data frame sent\nI0526 00:38:22.707727 2777 log.go:172] (0xc0009ec0b0) (0xc0005321e0) Stream removed, broadcasting: 1\nI0526 00:38:22.707757 2777 log.go:172] (0xc0009ec0b0) Go away received\nI0526 00:38:22.708197 2777 log.go:172] (0xc0009ec0b0) (0xc0005321e0) Stream removed, broadcasting: 1\nI0526 00:38:22.708222 2777 log.go:172] (0xc0009ec0b0) (0xc00048ad20) Stream removed, broadcasting: 3\nI0526 00:38:22.708234 2777 log.go:172] (0xc0009ec0b0) (0xc0000dcdc0) Stream removed, broadcasting: 5\n" May 26 00:38:22.713: INFO: stdout: "" May 26 00:38:22.713: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8111 execpod-affinityc9v7k -- /bin/sh -x -c nc -zv -t -w 2 10.107.79.93 80' May 26 00:38:22.918: INFO: stderr: "I0526 00:38:22.841717 2800 log.go:172] (0xc000977080) (0xc000aac320) Create stream\nI0526 00:38:22.841776 2800 log.go:172] (0xc000977080) (0xc000aac320) Stream added, broadcasting: 1\nI0526 00:38:22.846232 2800 log.go:172] (0xc000977080) Reply frame received for 1\nI0526 00:38:22.846293 2800 log.go:172] (0xc000977080) (0xc0006cc640) Create stream\nI0526 00:38:22.846310 2800 log.go:172] (0xc000977080) (0xc0006cc640) Stream added, broadcasting: 3\nI0526 00:38:22.847229 2800 log.go:172] (0xc000977080) Reply frame received for 3\nI0526 00:38:22.847277 2800 log.go:172] (0xc000977080) (0xc0006625a0) Create stream\nI0526 00:38:22.847298 2800 log.go:172] (0xc000977080) (0xc0006625a0) Stream added, broadcasting: 5\nI0526 00:38:22.848199 2800 log.go:172] (0xc000977080) Reply frame received for 5\nI0526 00:38:22.909464 2800 log.go:172] (0xc000977080) Data frame received for 3\nI0526 00:38:22.909511 2800 log.go:172] (0xc0006cc640) (3) Data frame handling\nI0526 00:38:22.909822 2800 log.go:172] (0xc000977080) Data frame received for 5\nI0526 00:38:22.909854 2800 log.go:172] (0xc0006625a0) (5) Data frame handling\nI0526 00:38:22.909879 2800 log.go:172] (0xc0006625a0) (5) Data frame sent\nI0526 00:38:22.909902 2800 log.go:172] (0xc000977080) Data frame received for 5\nI0526 00:38:22.909933 2800 log.go:172] (0xc0006625a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.107.79.93 80\nConnection to 10.107.79.93 80 port [tcp/http] succeeded!\nI0526 00:38:22.911488 2800 log.go:172] (0xc000977080) Data frame received for 1\nI0526 00:38:22.911517 2800 log.go:172] (0xc000aac320) (1) Data frame handling\nI0526 00:38:22.911536 2800 log.go:172] (0xc000aac320) (1) Data frame sent\nI0526 00:38:22.911552 2800 log.go:172] (0xc000977080) (0xc000aac320) Stream removed, broadcasting: 1\nI0526 00:38:22.911568 2800 log.go:172] (0xc000977080) Go away received\nI0526 00:38:22.912008 2800 log.go:172] (0xc000977080) (0xc000aac320) Stream removed, broadcasting: 1\nI0526 00:38:22.912032 2800 log.go:172] (0xc000977080) (0xc0006cc640) Stream removed, broadcasting: 3\nI0526 00:38:22.912045 2800 log.go:172] (0xc000977080) (0xc0006625a0) 
Stream removed, broadcasting: 5\n" May 26 00:38:22.918: INFO: stdout: "" May 26 00:38:22.926: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8111 execpod-affinityc9v7k -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.107.79.93:80/ ; done' May 26 00:38:23.321: INFO: stderr: "I0526 00:38:23.059913 2821 log.go:172] (0xc000b5d290) (0xc0007a0780) Create stream\nI0526 00:38:23.059974 2821 log.go:172] (0xc000b5d290) (0xc0007a0780) Stream added, broadcasting: 1\nI0526 00:38:23.061947 2821 log.go:172] (0xc000b5d290) Reply frame received for 1\nI0526 00:38:23.061995 2821 log.go:172] (0xc000b5d290) (0xc00042e820) Create stream\nI0526 00:38:23.062010 2821 log.go:172] (0xc000b5d290) (0xc00042e820) Stream added, broadcasting: 3\nI0526 00:38:23.062881 2821 log.go:172] (0xc000b5d290) Reply frame received for 3\nI0526 00:38:23.062922 2821 log.go:172] (0xc000b5d290) (0xc00042f040) Create stream\nI0526 00:38:23.062934 2821 log.go:172] (0xc000b5d290) (0xc00042f040) Stream added, broadcasting: 5\nI0526 00:38:23.063682 2821 log.go:172] (0xc000b5d290) Reply frame received for 5\nI0526 00:38:23.111871 2821 log.go:172] (0xc000b5d290) Data frame received for 5\nI0526 00:38:23.111890 2821 log.go:172] (0xc00042f040) (5) Data frame handling\nI0526 00:38:23.111898 2821 log.go:172] (0xc00042f040) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.79.93:80/\nI0526 00:38:23.111933 2821 log.go:172] (0xc000b5d290) Data frame received for 3\nI0526 00:38:23.111960 2821 log.go:172] (0xc00042e820) (3) Data frame handling\nI0526 00:38:23.111979 2821 log.go:172] (0xc00042e820) (3) Data frame sent\nI0526 00:38:23.223039 2821 log.go:172] (0xc000b5d290) Data frame received for 3\nI0526 00:38:23.223090 2821 log.go:172] (0xc00042e820) (3) Data frame handling\nI0526 00:38:23.223132 2821 log.go:172] (0xc00042e820) (3) Data frame sent\nI0526 00:38:23.224126 2821 log.go:172] (0xc000b5d290) Data frame received for 3\nI0526 00:38:23.224162 2821 log.go:172] (0xc00042e820) (3) Data frame handling\nI0526 00:38:23.224185 2821 log.go:172] (0xc00042e820) (3) Data frame sent\nI0526 00:38:23.224228 2821 log.go:172] (0xc000b5d290) Data frame received for 5\nI0526 00:38:23.224246 2821 log.go:172] (0xc00042f040) (5) Data frame handling\nI0526 00:38:23.224265 2821 log.go:172] (0xc00042f040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.79.93:80/\nI0526 00:38:23.237243 2821 log.go:172] (0xc000b5d290) Data frame received for 3\nI0526 00:38:23.237280 2821 log.go:172] (0xc00042e820) (3) Data frame handling\nI0526 00:38:23.237305 2821 log.go:172] (0xc00042e820) (3) Data frame sent\nI0526 00:38:23.237654 2821 log.go:172] (0xc000b5d290) Data frame received for 3\nI0526 00:38:23.237674 2821 log.go:172] (0xc00042e820) (3) Data frame handling\nI0526 00:38:23.237682 2821 log.go:172] (0xc00042e820) (3) Data frame sent\nI0526 00:38:23.237728 2821 log.go:172] (0xc000b5d290) Data frame received for 5\nI0526 00:38:23.237752 2821 log.go:172] (0xc00042f040) (5) Data frame handling\nI0526 00:38:23.237773 2821 log.go:172] (0xc00042f040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.79.93:80/\nI0526 00:38:23.242396 2821 log.go:172] (0xc000b5d290) Data frame received for 3\nI0526 00:38:23.242423 2821 log.go:172] (0xc00042e820) (3) Data frame handling\nI0526 00:38:23.242444 2821 log.go:172] (0xc00042e820) (3) Data frame sent\nI0526 00:38:23.243027 2821 
log.go:172] (0xc000b5d290) Data frame received for 3\nI0526 00:38:23.243054 2821 log.go:172] (0xc00042e820) (3) Data frame handling\nI0526 00:38:23.243083 2821 log.go:172] (0xc00042e820) (3) Data frame sent\nI0526 00:38:23.243104 2821 log.go:172] (0xc000b5d290) Data frame received for 5\nI0526 00:38:23.243127 2821 log.go:172] (0xc00042f040) (5) Data frame handling\nI0526 00:38:23.243153 2821 log.go:172] (0xc00042f040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.79.93:80/\nI0526 00:38:23.249573 2821 log.go:172] (0xc000b5d290) Data frame received for 3\nI0526 00:38:23.249601 2821 log.go:172] (0xc00042e820) (3) Data frame handling\nI0526 00:38:23.249631 2821 log.go:172] (0xc00042e820) (3) Data frame sent\nI0526 00:38:23.250007 2821 log.go:172] (0xc000b5d290) Data frame received for 3\nI0526 00:38:23.250020 2821 log.go:172] (0xc00042e820) (3) Data frame handling\nI0526 00:38:23.250031 2821 log.go:172] (0xc000b5d290) Data frame received for 5\nI0526 00:38:23.250044 2821 log.go:172] (0xc00042f040) (5) Data frame handling\nI0526 00:38:23.250051 2821 log.go:172] (0xc00042f040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.79.93:80/\nI0526 00:38:23.250061 2821 log.go:172] (0xc00042e820) (3) Data frame sent\nI0526 00:38:23.253833 2821 log.go:172] (0xc000b5d290) Data frame received for 3\nI0526 00:38:23.253849 2821 log.go:172] (0xc00042e820) (3) Data frame handling\nI0526 00:38:23.253859 2821 log.go:172] (0xc00042e820) (3) Data frame sent\nI0526 00:38:23.254355 2821 log.go:172] (0xc000b5d290) Data frame received for 3\nI0526 00:38:23.254369 2821 log.go:172] (0xc00042e820) (3) Data frame handling\nI0526 00:38:23.254376 2821 log.go:172] (0xc00042e820) (3) Data frame sent\nI0526 00:38:23.254385 2821 log.go:172] (0xc000b5d290) Data frame received for 5\nI0526 00:38:23.254390 2821 log.go:172] (0xc00042f040) (5) Data frame handling\nI0526 00:38:23.254395 2821 log.go:172] (0xc00042f040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.79.93:80/\nI0526 00:38:23.258855 2821 log.go:172] (0xc000b5d290) Data frame received for 3\nI0526 00:38:23.258874 2821 log.go:172] (0xc00042e820) (3) Data frame handling\nI0526 00:38:23.258883 2821 log.go:172] (0xc00042e820) (3) Data frame sent\nI0526 00:38:23.259437 2821 log.go:172] (0xc000b5d290) Data frame received for 5\nI0526 00:38:23.259454 2821 log.go:172] (0xc00042f040) (5) Data frame handling\nI0526 00:38:23.259461 2821 log.go:172] (0xc00042f040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.79.93:80/\nI0526 00:38:23.259473 2821 log.go:172] (0xc000b5d290) Data frame received for 3\nI0526 00:38:23.259478 2821 log.go:172] (0xc00042e820) (3) Data frame handling\nI0526 00:38:23.259489 2821 log.go:172] (0xc00042e820) (3) Data frame sent\nI0526 00:38:23.263012 2821 log.go:172] (0xc000b5d290) Data frame received for 3\nI0526 00:38:23.263041 2821 log.go:172] (0xc00042e820) (3) Data frame handling\nI0526 00:38:23.263063 2821 log.go:172] (0xc00042e820) (3) Data frame sent\nI0526 00:38:23.263414 2821 log.go:172] (0xc000b5d290) Data frame received for 5\nI0526 00:38:23.263447 2821 log.go:172] (0xc00042f040) (5) Data frame handling\nI0526 00:38:23.263460 2821 log.go:172] (0xc00042f040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.79.93:80/\nI0526 00:38:23.263481 2821 log.go:172] (0xc000b5d290) Data frame received for 3\nI0526 00:38:23.263498 2821 log.go:172] (0xc00042e820) (3) Data frame handling\nI0526 00:38:23.263511 2821 
log.go:172] (0xc00042e820) (3) Data frame sent\nI0526 00:38:23.268739 2821 log.go:172] (0xc000b5d290) Data frame received for 3\nI0526 00:38:23.268757 2821 log.go:172] (0xc00042e820) (3) Data frame handling\nI0526 00:38:23.268771 2821 log.go:172] (0xc00042e820) (3) Data frame sent\nI0526 00:38:23.269300 2821 log.go:172] (0xc000b5d290) Data frame received for 3\nI0526 00:38:23.269335 2821 log.go:172] (0xc000b5d290) Data frame received for 5\nI0526 00:38:23.269373 2821 log.go:172] (0xc00042f040) (5) Data frame handling\nI0526 00:38:23.269386 2821 log.go:172] (0xc00042f040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.79.93:80/\nI0526 00:38:23.269409 2821 log.go:172] (0xc00042e820) (3) Data frame handling\nI0526 00:38:23.269429 2821 log.go:172] (0xc00042e820) (3) Data frame sent\nI0526 00:38:23.273596 2821 log.go:172] (0xc000b5d290) Data frame received for 3\nI0526 00:38:23.273615 2821 log.go:172] (0xc00042e820) (3) Data frame handling\nI0526 00:38:23.273630 2821 log.go:172] (0xc00042e820) (3) Data frame sent\nI0526 00:38:23.274094 2821 log.go:172] (0xc000b5d290) Data frame received for 5\nI0526 00:38:23.274114 2821 log.go:172] (0xc00042f040) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.79.93:80/\nI0526 00:38:23.274132 2821 log.go:172] (0xc000b5d290) Data frame received for 3\nI0526 00:38:23.274154 2821 log.go:172] (0xc00042e820) (3) Data frame handling\nI0526 00:38:23.274175 2821 log.go:172] (0xc00042f040) (5) Data frame sent\nI0526 00:38:23.274199 2821 log.go:172] (0xc00042e820) (3) Data frame sent\nI0526 00:38:23.278114 2821 log.go:172] (0xc000b5d290) Data frame received for 3\nI0526 00:38:23.278150 2821 log.go:172] (0xc00042e820) (3) Data frame handling\nI0526 00:38:23.278174 2821 log.go:172] (0xc00042e820) (3) Data frame sent\nI0526 00:38:23.278415 2821 log.go:172] (0xc000b5d290) Data frame received for 5\nI0526 00:38:23.278431 2821 log.go:172] (0xc00042f040) (5) Data frame handling\nI0526 00:38:23.278448 2821 log.go:172] (0xc00042f040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.79.93:80/\nI0526 00:38:23.278639 2821 log.go:172] (0xc000b5d290) Data frame received for 3\nI0526 00:38:23.278662 2821 log.go:172] (0xc00042e820) (3) Data frame handling\nI0526 00:38:23.278684 2821 log.go:172] (0xc00042e820) (3) Data frame sent\nI0526 00:38:23.283344 2821 log.go:172] (0xc000b5d290) Data frame received for 3\nI0526 00:38:23.283363 2821 log.go:172] (0xc00042e820) (3) Data frame handling\nI0526 00:38:23.283383 2821 log.go:172] (0xc00042e820) (3) Data frame sent\nI0526 00:38:23.283894 2821 log.go:172] (0xc000b5d290) Data frame received for 5\nI0526 00:38:23.283911 2821 log.go:172] (0xc00042f040) (5) Data frame handling\nI0526 00:38:23.283923 2821 log.go:172] (0xc00042f040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.79.93:80/\nI0526 00:38:23.284017 2821 log.go:172] (0xc000b5d290) Data frame received for 3\nI0526 00:38:23.284040 2821 log.go:172] (0xc00042e820) (3) Data frame handling\nI0526 00:38:23.284059 2821 log.go:172] (0xc00042e820) (3) Data frame sent\nI0526 00:38:23.291415 2821 log.go:172] (0xc000b5d290) Data frame received for 3\nI0526 00:38:23.291435 2821 log.go:172] (0xc00042e820) (3) Data frame handling\nI0526 00:38:23.291447 2821 log.go:172] (0xc00042e820) (3) Data frame sent\nI0526 00:38:23.292026 2821 log.go:172] (0xc000b5d290) Data frame received for 5\nI0526 00:38:23.292048 2821 log.go:172] (0xc000b5d290) Data frame received for 3\nI0526 00:38:23.292065 
2821 log.go:172] (0xc00042e820) (3) Data frame handling\nI0526 00:38:23.292074 2821 log.go:172] (0xc00042e820) (3) Data frame sent\nI0526 00:38:23.292086 2821 log.go:172] (0xc00042f040) (5) Data frame handling\nI0526 00:38:23.292093 2821 log.go:172] (0xc00042f040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.79.93:80/\nI0526 00:38:23.296747 2821 log.go:172] (0xc000b5d290) Data frame received for 3\nI0526 00:38:23.296766 2821 log.go:172] (0xc00042e820) (3) Data frame handling\nI0526 00:38:23.296779 2821 log.go:172] (0xc00042e820) (3) Data frame sent\nI0526 00:38:23.298042 2821 log.go:172] (0xc000b5d290) Data frame received for 5\nI0526 00:38:23.298063 2821 log.go:172] (0xc00042f040) (5) Data frame handling\nI0526 00:38:23.298082 2821 log.go:172] (0xc00042f040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.79.93:80/\nI0526 00:38:23.298113 2821 log.go:172] (0xc000b5d290) Data frame received for 3\nI0526 00:38:23.298127 2821 log.go:172] (0xc00042e820) (3) Data frame handling\nI0526 00:38:23.298137 2821 log.go:172] (0xc00042e820) (3) Data frame sent\nI0526 00:38:23.301896 2821 log.go:172] (0xc000b5d290) Data frame received for 3\nI0526 00:38:23.301918 2821 log.go:172] (0xc00042e820) (3) Data frame handling\nI0526 00:38:23.301936 2821 log.go:172] (0xc00042e820) (3) Data frame sent\nI0526 00:38:23.302682 2821 log.go:172] (0xc000b5d290) Data frame received for 5\nI0526 00:38:23.302694 2821 log.go:172] (0xc00042f040) (5) Data frame handling\nI0526 00:38:23.302701 2821 log.go:172] (0xc00042f040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.79.93:80/\nI0526 00:38:23.302716 2821 log.go:172] (0xc000b5d290) Data frame received for 3\nI0526 00:38:23.302758 2821 log.go:172] (0xc00042e820) (3) Data frame handling\nI0526 00:38:23.302785 2821 log.go:172] (0xc00042e820) (3) Data frame sent\nI0526 00:38:23.307065 2821 log.go:172] (0xc000b5d290) Data frame received for 3\nI0526 00:38:23.307081 2821 log.go:172] (0xc00042e820) (3) Data frame handling\nI0526 00:38:23.307092 2821 log.go:172] (0xc00042e820) (3) Data frame sent\nI0526 00:38:23.307634 2821 log.go:172] (0xc000b5d290) Data frame received for 3\nI0526 00:38:23.307656 2821 log.go:172] (0xc00042e820) (3) Data frame handling\nI0526 00:38:23.307669 2821 log.go:172] (0xc00042e820) (3) Data frame sent\nI0526 00:38:23.307699 2821 log.go:172] (0xc000b5d290) Data frame received for 5\nI0526 00:38:23.307750 2821 log.go:172] (0xc00042f040) (5) Data frame handling\nI0526 00:38:23.307792 2821 log.go:172] (0xc00042f040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.79.93:80/\nI0526 00:38:23.312972 2821 log.go:172] (0xc000b5d290) Data frame received for 3\nI0526 00:38:23.312998 2821 log.go:172] (0xc00042e820) (3) Data frame handling\nI0526 00:38:23.313017 2821 log.go:172] (0xc00042e820) (3) Data frame sent\nI0526 00:38:23.313761 2821 log.go:172] (0xc000b5d290) Data frame received for 3\nI0526 00:38:23.313779 2821 log.go:172] (0xc00042e820) (3) Data frame handling\nI0526 00:38:23.313882 2821 log.go:172] (0xc000b5d290) Data frame received for 5\nI0526 00:38:23.313924 2821 log.go:172] (0xc00042f040) (5) Data frame handling\nI0526 00:38:23.315453 2821 log.go:172] (0xc000b5d290) Data frame received for 1\nI0526 00:38:23.315474 2821 log.go:172] (0xc0007a0780) (1) Data frame handling\nI0526 00:38:23.315487 2821 log.go:172] (0xc0007a0780) (1) Data frame sent\nI0526 00:38:23.315499 2821 log.go:172] (0xc000b5d290) (0xc0007a0780) Stream removed, broadcasting: 
1\nI0526 00:38:23.315542 2821 log.go:172] (0xc000b5d290) Go away received\nI0526 00:38:23.315894 2821 log.go:172] (0xc000b5d290) (0xc0007a0780) Stream removed, broadcasting: 1\nI0526 00:38:23.315911 2821 log.go:172] (0xc000b5d290) (0xc00042e820) Stream removed, broadcasting: 3\nI0526 00:38:23.315923 2821 log.go:172] (0xc000b5d290) (0xc00042f040) Stream removed, broadcasting: 5\n" May 26 00:38:23.321: INFO: stdout: "\naffinity-clusterip-transition-4q9v6\naffinity-clusterip-transition-4q9v6\naffinity-clusterip-transition-d9sqg\naffinity-clusterip-transition-4q9v6\naffinity-clusterip-transition-4q9v6\naffinity-clusterip-transition-d9sqg\naffinity-clusterip-transition-d9sqg\naffinity-clusterip-transition-t4p8t\naffinity-clusterip-transition-4q9v6\naffinity-clusterip-transition-t4p8t\naffinity-clusterip-transition-d9sqg\naffinity-clusterip-transition-t4p8t\naffinity-clusterip-transition-4q9v6\naffinity-clusterip-transition-t4p8t\naffinity-clusterip-transition-t4p8t\naffinity-clusterip-transition-4q9v6" May 26 00:38:23.321: INFO: Received response from host: May 26 00:38:23.321: INFO: Received response from host: affinity-clusterip-transition-4q9v6 May 26 00:38:23.321: INFO: Received response from host: affinity-clusterip-transition-4q9v6 May 26 00:38:23.321: INFO: Received response from host: affinity-clusterip-transition-d9sqg May 26 00:38:23.321: INFO: Received response from host: affinity-clusterip-transition-4q9v6 May 26 00:38:23.321: INFO: Received response from host: affinity-clusterip-transition-4q9v6 May 26 00:38:23.321: INFO: Received response from host: affinity-clusterip-transition-d9sqg May 26 00:38:23.321: INFO: Received response from host: affinity-clusterip-transition-d9sqg May 26 00:38:23.321: INFO: Received response from host: affinity-clusterip-transition-t4p8t May 26 00:38:23.321: INFO: Received response from host: affinity-clusterip-transition-4q9v6 May 26 00:38:23.321: INFO: Received response from host: affinity-clusterip-transition-t4p8t May 26 00:38:23.321: INFO: Received response from host: affinity-clusterip-transition-d9sqg May 26 00:38:23.322: INFO: Received response from host: affinity-clusterip-transition-t4p8t May 26 00:38:23.322: INFO: Received response from host: affinity-clusterip-transition-4q9v6 May 26 00:38:23.322: INFO: Received response from host: affinity-clusterip-transition-t4p8t May 26 00:38:23.322: INFO: Received response from host: affinity-clusterip-transition-t4p8t May 26 00:38:23.322: INFO: Received response from host: affinity-clusterip-transition-4q9v6 May 26 00:38:23.331: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8111 execpod-affinityc9v7k -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.107.79.93:80/ ; done' May 26 00:38:23.620: INFO: stderr: "I0526 00:38:23.470959 2840 log.go:172] (0xc00099f550) (0xc000a04640) Create stream\nI0526 00:38:23.471003 2840 log.go:172] (0xc00099f550) (0xc000a04640) Stream added, broadcasting: 1\nI0526 00:38:23.475404 2840 log.go:172] (0xc00099f550) Reply frame received for 1\nI0526 00:38:23.475432 2840 log.go:172] (0xc00099f550) (0xc00061e280) Create stream\nI0526 00:38:23.475440 2840 log.go:172] (0xc00099f550) (0xc00061e280) Stream added, broadcasting: 3\nI0526 00:38:23.476372 2840 log.go:172] (0xc00099f550) Reply frame received for 3\nI0526 00:38:23.476415 2840 log.go:172] (0xc00099f550) (0xc00061e820) Create stream\nI0526 00:38:23.476429 2840 log.go:172] (0xc00099f550) (0xc00061e820) 
Stream added, broadcasting: 5\nI0526 00:38:23.477334 2840 log.go:172] (0xc00099f550) Reply frame received for 5\nI0526 00:38:23.534792 2840 log.go:172] (0xc00099f550) Data frame received for 5\nI0526 00:38:23.534813 2840 log.go:172] (0xc00061e820) (5) Data frame handling\nI0526 00:38:23.534827 2840 log.go:172] (0xc00061e820) (5) Data frame sent\nI0526 00:38:23.534835 2840 log.go:172] (0xc00099f550) Data frame received for 3\nI0526 00:38:23.534840 2840 log.go:172] (0xc00061e280) (3) Data frame handling\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.79.93:80/\nI0526 00:38:23.534850 2840 log.go:172] (0xc00061e280) (3) Data frame sent\nI0526 00:38:23.542823 2840 log.go:172] (0xc00099f550) Data frame received for 3\nI0526 00:38:23.542852 2840 log.go:172] (0xc00061e280) (3) Data frame handling\nI0526 00:38:23.542876 2840 log.go:172] (0xc00061e280) (3) Data frame sent\nI0526 00:38:23.544249 2840 log.go:172] (0xc00099f550) Data frame received for 5\nI0526 00:38:23.544290 2840 log.go:172] (0xc00061e820) (5) Data frame handling\nI0526 00:38:23.544312 2840 log.go:172] (0xc00061e820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.79.93:80/\nI0526 00:38:23.544345 2840 log.go:172] (0xc00099f550) Data frame received for 3\nI0526 00:38:23.544368 2840 log.go:172] (0xc00061e280) (3) Data frame handling\nI0526 00:38:23.544386 2840 log.go:172] (0xc00061e280) (3) Data frame sent\nI0526 00:38:23.547925 2840 log.go:172] (0xc00099f550) Data frame received for 3\nI0526 00:38:23.547950 2840 log.go:172] (0xc00061e280) (3) Data frame handling\nI0526 00:38:23.547968 2840 log.go:172] (0xc00061e280) (3) Data frame sent\nI0526 00:38:23.548445 2840 log.go:172] (0xc00099f550) Data frame received for 3\nI0526 00:38:23.548460 2840 log.go:172] (0xc00061e280) (3) Data frame handling\nI0526 00:38:23.548475 2840 log.go:172] (0xc00099f550) Data frame received for 5\nI0526 00:38:23.548498 2840 log.go:172] (0xc00061e820) (5) Data frame handling\nI0526 00:38:23.548509 2840 log.go:172] (0xc00061e820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.79.93:80/\nI0526 00:38:23.548524 2840 log.go:172] (0xc00061e280) (3) Data frame sent\nI0526 00:38:23.553964 2840 log.go:172] (0xc00099f550) Data frame received for 3\nI0526 00:38:23.553979 2840 log.go:172] (0xc00061e280) (3) Data frame handling\nI0526 00:38:23.553987 2840 log.go:172] (0xc00061e280) (3) Data frame sent\nI0526 00:38:23.554496 2840 log.go:172] (0xc00099f550) Data frame received for 3\nI0526 00:38:23.554513 2840 log.go:172] (0xc00061e280) (3) Data frame handling\nI0526 00:38:23.554520 2840 log.go:172] (0xc00061e280) (3) Data frame sent\nI0526 00:38:23.554531 2840 log.go:172] (0xc00099f550) Data frame received for 5\nI0526 00:38:23.554535 2840 log.go:172] (0xc00061e820) (5) Data frame handling\nI0526 00:38:23.554541 2840 log.go:172] (0xc00061e820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.79.93:80/\nI0526 00:38:23.558993 2840 log.go:172] (0xc00099f550) Data frame received for 3\nI0526 00:38:23.559005 2840 log.go:172] (0xc00061e280) (3) Data frame handling\nI0526 00:38:23.559011 2840 log.go:172] (0xc00061e280) (3) Data frame sent\nI0526 00:38:23.559669 2840 log.go:172] (0xc00099f550) Data frame received for 5\nI0526 00:38:23.559678 2840 log.go:172] (0xc00061e820) (5) Data frame handling\nI0526 00:38:23.559684 2840 log.go:172] (0xc00061e820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.79.93:80/\nI0526 00:38:23.559700 2840 log.go:172] 
(0xc00099f550) Data frame received for 3\nI0526 00:38:23.559720 2840 log.go:172] (0xc00061e280) (3) Data frame handling\nI0526 00:38:23.559733 2840 log.go:172] (0xc00061e280) (3) Data frame sent\nI0526 00:38:23.563076 2840 log.go:172] (0xc00099f550) Data frame received for 3\nI0526 00:38:23.563086 2840 log.go:172] (0xc00061e280) (3) Data frame handling\nI0526 00:38:23.563092 2840 log.go:172] (0xc00061e280) (3) Data frame sent\nI0526 00:38:23.563682 2840 log.go:172] (0xc00099f550) Data frame received for 3\nI0526 00:38:23.563701 2840 log.go:172] (0xc00061e280) (3) Data frame handling\nI0526 00:38:23.563711 2840 log.go:172] (0xc00061e280) (3) Data frame sent\nI0526 00:38:23.563727 2840 log.go:172] (0xc00099f550) Data frame received for 5\nI0526 00:38:23.563737 2840 log.go:172] (0xc00061e820) (5) Data frame handling\nI0526 00:38:23.563748 2840 log.go:172] (0xc00061e820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.79.93:80/\nI0526 00:38:23.566861 2840 log.go:172] (0xc00099f550) Data frame received for 3\nI0526 00:38:23.566873 2840 log.go:172] (0xc00061e280) (3) Data frame handling\nI0526 00:38:23.566879 2840 log.go:172] (0xc00061e280) (3) Data frame sent\nI0526 00:38:23.567554 2840 log.go:172] (0xc00099f550) Data frame received for 5\nI0526 00:38:23.567566 2840 log.go:172] (0xc00061e820) (5) Data frame handling\nI0526 00:38:23.567575 2840 log.go:172] (0xc00061e820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.79.93:80/\nI0526 00:38:23.567599 2840 log.go:172] (0xc00099f550) Data frame received for 3\nI0526 00:38:23.567624 2840 log.go:172] (0xc00061e280) (3) Data frame handling\nI0526 00:38:23.567642 2840 log.go:172] (0xc00061e280) (3) Data frame sent\nI0526 00:38:23.570607 2840 log.go:172] (0xc00099f550) Data frame received for 3\nI0526 00:38:23.570628 2840 log.go:172] (0xc00061e280) (3) Data frame handling\nI0526 00:38:23.570652 2840 log.go:172] (0xc00061e280) (3) Data frame sent\nI0526 00:38:23.570996 2840 log.go:172] (0xc00099f550) Data frame received for 5\nI0526 00:38:23.571016 2840 log.go:172] (0xc00061e820) (5) Data frame handling\nI0526 00:38:23.571038 2840 log.go:172] (0xc00061e820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.79.93:80/\nI0526 00:38:23.571214 2840 log.go:172] (0xc00099f550) Data frame received for 3\nI0526 00:38:23.571238 2840 log.go:172] (0xc00061e280) (3) Data frame handling\nI0526 00:38:23.571270 2840 log.go:172] (0xc00061e280) (3) Data frame sent\nI0526 00:38:23.577374 2840 log.go:172] (0xc00099f550) Data frame received for 3\nI0526 00:38:23.577481 2840 log.go:172] (0xc00061e280) (3) Data frame handling\nI0526 00:38:23.577527 2840 log.go:172] (0xc00061e280) (3) Data frame sent\nI0526 00:38:23.577773 2840 log.go:172] (0xc00099f550) Data frame received for 5\nI0526 00:38:23.577803 2840 log.go:172] (0xc00061e820) (5) Data frame handling\nI0526 00:38:23.577814 2840 log.go:172] (0xc00061e820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.79.93:80/\nI0526 00:38:23.577826 2840 log.go:172] (0xc00099f550) Data frame received for 3\nI0526 00:38:23.577874 2840 log.go:172] (0xc00061e280) (3) Data frame handling\nI0526 00:38:23.577900 2840 log.go:172] (0xc00061e280) (3) Data frame sent\nI0526 00:38:23.581319 2840 log.go:172] (0xc00099f550) Data frame received for 3\nI0526 00:38:23.581394 2840 log.go:172] (0xc00061e280) (3) Data frame handling\nI0526 00:38:23.581446 2840 log.go:172] (0xc00061e280) (3) Data frame sent\nI0526 00:38:23.582245 2840 log.go:172] 
(0xc00099f550) Data frame received for 5\nI0526 00:38:23.582270 2840 log.go:172] (0xc00061e820) (5) Data frame handling\nI0526 00:38:23.582296 2840 log.go:172] (0xc00061e820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.79.93:80/\nI0526 00:38:23.582387 2840 log.go:172] (0xc00099f550) Data frame received for 3\nI0526 00:38:23.582403 2840 log.go:172] (0xc00061e280) (3) Data frame handling\nI0526 00:38:23.582424 2840 log.go:172] (0xc00061e280) (3) Data frame sent\nI0526 00:38:23.586075 2840 log.go:172] (0xc00099f550) Data frame received for 3\nI0526 00:38:23.586099 2840 log.go:172] (0xc00061e280) (3) Data frame handling\nI0526 00:38:23.586117 2840 log.go:172] (0xc00061e280) (3) Data frame sent\nI0526 00:38:23.586603 2840 log.go:172] (0xc00099f550) Data frame received for 3\nI0526 00:38:23.586633 2840 log.go:172] (0xc00061e280) (3) Data frame handling\nI0526 00:38:23.586654 2840 log.go:172] (0xc00061e280) (3) Data frame sent\nI0526 00:38:23.586674 2840 log.go:172] (0xc00099f550) Data frame received for 5\nI0526 00:38:23.586686 2840 log.go:172] (0xc00061e820) (5) Data frame handling\nI0526 00:38:23.586697 2840 log.go:172] (0xc00061e820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.79.93:80/\nI0526 00:38:23.590449 2840 log.go:172] (0xc00099f550) Data frame received for 3\nI0526 00:38:23.590468 2840 log.go:172] (0xc00061e280) (3) Data frame handling\nI0526 00:38:23.590478 2840 log.go:172] (0xc00061e280) (3) Data frame sent\nI0526 00:38:23.590916 2840 log.go:172] (0xc00099f550) Data frame received for 3\nI0526 00:38:23.590945 2840 log.go:172] (0xc00061e280) (3) Data frame handling\nI0526 00:38:23.590961 2840 log.go:172] (0xc00061e280) (3) Data frame sent\nI0526 00:38:23.590987 2840 log.go:172] (0xc00099f550) Data frame received for 5\nI0526 00:38:23.590999 2840 log.go:172] (0xc00061e820) (5) Data frame handling\nI0526 00:38:23.591012 2840 log.go:172] (0xc00061e820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.79.93:80/\nI0526 00:38:23.595274 2840 log.go:172] (0xc00099f550) Data frame received for 3\nI0526 00:38:23.595295 2840 log.go:172] (0xc00061e280) (3) Data frame handling\nI0526 00:38:23.595313 2840 log.go:172] (0xc00061e280) (3) Data frame sent\nI0526 00:38:23.595714 2840 log.go:172] (0xc00099f550) Data frame received for 3\nI0526 00:38:23.595740 2840 log.go:172] (0xc00061e280) (3) Data frame handling\nI0526 00:38:23.595751 2840 log.go:172] (0xc00061e280) (3) Data frame sent\nI0526 00:38:23.595765 2840 log.go:172] (0xc00099f550) Data frame received for 5\nI0526 00:38:23.595772 2840 log.go:172] (0xc00061e820) (5) Data frame handling\nI0526 00:38:23.595779 2840 log.go:172] (0xc00061e820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.79.93:80/\nI0526 00:38:23.599329 2840 log.go:172] (0xc00099f550) Data frame received for 3\nI0526 00:38:23.599342 2840 log.go:172] (0xc00061e280) (3) Data frame handling\nI0526 00:38:23.599353 2840 log.go:172] (0xc00061e280) (3) Data frame sent\nI0526 00:38:23.599642 2840 log.go:172] (0xc00099f550) Data frame received for 3\nI0526 00:38:23.599668 2840 log.go:172] (0xc00061e280) (3) Data frame handling\nI0526 00:38:23.599684 2840 log.go:172] (0xc00061e280) (3) Data frame sent\nI0526 00:38:23.599705 2840 log.go:172] (0xc00099f550) Data frame received for 5\nI0526 00:38:23.599721 2840 log.go:172] (0xc00061e820) (5) Data frame handling\nI0526 00:38:23.599742 2840 log.go:172] (0xc00061e820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 
2 http://10.107.79.93:80/\nI0526 00:38:23.604387 2840 log.go:172] (0xc00099f550) Data frame received for 3\nI0526 00:38:23.604413 2840 log.go:172] (0xc00061e280) (3) Data frame handling\nI0526 00:38:23.604437 2840 log.go:172] (0xc00061e280) (3) Data frame sent\nI0526 00:38:23.605002 2840 log.go:172] (0xc00099f550) Data frame received for 5\nI0526 00:38:23.605014 2840 log.go:172] (0xc00061e820) (5) Data frame handling\nI0526 00:38:23.605025 2840 log.go:172] (0xc00061e820) (5) Data frame sent\nI0526 00:38:23.605031 2840 log.go:172] (0xc00099f550) Data frame received for 5\nI0526 00:38:23.605037 2840 log.go:172] (0xc00061e820) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.79.93:80/\nI0526 00:38:23.605053 2840 log.go:172] (0xc00061e820) (5) Data frame sent\nI0526 00:38:23.605092 2840 log.go:172] (0xc00099f550) Data frame received for 3\nI0526 00:38:23.605296 2840 log.go:172] (0xc00061e280) (3) Data frame handling\nI0526 00:38:23.605331 2840 log.go:172] (0xc00061e280) (3) Data frame sent\nI0526 00:38:23.608754 2840 log.go:172] (0xc00099f550) Data frame received for 3\nI0526 00:38:23.608778 2840 log.go:172] (0xc00061e280) (3) Data frame handling\nI0526 00:38:23.608795 2840 log.go:172] (0xc00061e280) (3) Data frame sent\nI0526 00:38:23.609414 2840 log.go:172] (0xc00099f550) Data frame received for 5\nI0526 00:38:23.609443 2840 log.go:172] (0xc00061e820) (5) Data frame handling\nI0526 00:38:23.609462 2840 log.go:172] (0xc00061e820) (5) Data frame sent\nI0526 00:38:23.609473 2840 log.go:172] (0xc00099f550) Data frame received for 5\nI0526 00:38:23.609485 2840 log.go:172] (0xc00061e820) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.79.93:80/\nI0526 00:38:23.609506 2840 log.go:172] (0xc00061e820) (5) Data frame sent\nI0526 00:38:23.609558 2840 log.go:172] (0xc00099f550) Data frame received for 3\nI0526 00:38:23.609591 2840 log.go:172] (0xc00061e280) (3) Data frame handling\nI0526 00:38:23.609625 2840 log.go:172] (0xc00061e280) (3) Data frame sent\nI0526 00:38:23.613255 2840 log.go:172] (0xc00099f550) Data frame received for 3\nI0526 00:38:23.613281 2840 log.go:172] (0xc00061e280) (3) Data frame handling\nI0526 00:38:23.613291 2840 log.go:172] (0xc00061e280) (3) Data frame sent\nI0526 00:38:23.613927 2840 log.go:172] (0xc00099f550) Data frame received for 3\nI0526 00:38:23.613957 2840 log.go:172] (0xc00061e280) (3) Data frame handling\nI0526 00:38:23.614151 2840 log.go:172] (0xc00099f550) Data frame received for 5\nI0526 00:38:23.614176 2840 log.go:172] (0xc00061e820) (5) Data frame handling\nI0526 00:38:23.615949 2840 log.go:172] (0xc00099f550) Data frame received for 1\nI0526 00:38:23.615971 2840 log.go:172] (0xc000a04640) (1) Data frame handling\nI0526 00:38:23.616000 2840 log.go:172] (0xc000a04640) (1) Data frame sent\nI0526 00:38:23.616020 2840 log.go:172] (0xc00099f550) (0xc000a04640) Stream removed, broadcasting: 1\nI0526 00:38:23.616079 2840 log.go:172] (0xc00099f550) Go away received\nI0526 00:38:23.616316 2840 log.go:172] (0xc00099f550) (0xc000a04640) Stream removed, broadcasting: 1\nI0526 00:38:23.616339 2840 log.go:172] (0xc00099f550) (0xc00061e280) Stream removed, broadcasting: 3\nI0526 00:38:23.616352 2840 log.go:172] (0xc00099f550) (0xc00061e820) Stream removed, broadcasting: 5\n" May 26 00:38:23.621: INFO: stdout: 
"\naffinity-clusterip-transition-4q9v6\naffinity-clusterip-transition-4q9v6\naffinity-clusterip-transition-4q9v6\naffinity-clusterip-transition-4q9v6\naffinity-clusterip-transition-4q9v6\naffinity-clusterip-transition-4q9v6\naffinity-clusterip-transition-4q9v6\naffinity-clusterip-transition-4q9v6\naffinity-clusterip-transition-4q9v6\naffinity-clusterip-transition-4q9v6\naffinity-clusterip-transition-4q9v6\naffinity-clusterip-transition-4q9v6\naffinity-clusterip-transition-4q9v6\naffinity-clusterip-transition-4q9v6\naffinity-clusterip-transition-4q9v6\naffinity-clusterip-transition-4q9v6" May 26 00:38:23.621: INFO: Received response from host: May 26 00:38:23.621: INFO: Received response from host: affinity-clusterip-transition-4q9v6 May 26 00:38:23.621: INFO: Received response from host: affinity-clusterip-transition-4q9v6 May 26 00:38:23.621: INFO: Received response from host: affinity-clusterip-transition-4q9v6 May 26 00:38:23.621: INFO: Received response from host: affinity-clusterip-transition-4q9v6 May 26 00:38:23.621: INFO: Received response from host: affinity-clusterip-transition-4q9v6 May 26 00:38:23.621: INFO: Received response from host: affinity-clusterip-transition-4q9v6 May 26 00:38:23.621: INFO: Received response from host: affinity-clusterip-transition-4q9v6 May 26 00:38:23.621: INFO: Received response from host: affinity-clusterip-transition-4q9v6 May 26 00:38:23.621: INFO: Received response from host: affinity-clusterip-transition-4q9v6 May 26 00:38:23.621: INFO: Received response from host: affinity-clusterip-transition-4q9v6 May 26 00:38:23.621: INFO: Received response from host: affinity-clusterip-transition-4q9v6 May 26 00:38:23.621: INFO: Received response from host: affinity-clusterip-transition-4q9v6 May 26 00:38:23.621: INFO: Received response from host: affinity-clusterip-transition-4q9v6 May 26 00:38:23.621: INFO: Received response from host: affinity-clusterip-transition-4q9v6 May 26 00:38:23.621: INFO: Received response from host: affinity-clusterip-transition-4q9v6 May 26 00:38:23.621: INFO: Received response from host: affinity-clusterip-transition-4q9v6 May 26 00:38:23.621: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-8111, will wait for the garbage collector to delete the pods May 26 00:38:23.738: INFO: Deleting ReplicationController affinity-clusterip-transition took: 17.440655ms May 26 00:38:24.339: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 600.266911ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:38:35.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8111" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:24.089 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":202,"skipped":3343,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:38:35.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-25a70a30-2dc6-406f-a3ae-d77301aa7995 STEP: Creating a pod to test consume configMaps May 26 00:38:35.498: INFO: Waiting up to 5m0s for pod "pod-configmaps-9f4ba076-2f2a-4215-8d1b-74a88974f6cf" in namespace "configmap-8413" to be "Succeeded or Failed" May 26 00:38:35.501: INFO: Pod "pod-configmaps-9f4ba076-2f2a-4215-8d1b-74a88974f6cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.734045ms May 26 00:38:37.505: INFO: Pod "pod-configmaps-9f4ba076-2f2a-4215-8d1b-74a88974f6cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007153052s May 26 00:38:39.510: INFO: Pod "pod-configmaps-9f4ba076-2f2a-4215-8d1b-74a88974f6cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011888027s STEP: Saw pod success May 26 00:38:39.510: INFO: Pod "pod-configmaps-9f4ba076-2f2a-4215-8d1b-74a88974f6cf" satisfied condition "Succeeded or Failed" May 26 00:38:39.514: INFO: Trying to get logs from node latest-worker pod pod-configmaps-9f4ba076-2f2a-4215-8d1b-74a88974f6cf container configmap-volume-test: STEP: delete the pod May 26 00:38:39.547: INFO: Waiting for pod pod-configmaps-9f4ba076-2f2a-4215-8d1b-74a88974f6cf to disappear May 26 00:38:39.557: INFO: Pod pod-configmaps-9f4ba076-2f2a-4215-8d1b-74a88974f6cf no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:38:39.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8413" for this suite. 
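In the ConfigMap test above, defaultMode is the detail being verified: the permission bits applied to every file projected from the ConfigMap volume. A minimal equivalent manifest, with hypothetical names in place of the generated test names and a busybox container that just stats the file:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: demo-config            # hypothetical, not the generated test name
  data:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: demo-configmap-mode
  spec:
    restartPolicy: Never
    containers:
    - name: check
      image: busybox:1.29
      command: ["sh", "-c", "stat -c '%a' /etc/config/data-1"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/config
    volumes:
    - name: cfg
      configMap:
        name: demo-config
        defaultMode: 0400        # every projected file becomes r--------
  EOF

The container should print 400 and the pod should end up Succeeded, which is the "Succeeded or Failed" condition the framework polls for above.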
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":203,"skipped":3355,"failed":0} SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:38:39.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 26 00:38:39.625: INFO: PodSpec: initContainers in spec.initContainers May 26 00:39:28.928: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-e06bcbe1-3a9b-41b8-b260-1599f5fb303b", GenerateName:"", Namespace:"init-container-9356", SelfLink:"/api/v1/namespaces/init-container-9356/pods/pod-init-e06bcbe1-3a9b-41b8-b260-1599f5fb303b", UID:"1bc3bb28-f867-40ca-aeb1-0f4dd5f48e33", ResourceVersion:"7692080", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63726050319, loc:(*time.Location)(0x7c342a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"625957618"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00262e0e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00262e120)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00262e160), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00262e1a0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-n5ghn", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0003e6400), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), 
AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-n5ghn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-n5ghn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-n5ghn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc005466098), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", 
NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002508000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc005466130)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc005466150)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc005466158), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00546615c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726050319, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726050319, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726050319, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726050319, loc:(*time.Location)(0x7c342a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.13", PodIP:"10.244.1.199", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.199"}}, StartTime:(*v1.Time)(0xc00262e1e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0025081c0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002508230)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://1bd8f42736def25d6f613eec090dd97e86f94ed8364bc995432accb99c2b03f7", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00262e260), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00262e220), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc0054661df)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:39:28.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9356" for this suite. • [SLOW TEST:49.414 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":288,"completed":204,"skipped":3362,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:39:28.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:39:45.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-965" for this suite. • [SLOW TEST:16.128 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":288,"completed":205,"skipped":3425,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:39:45.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-9c29b7a2-6b05-4fb8-a560-c5bbd9d0610c in namespace container-probe-1196 May 26 00:39:49.235: INFO: Started pod test-webserver-9c29b7a2-6b05-4fb8-a560-c5bbd9d0610c in namespace container-probe-1196 STEP: checking the pod's current state and verifying that restartCount is present May 26 00:39:49.239: INFO: Initial restart count of pod test-webserver-9c29b7a2-6b05-4fb8-a560-c5bbd9d0610c is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:43:50.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1196" for this suite. 
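The four minutes between pod start (00:39:49) and teardown (00:43:50) are the test itself: the framework watches restartCount and passes only if it stays at 0 while the HTTP liveness probe keeps succeeding. A sketch of a comparable probe, assuming any image that answers 200 on /:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: demo-liveness          # hypothetical name
  spec:
    containers:
    - name: web
      image: nginx               # stands in for the e2e test-webserver image
      livenessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 15
        periodSeconds: 5
        failureThreshold: 3      # kubelet restarts only after 3 straight failures
  EOF

  # While the probe keeps succeeding, this stays at 0:
  kubectl get pod demo-liveness \
    -o jsonpath='{.status.containerStatuses[0].restartCount}'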
• [SLOW TEST:245.051 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":206,"skipped":3435,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:43:50.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 26 00:43:54.519: INFO: Waiting up to 5m0s for pod "client-envvars-b4c4ddbb-725c-427f-86de-233c65ca497f" in namespace "pods-3634" to be "Succeeded or Failed" May 26 00:43:54.618: INFO: Pod "client-envvars-b4c4ddbb-725c-427f-86de-233c65ca497f": Phase="Pending", Reason="", readiness=false. Elapsed: 99.145538ms May 26 00:43:56.623: INFO: Pod "client-envvars-b4c4ddbb-725c-427f-86de-233c65ca497f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103706292s May 26 00:43:58.627: INFO: Pod "client-envvars-b4c4ddbb-725c-427f-86de-233c65ca497f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.107274398s STEP: Saw pod success May 26 00:43:58.627: INFO: Pod "client-envvars-b4c4ddbb-725c-427f-86de-233c65ca497f" satisfied condition "Succeeded or Failed" May 26 00:43:58.630: INFO: Trying to get logs from node latest-worker pod client-envvars-b4c4ddbb-725c-427f-86de-233c65ca497f container env3cont: STEP: delete the pod May 26 00:43:58.676: INFO: Waiting for pod client-envvars-b4c4ddbb-725c-427f-86de-233c65ca497f to disappear May 26 00:43:58.699: INFO: Pod client-envvars-b4c4ddbb-725c-427f-86de-233c65ca497f no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:43:58.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3634" for this suite. 
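What the client-envvars pod checks above is the legacy docker-link-style injection: for every Service that exists in the pod's namespace when the pod starts, the kubelet sets <NAME>_SERVICE_HOST and <NAME>_SERVICE_PORT environment variables (service name uppercased, dashes becoming underscores). A rough way to observe the same thing, with hypothetical names:

  # Assumes a service "fooservice" was created before "client-pod" started;
  # pods that predate the service do not get the variables.
  kubectl exec client-pod -- printenv | grep -E 'FOOSERVICE_SERVICE_(HOST|PORT)'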
• [SLOW TEST:8.545 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":288,"completed":207,"skipped":3484,"failed":0} SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:43:58.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-854ec47d-aa8a-41e1-8a7b-23e9eec566ec STEP: Creating a pod to test consume secrets May 26 00:43:58.839: INFO: Waiting up to 5m0s for pod "pod-secrets-4d5428e3-0dca-4a7c-b6ab-e9cb31353c68" in namespace "secrets-990" to be "Succeeded or Failed" May 26 00:43:58.842: INFO: Pod "pod-secrets-4d5428e3-0dca-4a7c-b6ab-e9cb31353c68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.664254ms May 26 00:44:00.846: INFO: Pod "pod-secrets-4d5428e3-0dca-4a7c-b6ab-e9cb31353c68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006567783s May 26 00:44:02.858: INFO: Pod "pod-secrets-4d5428e3-0dca-4a7c-b6ab-e9cb31353c68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018680075s STEP: Saw pod success May 26 00:44:02.858: INFO: Pod "pod-secrets-4d5428e3-0dca-4a7c-b6ab-e9cb31353c68" satisfied condition "Succeeded or Failed" May 26 00:44:02.860: INFO: Trying to get logs from node latest-worker pod pod-secrets-4d5428e3-0dca-4a7c-b6ab-e9cb31353c68 container secret-volume-test: STEP: delete the pod May 26 00:44:02.918: INFO: Waiting for pod pod-secrets-4d5428e3-0dca-4a7c-b6ab-e9cb31353c68 to disappear May 26 00:44:02.930: INFO: Pod pod-secrets-4d5428e3-0dca-4a7c-b6ab-e9cb31353c68 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:44:02.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-990" for this suite. 
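"With mappings" in the Secrets test above refers to the items list of the secret volume: instead of projecting one file per key at its key name, each listed key is placed at an explicit relative path. A sketch with hypothetical names:

  kubectl create secret generic demo-secret --from-literal=data-1=value-1

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: demo-secret-mapping
  spec:
    restartPolicy: Never
    containers:
    - name: check
      image: busybox:1.29
      command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
        readOnly: true
    volumes:
    - name: secret-volume
      secret:
        secretName: demo-secret
        items:
        - key: data-1                # secret key ...
          path: new-path-data-1      # ... exposed under this relative path
  EOF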
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":208,"skipped":3486,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:44:02.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 26 00:44:03.475: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 26 00:44:05.484: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726050643, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726050643, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726050643, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726050643, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 00:44:08.518: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 26 00:44:08.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:44:09.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-3862" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.007 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":288,"completed":209,"skipped":3500,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:44:09.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 26 00:44:09.983: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 26 00:44:21.544: INFO: >>> kubeConfig: /root/.kube/config May 26 00:44:23.503: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:44:34.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2605" for this suite. 
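In the OpenAPI test above, the "one multiversion CRD" case reduces to a single CRD with two entries under spec.versions, both served and exactly one marked as the storage version; the apiserver then publishes both schemas into the aggregated OpenAPI document that kubectl explain reads. A minimal sketch with hypothetical names:

  kubectl apply -f - <<'EOF'
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: multifoos.stable.example.com
  spec:
    group: stable.example.com
    scope: Namespaced
    names: {plural: multifoos, singular: multifoo, kind: MultiFoo}
    versions:
    - name: v2
      served: true
      storage: true                      # exactly one storage version
      schema: {openAPIV3Schema: {type: object}}
    - name: v3
      served: true
      storage: false
      schema: {openAPIV3Schema: {type: object}}
  EOF

  # Both versions should now show up in the published documentation:
  kubectl explain multifoos --api-version=stable.example.com/v3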
• [SLOW TEST:24.243 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":288,"completed":210,"skipped":3511,"failed":0} [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:44:34.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 26 00:44:34.236: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 26 00:44:34.249: INFO: Waiting for terminating namespaces to be deleted... May 26 00:44:34.251: INFO: Logging pods the apiserver thinks are on node latest-worker before test May 26 00:44:34.256: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container status recorded) May 26 00:44:34.256: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 26 00:44:34.256: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container status recorded) May 26 00:44:34.256: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 26 00:44:34.256: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 26 00:44:34.256: INFO: Container kindnet-cni ready: true, restart count 0 May 26 00:44:34.256: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 26 00:44:34.256: INFO: Container kube-proxy ready: true, restart count 0 May 26 00:44:34.256: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test May 26 00:44:34.260: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container status recorded) May 26 00:44:34.260: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 26 00:44:34.260: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container status recorded) May 26 00:44:34.260: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 26 00:44:34.260: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 26 00:44:34.260: INFO: Container kindnet-cni ready: true, restart count 0 May 26 00:44:34.260: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 26 00:44:34.260: INFO:
Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-5938ef99-5d20-4343-983a-3bb69cd770ea 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-5938ef99-5d20-4343-983a-3bb69cd770ea off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-5938ef99-5d20-4343-983a-3bb69cd770ea [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:44:42.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7925" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.359 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":288,"completed":211,"skipped":3511,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:44:42.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-7a5dff5e-7aa3-4769-8577-05615a6e17ad STEP: Creating a pod to test consume configMaps May 26 00:44:42.681: INFO: Waiting up to 5m0s for pod "pod-configmaps-bc45a569-edf5-4356-80ad-6c3428836dcf" in namespace "configmap-915" to be "Succeeded or Failed" May 26 00:44:42.688: INFO: Pod "pod-configmaps-bc45a569-edf5-4356-80ad-6c3428836dcf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.88762ms May 26 00:44:44.697: INFO: Pod "pod-configmaps-bc45a569-edf5-4356-80ad-6c3428836dcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015345418s May 26 00:44:46.701: INFO: Pod "pod-configmaps-bc45a569-edf5-4356-80ad-6c3428836dcf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019543577s STEP: Saw pod success May 26 00:44:46.701: INFO: Pod "pod-configmaps-bc45a569-edf5-4356-80ad-6c3428836dcf" satisfied condition "Succeeded or Failed" May 26 00:44:46.703: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-bc45a569-edf5-4356-80ad-6c3428836dcf container configmap-volume-test: STEP: delete the pod May 26 00:44:46.742: INFO: Waiting for pod pod-configmaps-bc45a569-edf5-4356-80ad-6c3428836dcf to disappear May 26 00:44:46.766: INFO: Pod pod-configmaps-bc45a569-edf5-4356-80ad-6c3428836dcf no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:44:46.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-915" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":212,"skipped":3521,"failed":0} ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:44:46.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3407.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3407.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 26 00:44:53.025: INFO: DNS probes using dns-3407/dns-test-71c63aa2-58bf-4307-bb2a-f7671f3a5d56 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:44:53.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3407" for this suite. • [SLOW TEST:6.375 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":288,"completed":213,"skipped":3521,"failed":0} SSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:44:53.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-4100 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-4100 I0526 00:44:53.930024 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-4100, replica count: 2 I0526 00:44:56.980516 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0526 00:44:59.980776 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 26 00:44:59.980: INFO: Creating new exec pod May 26 00:45:05.027: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4100 execpodxtfxz -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 26 00:45:05.250: INFO: stderr: "I0526 00:45:05.168516 2862 log.go:172] (0xc0007f4840) (0xc0006ecaa0) Create stream\nI0526 00:45:05.168587 2862 log.go:172] (0xc0007f4840) (0xc0006ecaa0) Stream added, broadcasting: 1\nI0526 00:45:05.172206 2862 log.go:172] (0xc0007f4840) 
------------------------------
[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 26 00:44:53.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a service externalname-service with the type=ExternalName in namespace services-4100
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-4100
I0526 00:44:53.930024 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-4100, replica count: 2
I0526 00:44:56.980516 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0526 00:44:59.980776 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 26 00:44:59.980: INFO: Creating new exec pod
May 26 00:45:05.027: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4100 execpodxtfxz -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
May 26 00:45:05.250: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" [SPDY stream set-up/tear-down frames from log.go:172 elided]
May 26 00:45:05.251: INFO: stdout: ""
May 26 00:45:05.251: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4100 execpodxtfxz -- /bin/sh -x -c nc -zv -t -w 2 10.103.18.177 80'
May 26 00:45:05.475: INFO: stderr: "+ nc -zv -t -w 2 10.103.18.177 80\nConnection to 10.103.18.177 80 port [tcp/http] succeeded!\n" [SPDY stream set-up/tear-down frames from log.go:172 elided]
May 26 00:45:05.475: INFO: stdout: ""
May 26 00:45:05.475: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 26 00:45:05.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4100" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:12.337 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":288,"completed":214,"skipped":3527,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
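
The transition above is driven entirely through the Service spec: the test creates externalname-service as type=ExternalName, rewrites it as type=ClusterIP backed by a two-replica replication controller, and then proves reachability from an exec pod twice, once by service name and once by the allocated ClusterIP (10.103.18.177). A hand-run sketch of the same flip; the external name, selector, and port are illustrative, and kubectl apply stands in for the API calls the suite makes directly:

# Start as ExternalName: no ClusterIP is allocated and DNS serves a CNAME.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: ExternalName
  externalName: example.com
EOF

# Rewrite the same object as ClusterIP; it now needs a port, and its selector
# must match running pods before a connection attempt can succeed.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: ClusterIP
  selector:
    name: externalname-service
  ports:
  - port: 80
EOF

# The same reachability check the test runs from its exec pod; the image must
# ship a netcat that understands -z/-v.
kubectl run probe --rm -i --restart=Never --image=busybox -- \
  nc -zv -w 2 externalname-service 80
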
------------------------------
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 26 00:45:05.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 26 00:45:05.657: INFO: The status of Pod test-webserver-ef3c29da-0662-48d4-a7c8-b4cd107b550d is Pending, waiting for it to be Running (with Ready = true)
May 26 00:45:07.798: INFO: The status of Pod test-webserver-ef3c29da-0662-48d4-a7c8-b4cd107b550d is Pending, waiting for it to be Running (with Ready = true)
May 26 00:45:09.662: INFO: The status of Pod test-webserver-ef3c29da-0662-48d4-a7c8-b4cd107b550d is Running (Ready = false)
May 26 00:45:11.662: INFO: The status of Pod test-webserver-ef3c29da-0662-48d4-a7c8-b4cd107b550d is Running (Ready = false)
May 26 00:45:13.679: INFO: The status of Pod test-webserver-ef3c29da-0662-48d4-a7c8-b4cd107b550d is Running (Ready = false)
May 26 00:45:15.662: INFO: The status of Pod test-webserver-ef3c29da-0662-48d4-a7c8-b4cd107b550d is Running (Ready = false)
May 26 00:45:17.662: INFO: The status of Pod test-webserver-ef3c29da-0662-48d4-a7c8-b4cd107b550d is Running (Ready = false)
May 26 00:45:19.666: INFO: The status of Pod test-webserver-ef3c29da-0662-48d4-a7c8-b4cd107b550d is Running (Ready = false)
May 26 00:45:21.660: INFO: The status of Pod test-webserver-ef3c29da-0662-48d4-a7c8-b4cd107b550d is Running (Ready = false)
May 26 00:45:23.662: INFO: The status of Pod test-webserver-ef3c29da-0662-48d4-a7c8-b4cd107b550d is Running (Ready = false)
May 26 00:45:25.662: INFO: The status of Pod test-webserver-ef3c29da-0662-48d4-a7c8-b4cd107b550d is Running (Ready = true)
May 26 00:45:25.666: INFO: Container started at 2020-05-26 00:45:08 +0000 UTC, pod became ready at 2020-05-26 00:45:24 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 26 00:45:25.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-896" for this suite.

• [SLOW TEST:20.138 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":288,"completed":215,"skipped":3552,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 26 00:45:25.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 26 00:45:25.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-3155
I0526 00:45:25.739565 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3155, replica count: 1
I0526 00:45:26.790023 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0526 00:45:27.790298 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0526 00:45:28.790530 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 
created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0526 00:45:29.790807 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 26 00:45:29.906: INFO: Created: latency-svc-mp7tz May 26 00:45:29.956: INFO: Got endpoints: latency-svc-mp7tz [65.43375ms] May 26 00:45:30.153: INFO: Created: latency-svc-d6sct May 26 00:45:30.177: INFO: Got endpoints: latency-svc-d6sct [220.440724ms] May 26 00:45:30.177: INFO: Created: latency-svc-nrvq9 May 26 00:45:30.201: INFO: Got endpoints: latency-svc-nrvq9 [244.317693ms] May 26 00:45:30.230: INFO: Created: latency-svc-m6szl May 26 00:45:30.247: INFO: Got endpoints: latency-svc-m6szl [290.081419ms] May 26 00:45:30.302: INFO: Created: latency-svc-spbpw May 26 00:45:30.327: INFO: Got endpoints: latency-svc-spbpw [370.158724ms] May 26 00:45:30.327: INFO: Created: latency-svc-bzg8r May 26 00:45:30.350: INFO: Got endpoints: latency-svc-bzg8r [393.606576ms] May 26 00:45:30.380: INFO: Created: latency-svc-9ldhl May 26 00:45:30.440: INFO: Got endpoints: latency-svc-9ldhl [483.535974ms] May 26 00:45:30.454: INFO: Created: latency-svc-k9lx6 May 26 00:45:30.463: INFO: Got endpoints: latency-svc-k9lx6 [506.987517ms] May 26 00:45:30.483: INFO: Created: latency-svc-l6gt2 May 26 00:45:30.493: INFO: Got endpoints: latency-svc-l6gt2 [536.907424ms] May 26 00:45:30.524: INFO: Created: latency-svc-wkwdm May 26 00:45:30.614: INFO: Got endpoints: latency-svc-wkwdm [657.18571ms] May 26 00:45:30.620: INFO: Created: latency-svc-thf7d May 26 00:45:30.644: INFO: Got endpoints: latency-svc-thf7d [687.766661ms] May 26 00:45:30.698: INFO: Created: latency-svc-zfkh2 May 26 00:45:30.801: INFO: Got endpoints: latency-svc-zfkh2 [844.459308ms] May 26 00:45:30.805: INFO: Created: latency-svc-98rtx May 26 00:45:30.814: INFO: Got endpoints: latency-svc-98rtx [857.409865ms] May 26 00:45:30.836: INFO: Created: latency-svc-5hs9q May 26 00:45:30.872: INFO: Got endpoints: latency-svc-5hs9q [915.404803ms] May 26 00:45:30.961: INFO: Created: latency-svc-r82rw May 26 00:45:30.971: INFO: Got endpoints: latency-svc-r82rw [1.014597647s] May 26 00:45:31.011: INFO: Created: latency-svc-rprmb May 26 00:45:31.046: INFO: Got endpoints: latency-svc-rprmb [1.090183926s] May 26 00:45:31.112: INFO: Created: latency-svc-z68vg May 26 00:45:31.136: INFO: Got endpoints: latency-svc-z68vg [959.0749ms] May 26 00:45:31.178: INFO: Created: latency-svc-rtqkm May 26 00:45:31.260: INFO: Got endpoints: latency-svc-rtqkm [1.059072215s] May 26 00:45:31.292: INFO: Created: latency-svc-l6gxl May 26 00:45:31.328: INFO: Got endpoints: latency-svc-l6gxl [1.081290931s] May 26 00:45:31.398: INFO: Created: latency-svc-mglmw May 26 00:45:31.412: INFO: Got endpoints: latency-svc-mglmw [1.085089624s] May 26 00:45:31.449: INFO: Created: latency-svc-76mht May 26 00:45:31.466: INFO: Got endpoints: latency-svc-76mht [1.116103984s] May 26 00:45:31.542: INFO: Created: latency-svc-xvrz9 May 26 00:45:31.556: INFO: Got endpoints: latency-svc-xvrz9 [1.116044958s] May 26 00:45:31.586: INFO: Created: latency-svc-99mxf May 26 00:45:31.604: INFO: Got endpoints: latency-svc-99mxf [1.140729444s] May 26 00:45:31.634: INFO: Created: latency-svc-nqtxv May 26 00:45:31.679: INFO: Got endpoints: latency-svc-nqtxv [1.186002722s] May 26 00:45:31.700: INFO: Created: latency-svc-d4v7z May 26 00:45:31.715: INFO: Got endpoints: latency-svc-d4v7z [1.101749164s] May 26 00:45:31.736: INFO: Created: latency-svc-l22cz May 26 
00:45:31.746: INFO: Got endpoints: latency-svc-l22cz [1.101504231s] May 26 00:45:31.779: INFO: Created: latency-svc-mmcpv May 26 00:45:31.881: INFO: Got endpoints: latency-svc-mmcpv [1.080094053s] May 26 00:45:31.882: INFO: Created: latency-svc-ks4t9 May 26 00:45:31.911: INFO: Got endpoints: latency-svc-ks4t9 [1.096719046s] May 26 00:45:31.934: INFO: Created: latency-svc-6q9cz May 26 00:45:32.002: INFO: Got endpoints: latency-svc-6q9cz [1.130452901s] May 26 00:45:32.022: INFO: Created: latency-svc-4rcdv May 26 00:45:32.032: INFO: Got endpoints: latency-svc-4rcdv [1.0604942s] May 26 00:45:32.054: INFO: Created: latency-svc-cf5rl May 26 00:45:32.068: INFO: Got endpoints: latency-svc-cf5rl [1.021006034s] May 26 00:45:32.184: INFO: Created: latency-svc-r4jsw May 26 00:45:32.193: INFO: Got endpoints: latency-svc-r4jsw [1.056977313s] May 26 00:45:32.210: INFO: Created: latency-svc-htv4z May 26 00:45:32.246: INFO: Got endpoints: latency-svc-htv4z [986.080404ms] May 26 00:45:32.345: INFO: Created: latency-svc-xzfc2 May 26 00:45:32.350: INFO: Got endpoints: latency-svc-xzfc2 [1.021722471s] May 26 00:45:32.378: INFO: Created: latency-svc-mclzr May 26 00:45:32.391: INFO: Got endpoints: latency-svc-mclzr [979.456184ms] May 26 00:45:32.408: INFO: Created: latency-svc-fdvc8 May 26 00:45:32.422: INFO: Got endpoints: latency-svc-fdvc8 [955.850924ms] May 26 00:45:32.438: INFO: Created: latency-svc-x8p9b May 26 00:45:32.500: INFO: Got endpoints: latency-svc-x8p9b [944.090439ms] May 26 00:45:32.516: INFO: Created: latency-svc-wzjfh May 26 00:45:32.533: INFO: Got endpoints: latency-svc-wzjfh [928.734298ms] May 26 00:45:32.564: INFO: Created: latency-svc-b8s9x May 26 00:45:32.578: INFO: Got endpoints: latency-svc-b8s9x [898.76426ms] May 26 00:45:32.643: INFO: Created: latency-svc-hx9ks May 26 00:45:32.672: INFO: Created: latency-svc-6zmkz May 26 00:45:32.673: INFO: Got endpoints: latency-svc-hx9ks [957.718718ms] May 26 00:45:32.726: INFO: Got endpoints: latency-svc-6zmkz [980.256179ms] May 26 00:45:32.822: INFO: Created: latency-svc-zhchp May 26 00:45:32.847: INFO: Got endpoints: latency-svc-zhchp [966.117095ms] May 26 00:45:32.900: INFO: Created: latency-svc-pqlzt May 26 00:45:32.955: INFO: Got endpoints: latency-svc-pqlzt [1.044766079s] May 26 00:45:33.002: INFO: Created: latency-svc-5zpmc May 26 00:45:33.032: INFO: Got endpoints: latency-svc-5zpmc [1.029934076s] May 26 00:45:33.117: INFO: Created: latency-svc-pbdsl May 26 00:45:33.124: INFO: Got endpoints: latency-svc-pbdsl [1.092199765s] May 26 00:45:33.188: INFO: Created: latency-svc-t6s5j May 26 00:45:33.209: INFO: Got endpoints: latency-svc-t6s5j [1.141786229s] May 26 00:45:33.308: INFO: Created: latency-svc-2v892 May 26 00:45:33.329: INFO: Got endpoints: latency-svc-2v892 [1.136065858s] May 26 00:45:33.368: INFO: Created: latency-svc-vdx76 May 26 00:45:33.434: INFO: Got endpoints: latency-svc-vdx76 [1.187428902s] May 26 00:45:33.436: INFO: Created: latency-svc-gbjvl May 26 00:45:33.458: INFO: Got endpoints: latency-svc-gbjvl [1.108080077s] May 26 00:45:33.488: INFO: Created: latency-svc-x749z May 26 00:45:33.506: INFO: Got endpoints: latency-svc-x749z [1.11461641s] May 26 00:45:33.531: INFO: Created: latency-svc-t29j5 May 26 00:45:33.589: INFO: Got endpoints: latency-svc-t29j5 [1.167052969s] May 26 00:45:33.592: INFO: Created: latency-svc-6kcmb May 26 00:45:33.599: INFO: Got endpoints: latency-svc-6kcmb [1.099440932s] May 26 00:45:33.619: INFO: Created: latency-svc-w89c8 May 26 00:45:33.644: INFO: Got endpoints: latency-svc-w89c8 [1.110848756s] May 26 
00:45:33.674: INFO: Created: latency-svc-z2bmh May 26 00:45:33.684: INFO: Got endpoints: latency-svc-z2bmh [1.105977721s] May 26 00:45:33.739: INFO: Created: latency-svc-hfvqf May 26 00:45:33.743: INFO: Got endpoints: latency-svc-hfvqf [1.069913521s] May 26 00:45:33.764: INFO: Created: latency-svc-tc87c May 26 00:45:33.799: INFO: Got endpoints: latency-svc-tc87c [1.073611593s] May 26 00:45:33.907: INFO: Created: latency-svc-vdbg2 May 26 00:45:33.938: INFO: Got endpoints: latency-svc-vdbg2 [1.090516237s] May 26 00:45:33.961: INFO: Created: latency-svc-sqpc8 May 26 00:45:33.986: INFO: Got endpoints: latency-svc-sqpc8 [1.03027361s] May 26 00:45:34.153: INFO: Created: latency-svc-9bhj8 May 26 00:45:34.417: INFO: Got endpoints: latency-svc-9bhj8 [1.384439132s] May 26 00:45:34.448: INFO: Created: latency-svc-dw79c May 26 00:45:34.454: INFO: Got endpoints: latency-svc-dw79c [1.330126086s] May 26 00:45:34.508: INFO: Created: latency-svc-fqwm4 May 26 00:45:34.551: INFO: Got endpoints: latency-svc-fqwm4 [1.341142024s] May 26 00:45:34.833: INFO: Created: latency-svc-lfh67 May 26 00:45:34.985: INFO: Got endpoints: latency-svc-lfh67 [1.655863557s] May 26 00:45:35.024: INFO: Created: latency-svc-fqs6j May 26 00:45:35.068: INFO: Got endpoints: latency-svc-fqs6j [1.634355084s] May 26 00:45:35.069: INFO: Created: latency-svc-5jtg5 May 26 00:45:35.146: INFO: Got endpoints: latency-svc-5jtg5 [1.688272355s] May 26 00:45:35.174: INFO: Created: latency-svc-8fgfg May 26 00:45:35.188: INFO: Got endpoints: latency-svc-8fgfg [1.681810759s] May 26 00:45:35.204: INFO: Created: latency-svc-jlx4k May 26 00:45:35.240: INFO: Got endpoints: latency-svc-jlx4k [1.650949869s] May 26 00:45:35.320: INFO: Created: latency-svc-qx7bv May 26 00:45:35.332: INFO: Got endpoints: latency-svc-qx7bv [1.732948059s] May 26 00:45:35.354: INFO: Created: latency-svc-n78cf May 26 00:45:35.369: INFO: Got endpoints: latency-svc-n78cf [1.725162373s] May 26 00:45:35.390: INFO: Created: latency-svc-b47d2 May 26 00:45:35.399: INFO: Got endpoints: latency-svc-b47d2 [1.714557214s] May 26 00:45:35.470: INFO: Created: latency-svc-r4gcr May 26 00:45:35.499: INFO: Got endpoints: latency-svc-r4gcr [1.755665166s] May 26 00:45:35.499: INFO: Created: latency-svc-xp27s May 26 00:45:35.535: INFO: Got endpoints: latency-svc-xp27s [1.734992196s] May 26 00:45:35.619: INFO: Created: latency-svc-mbkbn May 26 00:45:35.648: INFO: Got endpoints: latency-svc-mbkbn [1.709876745s] May 26 00:45:35.648: INFO: Created: latency-svc-gpwkr May 26 00:45:35.658: INFO: Got endpoints: latency-svc-gpwkr [1.672435847s] May 26 00:45:35.716: INFO: Created: latency-svc-d5vjx May 26 00:45:35.775: INFO: Got endpoints: latency-svc-d5vjx [1.357757312s] May 26 00:45:35.777: INFO: Created: latency-svc-h6q9t May 26 00:45:35.798: INFO: Got endpoints: latency-svc-h6q9t [1.343790476s] May 26 00:45:35.858: INFO: Created: latency-svc-j9rk5 May 26 00:45:35.973: INFO: Got endpoints: latency-svc-j9rk5 [1.422174778s] May 26 00:45:35.975: INFO: Created: latency-svc-8vf5p May 26 00:45:35.983: INFO: Got endpoints: latency-svc-8vf5p [998.306798ms] May 26 00:45:36.008: INFO: Created: latency-svc-m5vsh May 26 00:45:36.050: INFO: Got endpoints: latency-svc-m5vsh [981.893399ms] May 26 00:45:36.152: INFO: Created: latency-svc-mbmnz May 26 00:45:36.206: INFO: Got endpoints: latency-svc-mbmnz [1.059808214s] May 26 00:45:36.207: INFO: Created: latency-svc-qm6fd May 26 00:45:36.230: INFO: Got endpoints: latency-svc-qm6fd [1.042318045s] May 26 00:45:36.296: INFO: Created: latency-svc-td5sh May 26 00:45:36.321: 
INFO: Got endpoints: latency-svc-td5sh [1.080372684s] May 26 00:45:36.356: INFO: Created: latency-svc-548q2 May 26 00:45:36.369: INFO: Got endpoints: latency-svc-548q2 [1.036886567s] May 26 00:45:36.447: INFO: Created: latency-svc-w6xdr May 26 00:45:36.453: INFO: Got endpoints: latency-svc-w6xdr [1.083801992s] May 26 00:45:36.470: INFO: Created: latency-svc-4z7gf May 26 00:45:36.483: INFO: Got endpoints: latency-svc-4z7gf [1.08465689s] May 26 00:45:36.506: INFO: Created: latency-svc-sdndt May 26 00:45:36.520: INFO: Got endpoints: latency-svc-sdndt [1.020658792s] May 26 00:45:36.537: INFO: Created: latency-svc-8w2jq May 26 00:45:36.613: INFO: Got endpoints: latency-svc-8w2jq [1.078649013s] May 26 00:45:36.615: INFO: Created: latency-svc-9cjxh May 26 00:45:36.620: INFO: Got endpoints: latency-svc-9cjxh [972.340667ms] May 26 00:45:36.644: INFO: Created: latency-svc-g6mk9 May 26 00:45:36.668: INFO: Got endpoints: latency-svc-g6mk9 [1.009823339s] May 26 00:45:36.692: INFO: Created: latency-svc-ctmf2 May 26 00:45:36.705: INFO: Got endpoints: latency-svc-ctmf2 [930.631905ms] May 26 00:45:36.782: INFO: Created: latency-svc-4knpw May 26 00:45:36.801: INFO: Got endpoints: latency-svc-4knpw [1.003541362s] May 26 00:45:36.833: INFO: Created: latency-svc-6g2fj May 26 00:45:36.850: INFO: Got endpoints: latency-svc-6g2fj [876.908313ms] May 26 00:45:36.925: INFO: Created: latency-svc-nql8n May 26 00:45:36.940: INFO: Got endpoints: latency-svc-nql8n [956.376637ms] May 26 00:45:36.968: INFO: Created: latency-svc-s7pq7 May 26 00:45:36.988: INFO: Got endpoints: latency-svc-s7pq7 [938.537619ms] May 26 00:45:37.022: INFO: Created: latency-svc-gslnl May 26 00:45:37.080: INFO: Got endpoints: latency-svc-gslnl [874.198628ms] May 26 00:45:37.131: INFO: Created: latency-svc-xfjwv May 26 00:45:37.155: INFO: Got endpoints: latency-svc-xfjwv [924.33335ms] May 26 00:45:37.178: INFO: Created: latency-svc-wcs95 May 26 00:45:37.248: INFO: Got endpoints: latency-svc-wcs95 [927.166053ms] May 26 00:45:37.251: INFO: Created: latency-svc-vdb8s May 26 00:45:37.277: INFO: Got endpoints: latency-svc-vdb8s [907.94939ms] May 26 00:45:37.298: INFO: Created: latency-svc-z6qn2 May 26 00:45:37.404: INFO: Got endpoints: latency-svc-z6qn2 [950.752376ms] May 26 00:45:37.406: INFO: Created: latency-svc-brf45 May 26 00:45:37.424: INFO: Got endpoints: latency-svc-brf45 [940.801823ms] May 26 00:45:37.448: INFO: Created: latency-svc-6tlrs May 26 00:45:37.459: INFO: Got endpoints: latency-svc-6tlrs [938.950683ms] May 26 00:45:37.477: INFO: Created: latency-svc-xtclj May 26 00:45:37.489: INFO: Got endpoints: latency-svc-xtclj [875.558455ms] May 26 00:45:37.547: INFO: Created: latency-svc-dcc2l May 26 00:45:37.551: INFO: Got endpoints: latency-svc-dcc2l [931.141742ms] May 26 00:45:37.574: INFO: Created: latency-svc-h5kpc May 26 00:45:37.598: INFO: Got endpoints: latency-svc-h5kpc [929.87058ms] May 26 00:45:37.622: INFO: Created: latency-svc-qds6c May 26 00:45:37.634: INFO: Got endpoints: latency-svc-qds6c [928.223613ms] May 26 00:45:37.698: INFO: Created: latency-svc-gp8d5 May 26 00:45:37.718: INFO: Created: latency-svc-xx58b May 26 00:45:37.718: INFO: Got endpoints: latency-svc-gp8d5 [916.58905ms] May 26 00:45:37.742: INFO: Got endpoints: latency-svc-xx58b [892.149166ms] May 26 00:45:37.766: INFO: Created: latency-svc-sl7x8 May 26 00:45:37.779: INFO: Got endpoints: latency-svc-sl7x8 [839.068908ms] May 26 00:45:37.854: INFO: Created: latency-svc-tvwr5 May 26 00:45:37.886: INFO: Created: latency-svc-vrdpk May 26 00:45:37.886: INFO: Got endpoints: 
latency-svc-tvwr5 [897.894215ms] May 26 00:45:37.931: INFO: Got endpoints: latency-svc-vrdpk [850.15226ms] May 26 00:45:37.996: INFO: Created: latency-svc-2mtft May 26 00:45:38.008: INFO: Got endpoints: latency-svc-2mtft [853.444746ms] May 26 00:45:38.090: INFO: Created: latency-svc-4c8g6 May 26 00:45:38.177: INFO: Got endpoints: latency-svc-4c8g6 [929.059461ms] May 26 00:45:38.178: INFO: Created: latency-svc-mm6fj May 26 00:45:38.190: INFO: Got endpoints: latency-svc-mm6fj [912.450073ms] May 26 00:45:38.217: INFO: Created: latency-svc-n66l8 May 26 00:45:38.234: INFO: Got endpoints: latency-svc-n66l8 [830.492834ms] May 26 00:45:38.264: INFO: Created: latency-svc-gdw2l May 26 00:45:38.332: INFO: Got endpoints: latency-svc-gdw2l [907.392291ms] May 26 00:45:38.335: INFO: Created: latency-svc-rb7wm May 26 00:45:38.347: INFO: Got endpoints: latency-svc-rb7wm [888.231372ms] May 26 00:45:38.383: INFO: Created: latency-svc-hwmvj May 26 00:45:38.414: INFO: Got endpoints: latency-svc-hwmvj [924.886193ms] May 26 00:45:38.482: INFO: Created: latency-svc-z874k May 26 00:45:38.510: INFO: Got endpoints: latency-svc-z874k [958.313227ms] May 26 00:45:38.510: INFO: Created: latency-svc-t7xcz May 26 00:45:38.534: INFO: Got endpoints: latency-svc-t7xcz [935.59011ms] May 26 00:45:38.558: INFO: Created: latency-svc-qthdq May 26 00:45:38.571: INFO: Got endpoints: latency-svc-qthdq [936.833145ms] May 26 00:45:38.632: INFO: Created: latency-svc-wm9vx May 26 00:45:38.636: INFO: Got endpoints: latency-svc-wm9vx [918.116015ms] May 26 00:45:38.696: INFO: Created: latency-svc-pg5pc May 26 00:45:38.710: INFO: Got endpoints: latency-svc-pg5pc [967.610127ms] May 26 00:45:38.726: INFO: Created: latency-svc-vkbwz May 26 00:45:38.775: INFO: Got endpoints: latency-svc-vkbwz [996.077903ms] May 26 00:45:38.792: INFO: Created: latency-svc-ts5g8 May 26 00:45:38.816: INFO: Got endpoints: latency-svc-ts5g8 [929.775644ms] May 26 00:45:38.846: INFO: Created: latency-svc-r7b6b May 26 00:45:38.943: INFO: Got endpoints: latency-svc-r7b6b [1.012340166s] May 26 00:45:38.954: INFO: Created: latency-svc-q8gfq May 26 00:45:38.962: INFO: Got endpoints: latency-svc-q8gfq [954.158322ms] May 26 00:45:38.978: INFO: Created: latency-svc-tssz5 May 26 00:45:39.001: INFO: Got endpoints: latency-svc-tssz5 [824.514049ms] May 26 00:45:39.003: INFO: Created: latency-svc-tmf66 May 26 00:45:39.037: INFO: Got endpoints: latency-svc-tmf66 [847.499187ms] May 26 00:45:39.111: INFO: Created: latency-svc-ln7sv May 26 00:45:39.125: INFO: Got endpoints: latency-svc-ln7sv [891.00452ms] May 26 00:45:39.182: INFO: Created: latency-svc-7httl May 26 00:45:39.254: INFO: Got endpoints: latency-svc-7httl [922.101828ms] May 26 00:45:39.257: INFO: Created: latency-svc-8x76w May 26 00:45:39.267: INFO: Got endpoints: latency-svc-8x76w [920.033322ms] May 26 00:45:39.290: INFO: Created: latency-svc-l7bwc May 26 00:45:39.332: INFO: Got endpoints: latency-svc-l7bwc [918.594387ms] May 26 00:45:39.410: INFO: Created: latency-svc-76hrg May 26 00:45:39.415: INFO: Got endpoints: latency-svc-76hrg [904.883252ms] May 26 00:45:39.458: INFO: Created: latency-svc-89hqr May 26 00:45:39.472: INFO: Got endpoints: latency-svc-89hqr [938.359117ms] May 26 00:45:39.487: INFO: Created: latency-svc-hr2sc May 26 00:45:39.503: INFO: Got endpoints: latency-svc-hr2sc [931.890015ms] May 26 00:45:39.549: INFO: Created: latency-svc-j6r6f May 26 00:45:39.566: INFO: Got endpoints: latency-svc-j6r6f [929.822157ms] May 26 00:45:39.590: INFO: Created: latency-svc-2dh89 May 26 00:45:39.614: INFO: Got endpoints: 
latency-svc-2dh89 [903.875564ms] May 26 00:45:39.643: INFO: Created: latency-svc-xpks4 May 26 00:45:39.715: INFO: Got endpoints: latency-svc-xpks4 [940.179825ms] May 26 00:45:39.717: INFO: Created: latency-svc-rknwn May 26 00:45:39.726: INFO: Got endpoints: latency-svc-rknwn [909.890802ms] May 26 00:45:39.746: INFO: Created: latency-svc-q8zj4 May 26 00:45:39.770: INFO: Got endpoints: latency-svc-q8zj4 [826.596223ms] May 26 00:45:39.794: INFO: Created: latency-svc-pnbtm May 26 00:45:39.804: INFO: Got endpoints: latency-svc-pnbtm [841.811492ms] May 26 00:45:39.852: INFO: Created: latency-svc-2dz29 May 26 00:45:39.858: INFO: Got endpoints: latency-svc-2dz29 [856.724884ms] May 26 00:45:39.914: INFO: Created: latency-svc-z9p6q May 26 00:45:39.925: INFO: Got endpoints: latency-svc-z9p6q [887.931834ms] May 26 00:45:40.003: INFO: Created: latency-svc-cqclb May 26 00:45:40.040: INFO: Got endpoints: latency-svc-cqclb [914.479025ms] May 26 00:45:40.042: INFO: Created: latency-svc-nzxx7 May 26 00:45:40.064: INFO: Got endpoints: latency-svc-nzxx7 [809.674232ms] May 26 00:45:40.093: INFO: Created: latency-svc-l6j7h May 26 00:45:40.140: INFO: Got endpoints: latency-svc-l6j7h [872.848009ms] May 26 00:45:40.184: INFO: Created: latency-svc-6d8jc May 26 00:45:40.219: INFO: Got endpoints: latency-svc-6d8jc [886.91129ms] May 26 00:45:40.272: INFO: Created: latency-svc-wfrjg May 26 00:45:40.297: INFO: Got endpoints: latency-svc-wfrjg [882.757684ms] May 26 00:45:40.327: INFO: Created: latency-svc-z7n7r May 26 00:45:40.342: INFO: Got endpoints: latency-svc-z7n7r [869.672654ms] May 26 00:45:40.363: INFO: Created: latency-svc-qr69g May 26 00:45:40.428: INFO: Got endpoints: latency-svc-qr69g [925.076913ms] May 26 00:45:40.454: INFO: Created: latency-svc-cc9j6 May 26 00:45:40.467: INFO: Got endpoints: latency-svc-cc9j6 [901.136496ms] May 26 00:45:40.489: INFO: Created: latency-svc-qvkmc May 26 00:45:40.498: INFO: Got endpoints: latency-svc-qvkmc [884.005152ms] May 26 00:45:40.519: INFO: Created: latency-svc-n9pr5 May 26 00:45:40.589: INFO: Got endpoints: latency-svc-n9pr5 [874.142934ms] May 26 00:45:40.592: INFO: Created: latency-svc-bj7xc May 26 00:45:40.602: INFO: Got endpoints: latency-svc-bj7xc [876.085432ms] May 26 00:45:40.621: INFO: Created: latency-svc-c4btf May 26 00:45:40.633: INFO: Got endpoints: latency-svc-c4btf [863.420241ms] May 26 00:45:40.651: INFO: Created: latency-svc-2spmn May 26 00:45:40.663: INFO: Got endpoints: latency-svc-2spmn [859.196438ms] May 26 00:45:40.734: INFO: Created: latency-svc-6rzwm May 26 00:45:40.753: INFO: Got endpoints: latency-svc-6rzwm [895.233033ms] May 26 00:45:40.783: INFO: Created: latency-svc-jv5jq May 26 00:45:40.797: INFO: Got endpoints: latency-svc-jv5jq [871.064421ms] May 26 00:45:40.812: INFO: Created: latency-svc-kjbzw May 26 00:45:40.827: INFO: Got endpoints: latency-svc-kjbzw [786.801666ms] May 26 00:45:40.901: INFO: Created: latency-svc-r89zh May 26 00:45:40.904: INFO: Got endpoints: latency-svc-r89zh [840.744124ms] May 26 00:45:40.934: INFO: Created: latency-svc-mjc2b May 26 00:45:40.947: INFO: Got endpoints: latency-svc-mjc2b [807.197077ms] May 26 00:45:40.970: INFO: Created: latency-svc-8j494 May 26 00:45:40.983: INFO: Got endpoints: latency-svc-8j494 [763.802708ms] May 26 00:45:41.044: INFO: Created: latency-svc-bkqr4 May 26 00:45:41.119: INFO: Got endpoints: latency-svc-bkqr4 [821.78584ms] May 26 00:45:41.120: INFO: Created: latency-svc-pztwp May 26 00:45:41.137: INFO: Got endpoints: latency-svc-pztwp [795.414267ms] May 26 00:45:41.215: INFO: Created: 
latency-svc-cppgt May 26 00:45:41.230: INFO: Got endpoints: latency-svc-cppgt [802.530996ms] May 26 00:45:41.257: INFO: Created: latency-svc-z9hkr May 26 00:45:41.273: INFO: Got endpoints: latency-svc-z9hkr [805.479902ms] May 26 00:45:41.366: INFO: Created: latency-svc-jw9mg May 26 00:45:41.381: INFO: Got endpoints: latency-svc-jw9mg [883.151499ms] May 26 00:45:41.401: INFO: Created: latency-svc-4tv26 May 26 00:45:41.418: INFO: Got endpoints: latency-svc-4tv26 [828.094638ms] May 26 00:45:41.437: INFO: Created: latency-svc-m7fs7 May 26 00:45:41.482: INFO: Got endpoints: latency-svc-m7fs7 [879.421209ms] May 26 00:45:41.527: INFO: Created: latency-svc-tc2c4 May 26 00:45:41.538: INFO: Got endpoints: latency-svc-tc2c4 [904.349774ms] May 26 00:45:41.575: INFO: Created: latency-svc-w4f4z May 26 00:45:41.626: INFO: Got endpoints: latency-svc-w4f4z [962.105551ms] May 26 00:45:41.678: INFO: Created: latency-svc-mhj8l May 26 00:45:41.689: INFO: Got endpoints: latency-svc-mhj8l [935.3564ms] May 26 00:45:41.784: INFO: Created: latency-svc-m96jp May 26 00:45:41.791: INFO: Got endpoints: latency-svc-m96jp [994.186138ms] May 26 00:45:41.815: INFO: Created: latency-svc-vjfxk May 26 00:45:41.839: INFO: Got endpoints: latency-svc-vjfxk [1.012124094s] May 26 00:45:41.875: INFO: Created: latency-svc-d5lj2 May 26 00:45:41.924: INFO: Got endpoints: latency-svc-d5lj2 [1.019682519s] May 26 00:45:41.959: INFO: Created: latency-svc-p48dx May 26 00:45:41.987: INFO: Got endpoints: latency-svc-p48dx [1.040189622s] May 26 00:45:42.075: INFO: Created: latency-svc-btcm5 May 26 00:45:42.126: INFO: Got endpoints: latency-svc-btcm5 [1.14228855s] May 26 00:45:42.169: INFO: Created: latency-svc-njnkx May 26 00:45:42.218: INFO: Got endpoints: latency-svc-njnkx [1.098611857s] May 26 00:45:42.236: INFO: Created: latency-svc-dphgx May 26 00:45:42.258: INFO: Got endpoints: latency-svc-dphgx [1.120449016s] May 26 00:45:42.276: INFO: Created: latency-svc-6rnn6 May 26 00:45:42.288: INFO: Got endpoints: latency-svc-6rnn6 [1.058003497s] May 26 00:45:42.313: INFO: Created: latency-svc-wvlxz May 26 00:45:42.368: INFO: Got endpoints: latency-svc-wvlxz [1.09468768s] May 26 00:45:42.391: INFO: Created: latency-svc-njjsz May 26 00:45:42.415: INFO: Got endpoints: latency-svc-njjsz [1.034230467s] May 26 00:45:42.433: INFO: Created: latency-svc-qncp8 May 26 00:45:42.445: INFO: Got endpoints: latency-svc-qncp8 [1.027276024s] May 26 00:45:42.530: INFO: Created: latency-svc-nrdq5 May 26 00:45:42.559: INFO: Created: latency-svc-z5lnp May 26 00:45:42.559: INFO: Got endpoints: latency-svc-nrdq5 [1.076805222s] May 26 00:45:42.571: INFO: Got endpoints: latency-svc-z5lnp [1.033470511s] May 26 00:45:42.619: INFO: Created: latency-svc-h955t May 26 00:45:42.703: INFO: Got endpoints: latency-svc-h955t [1.077268012s] May 26 00:45:42.704: INFO: Created: latency-svc-c7bms May 26 00:45:42.710: INFO: Got endpoints: latency-svc-c7bms [1.020782414s] May 26 00:45:42.757: INFO: Created: latency-svc-2z4ck May 26 00:45:42.771: INFO: Got endpoints: latency-svc-2z4ck [979.745626ms] May 26 00:45:42.792: INFO: Created: latency-svc-6kcfl May 26 00:45:42.865: INFO: Got endpoints: latency-svc-6kcfl [1.026104754s] May 26 00:45:42.884: INFO: Created: latency-svc-2llqj May 26 00:45:42.897: INFO: Got endpoints: latency-svc-2llqj [972.617359ms] May 26 00:45:42.919: INFO: Created: latency-svc-8f5c8 May 26 00:45:42.933: INFO: Got endpoints: latency-svc-8f5c8 [945.916466ms] May 26 00:45:42.961: INFO: Created: latency-svc-8dmwb May 26 00:45:43.014: INFO: Got endpoints: 
latency-svc-8dmwb [888.454981ms] May 26 00:45:43.039: INFO: Created: latency-svc-tkgrw May 26 00:45:43.067: INFO: Got endpoints: latency-svc-tkgrw [848.668741ms] May 26 00:45:43.110: INFO: Created: latency-svc-k8knb May 26 00:45:43.176: INFO: Got endpoints: latency-svc-k8knb [918.193904ms] May 26 00:45:43.178: INFO: Created: latency-svc-wh696 May 26 00:45:43.225: INFO: Got endpoints: latency-svc-wh696 [936.66257ms] May 26 00:45:43.255: INFO: Created: latency-svc-dbj22 May 26 00:45:43.332: INFO: Got endpoints: latency-svc-dbj22 [964.085577ms] May 26 00:45:43.345: INFO: Created: latency-svc-vmpt6 May 26 00:45:43.357: INFO: Got endpoints: latency-svc-vmpt6 [942.156502ms] May 26 00:45:43.399: INFO: Created: latency-svc-4gxjc May 26 00:45:43.412: INFO: Got endpoints: latency-svc-4gxjc [966.528168ms] May 26 00:45:43.428: INFO: Created: latency-svc-8q5n7 May 26 00:45:43.494: INFO: Got endpoints: latency-svc-8q5n7 [935.050854ms] May 26 00:45:43.512: INFO: Created: latency-svc-znl6n May 26 00:45:43.537: INFO: Got endpoints: latency-svc-znl6n [965.616635ms] May 26 00:45:43.561: INFO: Created: latency-svc-flswn May 26 00:45:43.575: INFO: Got endpoints: latency-svc-flswn [872.035687ms] May 26 00:45:43.590: INFO: Created: latency-svc-2gpg9 May 26 00:45:43.631: INFO: Got endpoints: latency-svc-2gpg9 [921.59091ms] May 26 00:45:43.631: INFO: Latencies: [220.440724ms 244.317693ms 290.081419ms 370.158724ms 393.606576ms 483.535974ms 506.987517ms 536.907424ms 657.18571ms 687.766661ms 763.802708ms 786.801666ms 795.414267ms 802.530996ms 805.479902ms 807.197077ms 809.674232ms 821.78584ms 824.514049ms 826.596223ms 828.094638ms 830.492834ms 839.068908ms 840.744124ms 841.811492ms 844.459308ms 847.499187ms 848.668741ms 850.15226ms 853.444746ms 856.724884ms 857.409865ms 859.196438ms 863.420241ms 869.672654ms 871.064421ms 872.035687ms 872.848009ms 874.142934ms 874.198628ms 875.558455ms 876.085432ms 876.908313ms 879.421209ms 882.757684ms 883.151499ms 884.005152ms 886.91129ms 887.931834ms 888.231372ms 888.454981ms 891.00452ms 892.149166ms 895.233033ms 897.894215ms 898.76426ms 901.136496ms 903.875564ms 904.349774ms 904.883252ms 907.392291ms 907.94939ms 909.890802ms 912.450073ms 914.479025ms 915.404803ms 916.58905ms 918.116015ms 918.193904ms 918.594387ms 920.033322ms 921.59091ms 922.101828ms 924.33335ms 924.886193ms 925.076913ms 927.166053ms 928.223613ms 928.734298ms 929.059461ms 929.775644ms 929.822157ms 929.87058ms 930.631905ms 931.141742ms 931.890015ms 935.050854ms 935.3564ms 935.59011ms 936.66257ms 936.833145ms 938.359117ms 938.537619ms 938.950683ms 940.179825ms 940.801823ms 942.156502ms 944.090439ms 945.916466ms 950.752376ms 954.158322ms 955.850924ms 956.376637ms 957.718718ms 958.313227ms 959.0749ms 962.105551ms 964.085577ms 965.616635ms 966.117095ms 966.528168ms 967.610127ms 972.340667ms 972.617359ms 979.456184ms 979.745626ms 980.256179ms 981.893399ms 986.080404ms 994.186138ms 996.077903ms 998.306798ms 1.003541362s 1.009823339s 1.012124094s 1.012340166s 1.014597647s 1.019682519s 1.020658792s 1.020782414s 1.021006034s 1.021722471s 1.026104754s 1.027276024s 1.029934076s 1.03027361s 1.033470511s 1.034230467s 1.036886567s 1.040189622s 1.042318045s 1.044766079s 1.056977313s 1.058003497s 1.059072215s 1.059808214s 1.0604942s 1.069913521s 1.073611593s 1.076805222s 1.077268012s 1.078649013s 1.080094053s 1.080372684s 1.081290931s 1.083801992s 1.08465689s 1.085089624s 1.090183926s 1.090516237s 1.092199765s 1.09468768s 1.096719046s 1.098611857s 1.099440932s 1.101504231s 1.101749164s 1.105977721s 1.108080077s 1.110848756s 
1.11461641s 1.116044958s 1.116103984s 1.120449016s 1.130452901s 1.136065858s 1.140729444s 1.141786229s 1.14228855s 1.167052969s 1.186002722s 1.187428902s 1.330126086s 1.341142024s 1.343790476s 1.357757312s 1.384439132s 1.422174778s 1.634355084s 1.650949869s 1.655863557s 1.672435847s 1.681810759s 1.688272355s 1.709876745s 1.714557214s 1.725162373s 1.732948059s 1.734992196s 1.755665166s]
May 26 00:45:43.632: INFO: 50 %ile: 954.158322ms
May 26 00:45:43.632: INFO: 90 %ile: 1.186002722s
May 26 00:45:43.632: INFO: 99 %ile: 1.734992196s
May 26 00:45:43.632: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 26 00:45:43.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-3155" for this suite.

• [SLOW TEST:17.965 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":288,"completed":216,"skipped":3575,"failed":0}
SSSSSSSSSSSSS
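
Each sample in the flood above is one cycle of the same measurement: create a Service selecting the pre-warmed svc-latency-rc pod, then record how long its Endpoints object takes to be populated; 200 samples feed the percentile summary (here about 954ms at p50, 1.19s at p90, 1.73s at p99). The loop can be approximated from the shell, with the caveat that the real test watches an endpoints informer rather than polling; the service name is illustrative and a running replication controller named svc-latency-rc in the current namespace is assumed:

# Time how long one new Service takes to get endpoints.
t0=$(date +%s%N)
kubectl expose rc svc-latency-rc --name=latency-probe --port=80 >/dev/null
until [ -n "$(kubectl get endpoints latency-probe \
    -o jsonpath='{.subsets[*].addresses[*].ip}' 2>/dev/null)" ]; do
  sleep 0.05
done
t1=$(date +%s%N)
echo "endpoints ready after $(( (t1 - t0) / 1000000 )) ms"
kubectl delete service latency-probe >/dev/null
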
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 26 00:45:43.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
May 26 00:45:44.327: INFO: Pod name wrapped-volume-race-398723aa-f4ae-46d1-b9fc-e788fa17021c: Found 0 pods out of 5
May 26 00:45:49.366: INFO: Pod name wrapped-volume-race-398723aa-f4ae-46d1-b9fc-e788fa17021c: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-398723aa-f4ae-46d1-b9fc-e788fa17021c in namespace emptydir-wrapper-197, will wait for the garbage collector to delete the pods
May 26 00:46:05.599: INFO: Deleting ReplicationController wrapped-volume-race-398723aa-f4ae-46d1-b9fc-e788fa17021c took: 27.437958ms
May 26 00:46:05.799: INFO: Terminating ReplicationController wrapped-volume-race-398723aa-f4ae-46d1-b9fc-e788fa17021c pods took: 200.215183ms
STEP: Creating RC which spawns configmap-volume pods
May 26 00:46:15.307: INFO: Pod name wrapped-volume-race-e825782c-2e25-4a99-9f91-0000c1dd407a: Found 0 pods out of 5
May 26 00:46:20.325: INFO: Pod name wrapped-volume-race-e825782c-2e25-4a99-9f91-0000c1dd407a: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-e825782c-2e25-4a99-9f91-0000c1dd407a in namespace emptydir-wrapper-197, will wait for the garbage collector to delete the pods
May 26 00:46:34.429: INFO: Deleting ReplicationController wrapped-volume-race-e825782c-2e25-4a99-9f91-0000c1dd407a took: 7.504061ms
May 26 00:46:34.529: INFO: Terminating ReplicationController wrapped-volume-race-e825782c-2e25-4a99-9f91-0000c1dd407a pods took: 100.232401ms
STEP: Creating RC which spawns configmap-volume pods
May 26 00:46:45.315: INFO: Pod name wrapped-volume-race-a7e92e9e-ac33-4264-9d94-a347638c4566: Found 0 pods out of 5
May 26 00:46:50.323: INFO: Pod name wrapped-volume-race-a7e92e9e-ac33-4264-9d94-a347638c4566: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-a7e92e9e-ac33-4264-9d94-a347638c4566 in namespace emptydir-wrapper-197, will wait for the garbage collector to delete the pods
May 26 00:47:02.430: INFO: Deleting ReplicationController wrapped-volume-race-a7e92e9e-ac33-4264-9d94-a347638c4566 took: 6.501151ms
May 26 00:47:02.830: INFO: Terminating ReplicationController wrapped-volume-race-a7e92e9e-ac33-4264-9d94-a347638c4566 pods took: 400.263709ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 26 00:47:16.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-197" for this suite.

• [SLOW TEST:92.421 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":288,"completed":217,"skipped":3588,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 26 00:47:16.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 26 00:47:16.120: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3a2e3846-f121-45b3-8462-f19e666528f1" in namespace "projected-5397" to be "Succeeded or Failed"
May 26 00:47:16.177: INFO: Pod "downwardapi-volume-3a2e3846-f121-45b3-8462-f19e666528f1": Phase="Pending", Reason="", readiness=false. Elapsed: 57.63375ms
May 26 00:47:18.181: INFO: Pod "downwardapi-volume-3a2e3846-f121-45b3-8462-f19e666528f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061591286s
May 26 00:47:20.186: INFO: Pod "downwardapi-volume-3a2e3846-f121-45b3-8462-f19e666528f1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.06624505s STEP: Saw pod success May 26 00:47:20.186: INFO: Pod "downwardapi-volume-3a2e3846-f121-45b3-8462-f19e666528f1" satisfied condition "Succeeded or Failed" May 26 00:47:20.189: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-3a2e3846-f121-45b3-8462-f19e666528f1 container client-container: STEP: delete the pod May 26 00:47:20.329: INFO: Waiting for pod downwardapi-volume-3a2e3846-f121-45b3-8462-f19e666528f1 to disappear May 26 00:47:20.349: INFO: Pod downwardapi-volume-3a2e3846-f121-45b3-8462-f19e666528f1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:47:20.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5397" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":218,"skipped":3607,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:47:20.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-cab8a46c-297c-4806-b604-ac1ab797bee6 May 26 00:47:20.524: INFO: Pod name my-hostname-basic-cab8a46c-297c-4806-b604-ac1ab797bee6: Found 0 pods out of 1 May 26 00:47:25.578: INFO: Pod name my-hostname-basic-cab8a46c-297c-4806-b604-ac1ab797bee6: Found 1 pods out of 1 May 26 00:47:25.578: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-cab8a46c-297c-4806-b604-ac1ab797bee6" are running May 26 00:47:25.619: INFO: Pod "my-hostname-basic-cab8a46c-297c-4806-b604-ac1ab797bee6-mv49g" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-26 00:47:20 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-26 00:47:24 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-26 00:47:24 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-26 00:47:20 +0000 UTC Reason: Message:}]) May 26 00:47:25.620: INFO: Trying to dial the pod May 26 00:47:30.631: INFO: Controller my-hostname-basic-cab8a46c-297c-4806-b604-ac1ab797bee6: Got expected result from replica 1 [my-hostname-basic-cab8a46c-297c-4806-b604-ac1ab797bee6-mv49g]: "my-hostname-basic-cab8a46c-297c-4806-b604-ac1ab797bee6-mv49g", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:47:30.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3898" for this suite. • [SLOW TEST:10.281 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":219,"skipped":3623,"failed":0} S ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:47:30.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 26 00:47:30.689: INFO: Creating ReplicaSet my-hostname-basic-98a3a242-0b06-422e-9396-a7809fa6f750 May 26 00:47:30.728: INFO: Pod name my-hostname-basic-98a3a242-0b06-422e-9396-a7809fa6f750: Found 0 pods out of 1 May 26 00:47:35.808: INFO: Pod name my-hostname-basic-98a3a242-0b06-422e-9396-a7809fa6f750: Found 1 pods out of 1 May 26 00:47:35.808: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-98a3a242-0b06-422e-9396-a7809fa6f750" is running May 26 00:47:35.810: INFO: Pod "my-hostname-basic-98a3a242-0b06-422e-9396-a7809fa6f750-g7slc" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-26 00:47:30 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-26 00:47:33 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-26 00:47:33 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-26 00:47:30 +0000 UTC Reason: Message:}]) May 26 00:47:35.811: INFO: Trying to dial the pod May 26 00:47:40.822: INFO: Controller my-hostname-basic-98a3a242-0b06-422e-9396-a7809fa6f750: Got expected result from replica 1 [my-hostname-basic-98a3a242-0b06-422e-9396-a7809fa6f750-g7slc]: "my-hostname-basic-98a3a242-0b06-422e-9396-a7809fa6f750-g7slc", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:47:40.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4979" for this suite. 
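
This ReplicaSet test and the ReplicationController test just before it exercise the same scenario under two controllers: run a single replica of a public serve-hostname image, wait until it is Running and Ready, then dial the replica and expect it to answer with its own pod name. A reduced sketch of the shape being exercised; the names and image tag are illustrative, and the suite's agnhost serve-hostname server listens on 9376:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-hostname-basic
  template:
    metadata:
      labels:
        app: my-hostname-basic
    spec:
      containers:
      - name: serve-hostname
        image: k8s.gcr.io/e2e-test-images/agnhost:2.12
        args: ["serve-hostname"]
        ports:
        - containerPort: 9376
EOF
kubectl wait pod -l app=my-hostname-basic --for=condition=Ready --timeout=60s
# Dial the replica; the response body should be the pod's own name.
POD=$(kubectl get pods -l app=my-hostname-basic -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward "$POD" 9376:9376 >/dev/null &
sleep 2 && curl -s localhost:9376 && kill %1
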
• [SLOW TEST:10.193 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":220,"skipped":3624,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:47:40.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 26 00:47:40.899: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4c12d63f-d23a-432e-ab25-a944befeffe0" in namespace "projected-6929" to be "Succeeded or Failed" May 26 00:47:40.903: INFO: Pod "downwardapi-volume-4c12d63f-d23a-432e-ab25-a944befeffe0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.3117ms May 26 00:47:42.907: INFO: Pod "downwardapi-volume-4c12d63f-d23a-432e-ab25-a944befeffe0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007551774s May 26 00:47:44.911: INFO: Pod "downwardapi-volume-4c12d63f-d23a-432e-ab25-a944befeffe0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011677917s STEP: Saw pod success May 26 00:47:44.911: INFO: Pod "downwardapi-volume-4c12d63f-d23a-432e-ab25-a944befeffe0" satisfied condition "Succeeded or Failed" May 26 00:47:44.914: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-4c12d63f-d23a-432e-ab25-a944befeffe0 container client-container: STEP: delete the pod May 26 00:47:44.964: INFO: Waiting for pod downwardapi-volume-4c12d63f-d23a-432e-ab25-a944befeffe0 to disappear May 26 00:47:45.057: INFO: Pod downwardapi-volume-4c12d63f-d23a-432e-ab25-a944befeffe0 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:47:45.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6929" for this suite. 
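
The projected downwardAPI cases in this run (memory request earlier, cpu request here) share one fixture: a short-lived pod whose projected volume surfaces one of the container's own resource fields as a file, which the container cats before exiting so the framework can compare the logs against the expected value. A sketch of that fixture with illustrative names and values; the divisor of 1m makes the file read 250 for a 250m request:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m
EOF
kubectl logs downwardapi-volume-demo   # prints 250 once the pod has Succeeded
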
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":221,"skipped":3652,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:47:45.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-0380f812-f530-47d5-bc6f-55cf50130541 in namespace container-probe-6086 May 26 00:47:49.230: INFO: Started pod busybox-0380f812-f530-47d5-bc6f-55cf50130541 in namespace container-probe-6086 STEP: checking the pod's current state and verifying that restartCount is present May 26 00:47:49.233: INFO: Initial restart count of pod busybox-0380f812-f530-47d5-bc6f-55cf50130541 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:51:50.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6086" for this suite. 
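
A four-minute observation window dominates this test's 245-second runtime: the container creates /tmp/health up front, the exec probe keeps succeeding, and restartCount must still be 0 when the watch closes. The pod under test looks roughly like this (a sketch; the image, timings, and 600-second sleep are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness
spec:
  containers:
  - name: busybox
    image: busybox
    # Create the health file first, then stay alive; the probe never fails.
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# The assertion the test keeps re-checking over the watch window:
kubectl get pod busybox-liveness \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'   # expect 0
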
• [SLOW TEST:245.099 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":222,"skipped":3685,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:51:50.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 26 00:51:50.283: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7d50d6d6-4fbb-409c-adb4-8d99d4fefe93" in namespace "downward-api-49" to be "Succeeded or Failed" May 26 00:51:50.305: INFO: Pod "downwardapi-volume-7d50d6d6-4fbb-409c-adb4-8d99d4fefe93": Phase="Pending", Reason="", readiness=false. Elapsed: 22.225216ms May 26 00:51:52.335: INFO: Pod "downwardapi-volume-7d50d6d6-4fbb-409c-adb4-8d99d4fefe93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052083198s May 26 00:51:54.339: INFO: Pod "downwardapi-volume-7d50d6d6-4fbb-409c-adb4-8d99d4fefe93": Phase="Running", Reason="", readiness=true. Elapsed: 4.055995361s May 26 00:51:56.343: INFO: Pod "downwardapi-volume-7d50d6d6-4fbb-409c-adb4-8d99d4fefe93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.060204988s STEP: Saw pod success May 26 00:51:56.343: INFO: Pod "downwardapi-volume-7d50d6d6-4fbb-409c-adb4-8d99d4fefe93" satisfied condition "Succeeded or Failed" May 26 00:51:56.345: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-7d50d6d6-4fbb-409c-adb4-8d99d4fefe93 container client-container: STEP: delete the pod May 26 00:51:56.438: INFO: Waiting for pod downwardapi-volume-7d50d6d6-4fbb-409c-adb4-8d99d4fefe93 to disappear May 26 00:51:56.445: INFO: Pod downwardapi-volume-7d50d6d6-4fbb-409c-adb4-8d99d4fefe93 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:51:56.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-49" for this suite. 
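Unlike the projected variant earlier, this test uses a plain downwardAPI volume, and the pod name comes from a fieldRef rather than a resourceFieldRef. A minimal sketch (names are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-example       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name        # exposes the pod's own name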
• [SLOW TEST:6.292 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":288,"completed":223,"skipped":3690,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:51:56.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:52:07.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2385" for this suite. • [SLOW TEST:11.253 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":288,"completed":224,"skipped":3694,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:52:07.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-2feab6cc-73ac-4670-8946-59e1c96aae8e STEP: Creating configMap with name cm-test-opt-upd-10ae8bee-eb2c-4ab9-9026-3f32dd208972 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-2feab6cc-73ac-4670-8946-59e1c96aae8e STEP: Updating configmap cm-test-opt-upd-10ae8bee-eb2c-4ab9-9026-3f32dd208972 STEP: Creating configMap with name cm-test-opt-create-8c7766d3-ba6b-4ddc-9719-36abaea68936 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:52:15.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6498" for this suite. • [SLOW TEST:8.273 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":225,"skipped":3702,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:52:15.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 26 00:52:16.033: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 26 00:52:18.964: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4351 create -f -' May 26 00:52:24.973: INFO: stderr: "" May 26 00:52:24.973: INFO: stdout: 
"e2e-test-crd-publish-openapi-6432-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 26 00:52:24.973: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4351 delete e2e-test-crd-publish-openapi-6432-crds test-cr' May 26 00:52:25.102: INFO: stderr: "" May 26 00:52:25.102: INFO: stdout: "e2e-test-crd-publish-openapi-6432-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 26 00:52:25.102: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4351 apply -f -' May 26 00:52:27.761: INFO: stderr: "" May 26 00:52:27.761: INFO: stdout: "e2e-test-crd-publish-openapi-6432-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 26 00:52:27.761: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4351 delete e2e-test-crd-publish-openapi-6432-crds test-cr' May 26 00:52:27.909: INFO: stderr: "" May 26 00:52:27.909: INFO: stdout: "e2e-test-crd-publish-openapi-6432-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema May 26 00:52:27.909: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6432-crds' May 26 00:52:30.856: INFO: stderr: "" May 26 00:52:30.856: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6432-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:52:33.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4351" for this suite. 
• [SLOW TEST:17.816 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":288,"completed":226,"skipped":3721,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:52:33.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-4706 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 26 00:52:33.904: INFO: Found 0 stateful pods, waiting for 3 May 26 00:52:43.911: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 26 00:52:43.911: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 26 00:52:43.911: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 26 00:52:53.909: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 26 00:52:53.910: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 26 00:52:53.910: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 26 00:52:53.938: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 26 00:53:04.024: INFO: Updating stateful set ss2 May 26 00:53:04.065: INFO: Waiting for Pod statefulset-4706/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 26 00:53:14.074: INFO: Waiting for Pod statefulset-4706/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 26 00:53:25.011: INFO: Found 2 stateful pods, waiting for 3 May 26 00:53:35.016: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 26 00:53:35.016: INFO: Waiting for pod ss2-1 
to enter Running - Ready=true, currently Running - Ready=true May 26 00:53:35.016: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 26 00:53:35.041: INFO: Updating stateful set ss2 May 26 00:53:35.091: INFO: Waiting for Pod statefulset-4706/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 26 00:53:45.119: INFO: Updating stateful set ss2 May 26 00:53:45.211: INFO: Waiting for StatefulSet statefulset-4706/ss2 to complete update May 26 00:53:45.211: INFO: Waiting for Pod statefulset-4706/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 26 00:53:55.220: INFO: Waiting for StatefulSet statefulset-4706/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 26 00:54:05.219: INFO: Deleting all statefulset in ns statefulset-4706 May 26 00:54:05.221: INFO: Scaling statefulset ss2 to 0 May 26 00:54:25.252: INFO: Waiting for statefulset status.replicas updated to 0 May 26 00:54:25.255: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:54:25.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4706" for this suite. • [SLOW TEST:111.488 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":288,"completed":227,"skipped":3751,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:54:25.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-8a859a7c-1352-4537-8e57-499b06a7e6ce STEP: Creating a pod to test consume secrets May 26 00:54:25.384: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-49a50874-150f-4624-a416-9807f5fc9d48" in namespace "projected-2629" to be "Succeeded or Failed" May 26 00:54:25.402: INFO: Pod "pod-projected-secrets-49a50874-150f-4624-a416-9807f5fc9d48": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.838495ms May 26 00:54:27.407: INFO: Pod "pod-projected-secrets-49a50874-150f-4624-a416-9807f5fc9d48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022465783s May 26 00:54:29.411: INFO: Pod "pod-projected-secrets-49a50874-150f-4624-a416-9807f5fc9d48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026828438s STEP: Saw pod success May 26 00:54:29.411: INFO: Pod "pod-projected-secrets-49a50874-150f-4624-a416-9807f5fc9d48" satisfied condition "Succeeded or Failed" May 26 00:54:29.414: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-49a50874-150f-4624-a416-9807f5fc9d48 container projected-secret-volume-test: STEP: delete the pod May 26 00:54:29.460: INFO: Waiting for pod pod-projected-secrets-49a50874-150f-4624-a416-9807f5fc9d48 to disappear May 26 00:54:29.473: INFO: Pod pod-projected-secrets-49a50874-150f-4624-a416-9807f5fc9d48 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:54:29.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2629" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":228,"skipped":3761,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:54:29.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-188c4c8d-12a4-46f7-85eb-4729fcdb53fb STEP: Creating a pod to test consume configMaps May 26 00:54:29.650: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b6b10159-a1de-4009-9546-346da6503b43" in namespace "projected-435" to be "Succeeded or Failed" May 26 00:54:29.653: INFO: Pod "pod-projected-configmaps-b6b10159-a1de-4009-9546-346da6503b43": Phase="Pending", Reason="", readiness=false. Elapsed: 3.164338ms May 26 00:54:31.712: INFO: Pod "pod-projected-configmaps-b6b10159-a1de-4009-9546-346da6503b43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062851024s May 26 00:54:33.717: INFO: Pod "pod-projected-configmaps-b6b10159-a1de-4009-9546-346da6503b43": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.067203619s STEP: Saw pod success May 26 00:54:33.717: INFO: Pod "pod-projected-configmaps-b6b10159-a1de-4009-9546-346da6503b43" satisfied condition "Succeeded or Failed" May 26 00:54:33.720: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-b6b10159-a1de-4009-9546-346da6503b43 container projected-configmap-volume-test: STEP: delete the pod May 26 00:54:33.756: INFO: Waiting for pod pod-projected-configmaps-b6b10159-a1de-4009-9546-346da6503b43 to disappear May 26 00:54:33.783: INFO: Pod pod-projected-configmaps-b6b10159-a1de-4009-9546-346da6503b43 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:54:33.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-435" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":229,"skipped":3778,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:54:33.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 26 00:54:38.434: INFO: Successfully updated pod "labelsupdateeb71a991-291a-47fe-bdf5-17da073a87d7" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:54:42.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7818" for this suite. 
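Label values reach the container through a downwardAPI file that the kubelet rewrites when pod metadata changes, which is what the "Successfully updated pod" step verifies. A sketch of such a pod (hypothetical names):

apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-example              # hypothetical name
  labels:
    mylabel: v1                           # changing this rewrites the mounted file
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels

# e.g. kubectl label pod labelsupdate-example mylabel=v2 --overwrite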
• [SLOW TEST:8.663 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":230,"skipped":3787,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:54:42.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-6618 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-6618 STEP: creating replication controller externalsvc in namespace services-6618 I0526 00:54:42.733782 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-6618, replica count: 2 I0526 00:54:45.784240 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0526 00:54:48.784557 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 26 00:54:48.839: INFO: Creating new exec pod May 26 00:54:52.872: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6618 execpodwqvzj -- /bin/sh -x -c nslookup clusterip-service' May 26 00:54:53.305: INFO: stderr: "I0526 00:54:53.010141 3020 log.go:172] (0xc0006dc790) (0xc00071b4a0) Create stream\nI0526 00:54:53.010213 3020 log.go:172] (0xc0006dc790) (0xc00071b4a0) Stream added, broadcasting: 1\nI0526 00:54:53.020779 3020 log.go:172] (0xc0006dc790) Reply frame received for 1\nI0526 00:54:53.020837 3020 log.go:172] (0xc0006dc790) (0xc000512d20) Create stream\nI0526 00:54:53.020850 3020 log.go:172] (0xc0006dc790) (0xc000512d20) Stream added, broadcasting: 3\nI0526 00:54:53.022285 3020 log.go:172] (0xc0006dc790) Reply frame received for 3\nI0526 00:54:53.022323 3020 log.go:172] (0xc0006dc790) (0xc0002521e0) Create stream\nI0526 00:54:53.022339 3020 log.go:172] (0xc0006dc790) (0xc0002521e0) Stream added, broadcasting: 5\nI0526 00:54:53.024287 3020 log.go:172] (0xc0006dc790) Reply frame received for 5\nI0526 00:54:53.095788 3020 log.go:172] (0xc0006dc790) Data frame received for 5\nI0526 00:54:53.095809 3020 
log.go:172] (0xc0002521e0) (5) Data frame handling\nI0526 00:54:53.095821 3020 log.go:172] (0xc0002521e0) (5) Data frame sent\n+ nslookup clusterip-service\nI0526 00:54:53.296343 3020 log.go:172] (0xc0006dc790) Data frame received for 3\nI0526 00:54:53.296365 3020 log.go:172] (0xc000512d20) (3) Data frame handling\nI0526 00:54:53.296379 3020 log.go:172] (0xc000512d20) (3) Data frame sent\nI0526 00:54:53.297104 3020 log.go:172] (0xc0006dc790) Data frame received for 3\nI0526 00:54:53.297237 3020 log.go:172] (0xc000512d20) (3) Data frame handling\nI0526 00:54:53.297250 3020 log.go:172] (0xc000512d20) (3) Data frame sent\nI0526 00:54:53.297923 3020 log.go:172] (0xc0006dc790) Data frame received for 3\nI0526 00:54:53.297941 3020 log.go:172] (0xc000512d20) (3) Data frame handling\nI0526 00:54:53.298025 3020 log.go:172] (0xc0006dc790) Data frame received for 5\nI0526 00:54:53.298039 3020 log.go:172] (0xc0002521e0) (5) Data frame handling\nI0526 00:54:53.299430 3020 log.go:172] (0xc0006dc790) Data frame received for 1\nI0526 00:54:53.299445 3020 log.go:172] (0xc00071b4a0) (1) Data frame handling\nI0526 00:54:53.299455 3020 log.go:172] (0xc00071b4a0) (1) Data frame sent\nI0526 00:54:53.299466 3020 log.go:172] (0xc0006dc790) (0xc00071b4a0) Stream removed, broadcasting: 1\nI0526 00:54:53.299479 3020 log.go:172] (0xc0006dc790) Go away received\nI0526 00:54:53.299795 3020 log.go:172] (0xc0006dc790) (0xc00071b4a0) Stream removed, broadcasting: 1\nI0526 00:54:53.299816 3020 log.go:172] (0xc0006dc790) (0xc000512d20) Stream removed, broadcasting: 3\nI0526 00:54:53.299828 3020 log.go:172] (0xc0006dc790) (0xc0002521e0) Stream removed, broadcasting: 5\n" May 26 00:54:53.305: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-6618.svc.cluster.local\tcanonical name = externalsvc.services-6618.svc.cluster.local.\nName:\texternalsvc.services-6618.svc.cluster.local\nAddress: 10.104.218.193\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6618, will wait for the garbage collector to delete the pods May 26 00:54:53.373: INFO: Deleting ReplicationController externalsvc took: 4.011948ms May 26 00:54:53.774: INFO: Terminating ReplicationController externalsvc pods took: 400.339621ms May 26 00:55:05.350: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:55:05.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6618" for this suite. 
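After the type change, the Service resolves as a CNAME instead of carrying a cluster IP, which is exactly what the nslookup output above demonstrates. The mutated spec might look like this (the externalName FQDN appears in the log; the rest is assumed):

apiVersion: v1
kind: Service
metadata:
  name: clusterip-service
  namespace: services-6618
spec:
  type: ExternalName
  # DNS now returns a CNAME to this FQDN instead of a ClusterIP A record
  externalName: externalsvc.services-6618.svc.cluster.local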
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:22.900 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":288,"completed":231,"skipped":3802,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:55:05.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium May 26 00:55:05.435: INFO: Waiting up to 5m0s for pod "pod-94f39a67-2c2a-4c3a-986f-7e9e7c289b33" in namespace "emptydir-8100" to be "Succeeded or Failed" May 26 00:55:05.468: INFO: Pod "pod-94f39a67-2c2a-4c3a-986f-7e9e7c289b33": Phase="Pending", Reason="", readiness=false. Elapsed: 32.140118ms May 26 00:55:07.471: INFO: Pod "pod-94f39a67-2c2a-4c3a-986f-7e9e7c289b33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035245057s May 26 00:55:09.475: INFO: Pod "pod-94f39a67-2c2a-4c3a-986f-7e9e7c289b33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039375871s STEP: Saw pod success May 26 00:55:09.475: INFO: Pod "pod-94f39a67-2c2a-4c3a-986f-7e9e7c289b33" satisfied condition "Succeeded or Failed" May 26 00:55:09.477: INFO: Trying to get logs from node latest-worker pod pod-94f39a67-2c2a-4c3a-986f-7e9e7c289b33 container test-container: STEP: delete the pod May 26 00:55:09.519: INFO: Waiting for pod pod-94f39a67-2c2a-4c3a-986f-7e9e7c289b33 to disappear May 26 00:55:09.548: INFO: Pod pod-94f39a67-2c2a-4c3a-986f-7e9e7c289b33 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:55:09.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8100" for this suite. 
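The "(non-root,0644,default)" variant means the container runs as a non-root UID, the test file is created with mode 0644, and the emptyDir uses the default (node-disk) medium. A pod of that shape might look like the sketch below; the UID and paths are assumptions, and the file mode is set by the test command, not by the manifest:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-default-example          # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                       # the "non-root" part of the variant
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                          # "default" medium, i.e. node disk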
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":232,"skipped":3818,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:55:09.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 26 00:55:09.667: INFO: Create a RollingUpdate DaemonSet May 26 00:55:09.671: INFO: Check that daemon pods launch on every node of the cluster May 26 00:55:09.699: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 00:55:09.744: INFO: Number of nodes with available pods: 0 May 26 00:55:09.744: INFO: Node latest-worker is running more than one daemon pod May 26 00:55:10.835: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 00:55:11.008: INFO: Number of nodes with available pods: 0 May 26 00:55:11.008: INFO: Node latest-worker is running more than one daemon pod May 26 00:55:11.765: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 00:55:11.797: INFO: Number of nodes with available pods: 0 May 26 00:55:11.797: INFO: Node latest-worker is running more than one daemon pod May 26 00:55:12.968: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 00:55:13.133: INFO: Number of nodes with available pods: 0 May 26 00:55:13.133: INFO: Node latest-worker is running more than one daemon pod May 26 00:55:13.748: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 00:55:13.751: INFO: Number of nodes with available pods: 0 May 26 00:55:13.751: INFO: Node latest-worker is running more than one daemon pod May 26 00:55:14.757: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 00:55:14.770: INFO: Number of nodes with available pods: 1 May 26 00:55:14.770: INFO: Node latest-worker is running more than one daemon pod May 26 00:55:15.750: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 00:55:15.753: INFO: Number of nodes with available 
pods: 2 May 26 00:55:15.753: INFO: Number of running nodes: 2, number of available pods: 2 May 26 00:55:15.753: INFO: Update the DaemonSet to trigger a rollout May 26 00:55:15.761: INFO: Updating DaemonSet daemon-set May 26 00:55:25.834: INFO: Roll back the DaemonSet before rollout is complete May 26 00:55:25.842: INFO: Updating DaemonSet daemon-set May 26 00:55:25.842: INFO: Make sure DaemonSet rollback is complete May 26 00:55:25.868: INFO: Wrong image for pod: daemon-set-k42qg. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 26 00:55:25.868: INFO: Pod daemon-set-k42qg is not available May 26 00:55:25.879: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 00:55:26.883: INFO: Wrong image for pod: daemon-set-k42qg. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 26 00:55:26.883: INFO: Pod daemon-set-k42qg is not available May 26 00:55:26.886: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 00:55:27.948: INFO: Wrong image for pod: daemon-set-k42qg. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 26 00:55:27.948: INFO: Pod daemon-set-k42qg is not available May 26 00:55:27.953: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 00:55:28.885: INFO: Pod daemon-set-jm4bq is not available May 26 00:55:28.890: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1253, will wait for the garbage collector to delete the pods May 26 00:55:28.956: INFO: Deleting DaemonSet.extensions daemon-set took: 6.424152ms May 26 00:55:29.357: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.544527ms May 26 00:55:35.260: INFO: Number of nodes with available pods: 0 May 26 00:55:35.260: INFO: Number of running nodes: 0, number of available pods: 0 May 26 00:55:35.263: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1253/daemonsets","resourceVersion":"7698612"},"items":null} May 26 00:55:35.266: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1253/pods","resourceVersion":"7698612"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:55:35.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1253" for this suite. 
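The rollback sequence in the log (working image, update to foo:non-existent, then roll back before completion) corresponds to a RollingUpdate DaemonSet. A sketch follows; only the httpd image is taken from the log, everything else is an assumption:

# To revert the bad image without restarting pods that never updated:
#   kubectl rollout undo daemonset/daemon-set
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine   # image from the log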
• [SLOW TEST:25.725 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":288,"completed":233,"skipped":3833,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:55:35.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium May 26 00:55:35.365: INFO: Waiting up to 5m0s for pod "pod-b7601ea7-e166-4cfc-bd6d-d8326fc459df" in namespace "emptydir-1895" to be "Succeeded or Failed" May 26 00:55:35.368: INFO: Pod "pod-b7601ea7-e166-4cfc-bd6d-d8326fc459df": Phase="Pending", Reason="", readiness=false. Elapsed: 3.037274ms May 26 00:55:37.372: INFO: Pod "pod-b7601ea7-e166-4cfc-bd6d-d8326fc459df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007018413s May 26 00:55:39.376: INFO: Pod "pod-b7601ea7-e166-4cfc-bd6d-d8326fc459df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011227301s STEP: Saw pod success May 26 00:55:39.377: INFO: Pod "pod-b7601ea7-e166-4cfc-bd6d-d8326fc459df" satisfied condition "Succeeded or Failed" May 26 00:55:39.380: INFO: Trying to get logs from node latest-worker pod pod-b7601ea7-e166-4cfc-bd6d-d8326fc459df container test-container: STEP: delete the pod May 26 00:55:39.422: INFO: Waiting for pod pod-b7601ea7-e166-4cfc-bd6d-d8326fc459df to disappear May 26 00:55:39.429: INFO: Pod pod-b7601ea7-e166-4cfc-bd6d-d8326fc459df no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:55:39.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1895" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":234,"skipped":3843,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:55:39.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 26 00:55:40.365: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 26 00:55:42.377: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726051340, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726051340, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726051340, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726051340, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 00:55:45.414: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:55:45.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9164" for this suite. STEP: Destroying namespace "webhook-9164-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.288 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":288,"completed":235,"skipped":3860,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:55:45.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 26 00:55:50.440: INFO: Successfully updated pod "annotationupdateff6ccae6-13c1-4aff-abeb-34ef5a16d429" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:55:54.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1744" for this suite. 
• [SLOW TEST:8.765 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":236,"skipped":3881,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:55:54.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-6c1f2bf4-c5bc-44e3-94e8-75799cf58235 STEP: Creating a pod to test consume secrets May 26 00:55:54.558: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2771d48a-5809-4066-845f-9b06949a4a50" in namespace "projected-6147" to be "Succeeded or Failed" May 26 00:55:54.562: INFO: Pod "pod-projected-secrets-2771d48a-5809-4066-845f-9b06949a4a50": Phase="Pending", Reason="", readiness=false. Elapsed: 3.996319ms May 26 00:55:56.576: INFO: Pod "pod-projected-secrets-2771d48a-5809-4066-845f-9b06949a4a50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018230698s May 26 00:55:58.581: INFO: Pod "pod-projected-secrets-2771d48a-5809-4066-845f-9b06949a4a50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022934189s STEP: Saw pod success May 26 00:55:58.581: INFO: Pod "pod-projected-secrets-2771d48a-5809-4066-845f-9b06949a4a50" satisfied condition "Succeeded or Failed" May 26 00:55:58.584: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-2771d48a-5809-4066-845f-9b06949a4a50 container projected-secret-volume-test: STEP: delete the pod May 26 00:55:58.632: INFO: Waiting for pod pod-projected-secrets-2771d48a-5809-4066-845f-9b06949a4a50 to disappear May 26 00:55:58.640: INFO: Pod pod-projected-secrets-2771d48a-5809-4066-845f-9b06949a4a50 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:55:58.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6147" for this suite. 
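"With mappings" means individual secret keys are remapped to custom file paths via items, instead of each key becoming a file named after itself. A sketch (the secret and key names are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-mapping-example  # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-secret/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret
      readOnly: true
  volumes:
  - name: secret-volume
    projected:
      sources:
      - secret:
          name: my-secret                 # hypothetical secret
          items:
          - key: data-1                   # key inside the secret
            path: new-path-data-1         # remapped filename in the volume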
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":237,"skipped":3927,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:55:58.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 26 00:55:58.701: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5766' May 26 00:55:59.554: INFO: stderr: "" May 26 00:55:59.554: INFO: stdout: "replicationcontroller/agnhost-master created\n" May 26 00:55:59.554: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5766' May 26 00:56:02.526: INFO: stderr: "" May 26 00:56:02.526: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 26 00:56:03.732: INFO: Selector matched 1 pods for map[app:agnhost] May 26 00:56:03.732: INFO: Found 0 / 1 May 26 00:56:04.531: INFO: Selector matched 1 pods for map[app:agnhost] May 26 00:56:04.531: INFO: Found 0 / 1 May 26 00:56:05.532: INFO: Selector matched 1 pods for map[app:agnhost] May 26 00:56:05.532: INFO: Found 1 / 1 May 26 00:56:05.532: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 26 00:56:05.535: INFO: Selector matched 1 pods for map[app:agnhost] May 26 00:56:05.535: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 26 00:56:05.536: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe pod agnhost-master-z9828 --namespace=kubectl-5766' May 26 00:56:05.669: INFO: stderr: "" May 26 00:56:05.669: INFO: stdout: "Name: agnhost-master-z9828\nNamespace: kubectl-5766\nPriority: 0\nNode: latest-worker/172.17.0.13\nStart Time: Tue, 26 May 2020 00:55:59 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.235\nIPs:\n IP: 10.244.1.235\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://564289c83537ca2d24b2ee07383ccc15590fdfa06dc1fd953ecc95fe64ab0d49\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 26 May 2020 00:56:04 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-whhxp (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-whhxp:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-whhxp\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 6s default-scheduler Successfully assigned kubectl-5766/agnhost-master-z9828 to latest-worker\n Normal Pulled 3s kubelet, latest-worker Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\" already present on machine\n Normal Created 2s kubelet, latest-worker Created container agnhost-master\n Normal Started 1s kubelet, latest-worker Started container agnhost-master\n" May 26 00:56:05.669: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-5766' May 26 00:56:05.801: INFO: stderr: "" May 26 00:56:05.801: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-5766\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 6s replication-controller Created pod: agnhost-master-z9828\n" May 26 00:56:05.801: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-5766' May 26 00:56:05.915: INFO: stderr: "" May 26 00:56:05.915: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-5766\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.107.155.135\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.235:6379\nSession Affinity: None\nEvents: \n" May 26 00:56:05.919: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe 
node latest-control-plane' May 26 00:56:06.041: INFO: stderr: "" May 26 00:56:06.041: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 29 Apr 2020 09:53:29 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Tue, 26 May 2020 00:56:03 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 26 May 2020 00:53:25 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 26 May 2020 00:53:25 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 26 May 2020 00:53:25 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 26 May 2020 00:53:25 +0000 Wed, 29 Apr 2020 09:54:06 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3939cf129c9d4d6e85e611ab996d9137\n System UUID: 2573ae1d-4849-412e-9a34-432f95556990\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.18.2\n Kube-Proxy Version: v1.18.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-66bff467f8-8n5vh 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 26d\n kube-system coredns-66bff467f8-qr7l5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 26d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 26d\n kube-system kindnet-8x7pf 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 26d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 26d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 26d\n kube-system kube-proxy-h8mhz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 26d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 26d\n local-path-storage local-path-provisioner-bd4bb6b75-bmf2h 0 (0%) 0 (0%) 0 (0%) 0 (0%) 26d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" May 26 00:56:06.041: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config describe namespace kubectl-5766' May 26 00:56:06.170: INFO: stderr: "" May 26 00:56:06.170: INFO: stdout: "Name: kubectl-5766\nLabels: e2e-framework=kubectl\n e2e-run=ce190069-f608-423e-bdd0-4e5c7740fb6a\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:56:06.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5766" for this suite. • [SLOW TEST:7.557 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1083 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":288,"completed":238,"skipped":3945,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:56:06.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 26 00:56:06.345: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c38fbecc-d8cd-448e-870e-092bec1265b1" in namespace "downward-api-6691" to be "Succeeded or Failed" May 26 00:56:06.403: INFO: Pod "downwardapi-volume-c38fbecc-d8cd-448e-870e-092bec1265b1": Phase="Pending", Reason="", readiness=false. Elapsed: 58.214766ms May 26 00:56:08.633: INFO: Pod "downwardapi-volume-c38fbecc-d8cd-448e-870e-092bec1265b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287704176s May 26 00:56:10.637: INFO: Pod "downwardapi-volume-c38fbecc-d8cd-448e-870e-092bec1265b1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.292528429s STEP: Saw pod success May 26 00:56:10.637: INFO: Pod "downwardapi-volume-c38fbecc-d8cd-448e-870e-092bec1265b1" satisfied condition "Succeeded or Failed" May 26 00:56:10.641: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-c38fbecc-d8cd-448e-870e-092bec1265b1 container client-container: STEP: delete the pod May 26 00:56:10.698: INFO: Waiting for pod downwardapi-volume-c38fbecc-d8cd-448e-870e-092bec1265b1 to disappear May 26 00:56:10.719: INFO: Pod downwardapi-volume-c38fbecc-d8cd-448e-870e-092bec1265b1 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:56:10.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6691" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":239,"skipped":3958,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:56:10.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5631.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-5631.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5631.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-5631.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5631.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5631.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-5631.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5631.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-5631.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5631.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 26 00:56:16.891: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:16.894: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:16.897: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:16.900: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:16.909: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:16.912: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:16.915: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5631.svc.cluster.local from pod 
dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:16.918: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:16.925: INFO: Lookups using dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5631.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5631.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local jessie_udp@dns-test-service-2.dns-5631.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5631.svc.cluster.local] May 26 00:56:21.931: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:21.934: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:21.937: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:21.940: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:21.949: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:21.953: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:21.956: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:21.959: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:21.966: INFO: Lookups using dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-5631.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5631.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local jessie_udp@dns-test-service-2.dns-5631.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5631.svc.cluster.local] May 26 00:56:26.931: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:26.935: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:26.938: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:26.942: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:26.952: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:26.956: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:26.958: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:26.962: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:26.969: INFO: Lookups using dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5631.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5631.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local jessie_udp@dns-test-service-2.dns-5631.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5631.svc.cluster.local] May 26 00:56:31.990: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:31.994: INFO: Unable to read 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:31.997: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:32.001: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:32.009: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:32.012: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:32.015: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:32.017: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:32.023: INFO: Lookups using dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5631.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5631.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local jessie_udp@dns-test-service-2.dns-5631.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5631.svc.cluster.local] May 26 00:56:36.931: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:36.935: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:36.940: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:36.943: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested 
resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:36.950: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:36.952: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:36.955: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:36.957: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:36.962: INFO: Lookups using dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5631.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5631.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local jessie_udp@dns-test-service-2.dns-5631.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5631.svc.cluster.local] May 26 00:56:41.930: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:41.934: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:41.938: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:41.942: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:41.952: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:41.956: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:41.959: INFO: Unable to read 
jessie_udp@dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:41.962: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5631.svc.cluster.local from pod dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642: the server could not find the requested resource (get pods dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642) May 26 00:56:41.969: INFO: Lookups using dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5631.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5631.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5631.svc.cluster.local jessie_udp@dns-test-service-2.dns-5631.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5631.svc.cluster.local] May 26 00:56:46.991: INFO: DNS probes using dns-5631/dns-test-2a3b9601-1b3f-4b27-ab31-ee0748a62642 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:56:47.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5631" for this suite. • [SLOW TEST:36.419 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":288,"completed":240,"skipped":3970,"failed":0} [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:56:47.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 26 00:56:47.650: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:56:56.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2016" for this suite. 
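The pattern just exercised, init containers that must each run to completion before a RestartAlways app container starts, is easy to reproduce outside the suite. A minimal sketch follows; the pod name, busybox image, and commands are assumptions, not the suite's actual spec.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo                 # illustrative name
spec:
  restartPolicy: Always           # the RestartAlways case under test
  initContainers:                 # run one at a time, each to completion
  - name: init-1
    image: busybox:1.29           # assumed image
    command: ['sh', '-c', 'true']
  - name: init-2
    image: busybox:1.29
    command: ['sh', '-c', 'true']
  containers:
  - name: main
    image: busybox:1.29
    command: ['sh', '-c', 'sleep 3600']
EOF
# Both init containers should report Completed before 'main' is started:
kubectl get pod init-demo -o jsonpath='{.status.initContainerStatuses[*].state.terminated.reason}'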
• [SLOW TEST:9.405 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":288,"completed":241,"skipped":3970,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:56:56.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 26 00:56:56.639: INFO: >>> kubeConfig: /root/.kube/config May 26 00:56:59.610: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:57:10.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3493" for this suite. 
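The assertion above, that CRs in different groups both show up in the published OpenAPI document, can be spot-checked by hand. A sketch under assumed names (the suite generates its own random e2e-test-* CRDs; the group groupa.example.com and kind Foo are made up here):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.groupa.example.com   # must be <plural>.<group>
spec:
  group: groupa.example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF
# Repeat with a second group (say groupb.example.com), then confirm each
# kind is resolvable from the published schema:
kubectl explain foos --api-version=groupa.example.com/v1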
• [SLOW TEST:13.749 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":288,"completed":242,"skipped":3991,"failed":0} SSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:57:10.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command May 26 00:57:10.387: INFO: Waiting up to 5m0s for pod "var-expansion-39cf776a-36f7-4895-9570-3c555e76906e" in namespace "var-expansion-7548" to be "Succeeded or Failed" May 26 00:57:10.398: INFO: Pod "var-expansion-39cf776a-36f7-4895-9570-3c555e76906e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.244948ms May 26 00:57:12.402: INFO: Pod "var-expansion-39cf776a-36f7-4895-9570-3c555e76906e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0153986s May 26 00:57:14.406: INFO: Pod "var-expansion-39cf776a-36f7-4895-9570-3c555e76906e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019399975s STEP: Saw pod success May 26 00:57:14.406: INFO: Pod "var-expansion-39cf776a-36f7-4895-9570-3c555e76906e" satisfied condition "Succeeded or Failed" May 26 00:57:14.409: INFO: Trying to get logs from node latest-worker pod var-expansion-39cf776a-36f7-4895-9570-3c555e76906e container dapi-container: STEP: delete the pod May 26 00:57:14.543: INFO: Waiting for pod var-expansion-39cf776a-36f7-4895-9570-3c555e76906e to disappear May 26 00:57:14.554: INFO: Pod var-expansion-39cf776a-36f7-4895-9570-3c555e76906e no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:57:14.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7548" for this suite. 
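The substitution under test is Kubernetes' own $(VAR) expansion in a container's command/args, resolved from the container's env before the process starts. A minimal sketch, with an assumed busybox image and a made-up variable name:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29           # assumed image
    env:
    - name: MESSAGE
      value: hello
    # $(MESSAGE) is expanded by the kubelet, not by the shell;
    # write $$(MESSAGE) to keep a literal $(MESSAGE).
    command: ['sh', '-c', 'echo $(MESSAGE)']
EOF
kubectl logs var-expansion-demo   # expected output: hello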
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":288,"completed":243,"skipped":3997,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:57:14.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-db17290a-a8db-4221-8908-44d1de42439b STEP: Creating a pod to test consume configMaps May 26 00:57:14.680: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2f05c32f-b2d0-4b8a-a658-6d6619160b63" in namespace "projected-3778" to be "Succeeded or Failed" May 26 00:57:14.705: INFO: Pod "pod-projected-configmaps-2f05c32f-b2d0-4b8a-a658-6d6619160b63": Phase="Pending", Reason="", readiness=false. Elapsed: 25.615749ms May 26 00:57:16.710: INFO: Pod "pod-projected-configmaps-2f05c32f-b2d0-4b8a-a658-6d6619160b63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030178945s May 26 00:57:18.715: INFO: Pod "pod-projected-configmaps-2f05c32f-b2d0-4b8a-a658-6d6619160b63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034908895s STEP: Saw pod success May 26 00:57:18.715: INFO: Pod "pod-projected-configmaps-2f05c32f-b2d0-4b8a-a658-6d6619160b63" satisfied condition "Succeeded or Failed" May 26 00:57:18.718: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-2f05c32f-b2d0-4b8a-a658-6d6619160b63 container projected-configmap-volume-test: STEP: delete the pod May 26 00:57:18.756: INFO: Waiting for pod pod-projected-configmaps-2f05c32f-b2d0-4b8a-a658-6d6619160b63 to disappear May 26 00:57:18.763: INFO: Pod pod-projected-configmaps-2f05c32f-b2d0-4b8a-a658-6d6619160b63 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:57:18.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3778" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":244,"skipped":4007,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:57:18.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... May 26 00:57:19.004: INFO: Created pod &Pod{ObjectMeta:{dns-8467 dns-8467 /api/v1/namespaces/dns-8467/pods/dns-8467 8609c514-8840-4172-92c3-43d0919c46e7 7699314 0 2020-05-26 00:57:19 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-05-26 00:57:18 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-52fs7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-52fs7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-52fs7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affini
ty:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 00:57:19.014: INFO: The status of Pod dns-8467 is Pending, waiting for it to be Running (with Ready = true) May 26 00:57:21.018: INFO: The status of Pod dns-8467 is Pending, waiting for it to be Running (with Ready = true) May 26 00:57:23.044: INFO: The status of Pod dns-8467 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... May 26 00:57:23.044: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-8467 PodName:dns-8467 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 00:57:23.044: INFO: >>> kubeConfig: /root/.kube/config I0526 00:57:23.077060 7 log.go:172] (0xc00571c630) (0xc002024dc0) Create stream I0526 00:57:23.077091 7 log.go:172] (0xc00571c630) (0xc002024dc0) Stream added, broadcasting: 1 I0526 00:57:23.079293 7 log.go:172] (0xc00571c630) Reply frame received for 1 I0526 00:57:23.079341 7 log.go:172] (0xc00571c630) (0xc002483e00) Create stream I0526 00:57:23.079358 7 log.go:172] (0xc00571c630) (0xc002483e00) Stream added, broadcasting: 3 I0526 00:57:23.080424 7 log.go:172] (0xc00571c630) Reply frame received for 3 I0526 00:57:23.080446 7 log.go:172] (0xc00571c630) (0xc001baf360) Create stream I0526 00:57:23.080459 7 log.go:172] (0xc00571c630) (0xc001baf360) Stream added, broadcasting: 5 I0526 00:57:23.081502 7 log.go:172] (0xc00571c630) Reply frame received for 5 I0526 00:57:23.172765 7 log.go:172] (0xc00571c630) Data frame received for 3 I0526 00:57:23.172806 7 log.go:172] (0xc002483e00) (3) Data frame handling I0526 00:57:23.172827 7 log.go:172] (0xc002483e00) (3) Data frame sent I0526 00:57:23.174819 7 log.go:172] (0xc00571c630) Data frame received for 3 I0526 00:57:23.174847 7 log.go:172] (0xc002483e00) (3) Data frame handling I0526 00:57:23.174885 7 log.go:172] (0xc00571c630) Data frame received for 5 I0526 00:57:23.174903 7 log.go:172] (0xc001baf360) (5) Data frame handling I0526 00:57:23.177021 7 log.go:172] (0xc00571c630) Data frame received for 1 I0526 00:57:23.177047 7 log.go:172] (0xc002024dc0) (1) Data frame handling I0526 00:57:23.177067 7 log.go:172] (0xc002024dc0) (1) Data frame sent I0526 00:57:23.177087 7 log.go:172] (0xc00571c630) (0xc002024dc0) Stream removed, broadcasting: 1 I0526 00:57:23.177270 7 log.go:172] (0xc00571c630) Go away received I0526 00:57:23.177380 7 log.go:172] (0xc00571c630) (0xc002024dc0) Stream removed, broadcasting: 1 I0526 00:57:23.177402 7 log.go:172] (0xc00571c630) (0xc002483e00) Stream removed, 
broadcasting: 3 I0526 00:57:23.177410 7 log.go:172] (0xc00571c630) (0xc001baf360) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... May 26 00:57:23.177: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-8467 PodName:dns-8467 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 00:57:23.177: INFO: >>> kubeConfig: /root/.kube/config I0526 00:57:23.207946 7 log.go:172] (0xc00571cc60) (0xc0020252c0) Create stream I0526 00:57:23.207980 7 log.go:172] (0xc00571cc60) (0xc0020252c0) Stream added, broadcasting: 1 I0526 00:57:23.210017 7 log.go:172] (0xc00571cc60) Reply frame received for 1 I0526 00:57:23.210047 7 log.go:172] (0xc00571cc60) (0xc0011b81e0) Create stream I0526 00:57:23.210057 7 log.go:172] (0xc00571cc60) (0xc0011b81e0) Stream added, broadcasting: 3 I0526 00:57:23.211041 7 log.go:172] (0xc00571cc60) Reply frame received for 3 I0526 00:57:23.211094 7 log.go:172] (0xc00571cc60) (0xc001baf400) Create stream I0526 00:57:23.211116 7 log.go:172] (0xc00571cc60) (0xc001baf400) Stream added, broadcasting: 5 I0526 00:57:23.212127 7 log.go:172] (0xc00571cc60) Reply frame received for 5 I0526 00:57:23.310261 7 log.go:172] (0xc00571cc60) Data frame received for 3 I0526 00:57:23.310384 7 log.go:172] (0xc0011b81e0) (3) Data frame handling I0526 00:57:23.310438 7 log.go:172] (0xc0011b81e0) (3) Data frame sent I0526 00:57:23.312421 7 log.go:172] (0xc00571cc60) Data frame received for 3 I0526 00:57:23.312447 7 log.go:172] (0xc0011b81e0) (3) Data frame handling I0526 00:57:23.312496 7 log.go:172] (0xc00571cc60) Data frame received for 5 I0526 00:57:23.312512 7 log.go:172] (0xc001baf400) (5) Data frame handling I0526 00:57:23.314901 7 log.go:172] (0xc00571cc60) Data frame received for 1 I0526 00:57:23.314914 7 log.go:172] (0xc0020252c0) (1) Data frame handling I0526 00:57:23.314920 7 log.go:172] (0xc0020252c0) (1) Data frame sent I0526 00:57:23.314929 7 log.go:172] (0xc00571cc60) (0xc0020252c0) Stream removed, broadcasting: 1 I0526 00:57:23.314974 7 log.go:172] (0xc00571cc60) Go away received I0526 00:57:23.315011 7 log.go:172] (0xc00571cc60) (0xc0020252c0) Stream removed, broadcasting: 1 I0526 00:57:23.315022 7 log.go:172] (0xc00571cc60) (0xc0011b81e0) Stream removed, broadcasting: 3 I0526 00:57:23.315043 7 log.go:172] (0xc00571cc60) (0xc001baf400) Stream removed, broadcasting: 5 May 26 00:57:23.315: INFO: Deleting pod dns-8467... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:57:23.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8467" for this suite. 
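The pod the test created is the canonical shape for custom DNS: dnsPolicy: None discards the cluster's resolver settings, and the pod's /etc/resolv.conf is generated solely from dnsConfig. A sketch reusing the same nameserver and search values seen above (pod name assumed):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dns-config-demo           # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: agnhost
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
    args: ['pause']
  dnsPolicy: None                 # ignore cluster DNS entirely
  dnsConfig:
    nameservers: ['1.1.1.1']
    searches: ['resolv.conf.local']
EOF
# resolv.conf should contain exactly the configured entries:
kubectl exec dns-config-demo -- cat /etc/resolv.conf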
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":288,"completed":245,"skipped":4032,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:57:23.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-2855 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-2855 STEP: Deleting pre-stop pod May 26 00:57:38.935: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:57:38.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-2855" for this suite. 
• [SLOW TEST:15.611 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":288,"completed":246,"skipped":4077,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:57:38.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-7160 STEP: creating service affinity-nodeport in namespace services-7160 STEP: creating replication controller affinity-nodeport in namespace services-7160 I0526 00:57:39.490206 7 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-7160, replica count: 3 I0526 00:57:42.540618 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0526 00:57:45.540820 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 26 00:57:45.550: INFO: Creating new exec pod May 26 00:57:50.569: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7160 execpod-affinityh5dsm -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' May 26 00:57:50.817: INFO: stderr: "I0526 00:57:50.710304 3188 log.go:172] (0xc000afb810) (0xc000b163c0) Create stream\nI0526 00:57:50.710368 3188 log.go:172] (0xc000afb810) (0xc000b163c0) Stream added, broadcasting: 1\nI0526 00:57:50.714858 3188 log.go:172] (0xc000afb810) Reply frame received for 1\nI0526 00:57:50.714945 3188 log.go:172] (0xc000afb810) (0xc0006a41e0) Create stream\nI0526 00:57:50.714977 3188 log.go:172] (0xc000afb810) (0xc0006a41e0) Stream added, broadcasting: 3\nI0526 00:57:50.716095 3188 log.go:172] (0xc000afb810) Reply frame received for 3\nI0526 00:57:50.716122 3188 log.go:172] (0xc000afb810) (0xc0006a5180) Create stream\nI0526 00:57:50.716137 3188 log.go:172] (0xc000afb810) (0xc0006a5180) Stream added, broadcasting: 5\nI0526 00:57:50.717318 3188 log.go:172] (0xc000afb810) Reply frame received for 5\nI0526 00:57:50.810144 3188 log.go:172] (0xc000afb810) Data frame received for 5\nI0526 00:57:50.810176 3188 log.go:172] (0xc0006a5180) (5) Data frame handling\nI0526 00:57:50.810186 3188 log.go:172] (0xc0006a5180) (5) Data frame sent\nI0526 00:57:50.810195 3188 log.go:172] (0xc000afb810) Data frame received for 5\n+ nc -zv -t 
-w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0526 00:57:50.810235 3188 log.go:172] (0xc000afb810) Data frame received for 3\nI0526 00:57:50.810309 3188 log.go:172] (0xc0006a41e0) (3) Data frame handling\nI0526 00:57:50.810349 3188 log.go:172] (0xc0006a5180) (5) Data frame handling\nI0526 00:57:50.811785 3188 log.go:172] (0xc000afb810) Data frame received for 1\nI0526 00:57:50.811803 3188 log.go:172] (0xc000b163c0) (1) Data frame handling\nI0526 00:57:50.811811 3188 log.go:172] (0xc000b163c0) (1) Data frame sent\nI0526 00:57:50.811825 3188 log.go:172] (0xc000afb810) (0xc000b163c0) Stream removed, broadcasting: 1\nI0526 00:57:50.811836 3188 log.go:172] (0xc000afb810) Go away received\nI0526 00:57:50.812203 3188 log.go:172] (0xc000afb810) (0xc000b163c0) Stream removed, broadcasting: 1\nI0526 00:57:50.812221 3188 log.go:172] (0xc000afb810) (0xc0006a41e0) Stream removed, broadcasting: 3\nI0526 00:57:50.812229 3188 log.go:172] (0xc000afb810) (0xc0006a5180) Stream removed, broadcasting: 5\n" May 26 00:57:50.817: INFO: stdout: "" May 26 00:57:50.818: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7160 execpod-affinityh5dsm -- /bin/sh -x -c nc -zv -t -w 2 10.106.45.81 80' May 26 00:57:51.004: INFO: stderr: "I0526 00:57:50.940045 3208 log.go:172] (0xc0006ea790) (0xc000320640) Create stream\nI0526 00:57:50.940088 3208 log.go:172] (0xc0006ea790) (0xc000320640) Stream added, broadcasting: 1\nI0526 00:57:50.942280 3208 log.go:172] (0xc0006ea790) Reply frame received for 1\nI0526 00:57:50.942333 3208 log.go:172] (0xc0006ea790) (0xc000b02000) Create stream\nI0526 00:57:50.942352 3208 log.go:172] (0xc0006ea790) (0xc000b02000) Stream added, broadcasting: 3\nI0526 00:57:50.943085 3208 log.go:172] (0xc0006ea790) Reply frame received for 3\nI0526 00:57:50.943129 3208 log.go:172] (0xc0006ea790) (0xc000706640) Create stream\nI0526 00:57:50.943141 3208 log.go:172] (0xc0006ea790) (0xc000706640) Stream added, broadcasting: 5\nI0526 00:57:50.943964 3208 log.go:172] (0xc0006ea790) Reply frame received for 5\nI0526 00:57:50.995349 3208 log.go:172] (0xc0006ea790) Data frame received for 3\nI0526 00:57:50.995392 3208 log.go:172] (0xc000b02000) (3) Data frame handling\nI0526 00:57:50.995415 3208 log.go:172] (0xc0006ea790) Data frame received for 5\nI0526 00:57:50.995425 3208 log.go:172] (0xc000706640) (5) Data frame handling\nI0526 00:57:50.995436 3208 log.go:172] (0xc000706640) (5) Data frame sent\nI0526 00:57:50.995446 3208 log.go:172] (0xc0006ea790) Data frame received for 5\nI0526 00:57:50.995455 3208 log.go:172] (0xc000706640) (5) Data frame handling\n+ nc -zv -t -w 2 10.106.45.81 80\nConnection to 10.106.45.81 80 port [tcp/http] succeeded!\nI0526 00:57:50.996995 3208 log.go:172] (0xc0006ea790) Data frame received for 1\nI0526 00:57:50.997072 3208 log.go:172] (0xc000320640) (1) Data frame handling\nI0526 00:57:50.997104 3208 log.go:172] (0xc000320640) (1) Data frame sent\nI0526 00:57:50.997257 3208 log.go:172] (0xc0006ea790) (0xc000320640) Stream removed, broadcasting: 1\nI0526 00:57:50.997275 3208 log.go:172] (0xc0006ea790) Go away received\nI0526 00:57:50.997732 3208 log.go:172] (0xc0006ea790) (0xc000320640) Stream removed, broadcasting: 1\nI0526 00:57:50.997757 3208 log.go:172] (0xc0006ea790) (0xc000b02000) Stream removed, broadcasting: 3\nI0526 00:57:50.997779 3208 log.go:172] (0xc0006ea790) (0xc000706640) Stream removed, broadcasting: 5\n" May 26 00:57:51.004: INFO: stdout: "" May 
26 00:57:51.004: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7160 execpod-affinityh5dsm -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 32728' May 26 00:57:51.217: INFO: stderr: "I0526 00:57:51.134932 3232 log.go:172] (0xc000a50000) (0xc00053cd20) Create stream\nI0526 00:57:51.135006 3232 log.go:172] (0xc000a50000) (0xc00053cd20) Stream added, broadcasting: 1\nI0526 00:57:51.138251 3232 log.go:172] (0xc000a50000) Reply frame received for 1\nI0526 00:57:51.138297 3232 log.go:172] (0xc000a50000) (0xc000522460) Create stream\nI0526 00:57:51.138312 3232 log.go:172] (0xc000a50000) (0xc000522460) Stream added, broadcasting: 3\nI0526 00:57:51.139601 3232 log.go:172] (0xc000a50000) Reply frame received for 3\nI0526 00:57:51.139657 3232 log.go:172] (0xc000a50000) (0xc000308a00) Create stream\nI0526 00:57:51.139678 3232 log.go:172] (0xc000a50000) (0xc000308a00) Stream added, broadcasting: 5\nI0526 00:57:51.140869 3232 log.go:172] (0xc000a50000) Reply frame received for 5\nI0526 00:57:51.205295 3232 log.go:172] (0xc000a50000) Data frame received for 3\nI0526 00:57:51.205327 3232 log.go:172] (0xc000522460) (3) Data frame handling\nI0526 00:57:51.205544 3232 log.go:172] (0xc000a50000) Data frame received for 5\nI0526 00:57:51.205583 3232 log.go:172] (0xc000308a00) (5) Data frame handling\nI0526 00:57:51.205614 3232 log.go:172] (0xc000308a00) (5) Data frame sent\nI0526 00:57:51.205628 3232 log.go:172] (0xc000a50000) Data frame received for 5\nI0526 00:57:51.205657 3232 log.go:172] (0xc000308a00) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 32728\nConnection to 172.17.0.13 32728 port [tcp/32728] succeeded!\nI0526 00:57:51.207606 3232 log.go:172] (0xc000a50000) Data frame received for 1\nI0526 00:57:51.207628 3232 log.go:172] (0xc00053cd20) (1) Data frame handling\nI0526 00:57:51.207643 3232 log.go:172] (0xc00053cd20) (1) Data frame sent\nI0526 00:57:51.207659 3232 log.go:172] (0xc000a50000) (0xc00053cd20) Stream removed, broadcasting: 1\nI0526 00:57:51.207964 3232 log.go:172] (0xc000a50000) Go away received\nI0526 00:57:51.208027 3232 log.go:172] (0xc000a50000) (0xc00053cd20) Stream removed, broadcasting: 1\nI0526 00:57:51.208042 3232 log.go:172] (0xc000a50000) (0xc000522460) Stream removed, broadcasting: 3\nI0526 00:57:51.208049 3232 log.go:172] (0xc000a50000) (0xc000308a00) Stream removed, broadcasting: 5\n" May 26 00:57:51.217: INFO: stdout: "" May 26 00:57:51.217: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7160 execpod-affinityh5dsm -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 32728' May 26 00:57:51.437: INFO: stderr: "I0526 00:57:51.353961 3253 log.go:172] (0xc00090cbb0) (0xc0009540a0) Create stream\nI0526 00:57:51.354012 3253 log.go:172] (0xc00090cbb0) (0xc0009540a0) Stream added, broadcasting: 1\nI0526 00:57:51.358347 3253 log.go:172] (0xc00090cbb0) Reply frame received for 1\nI0526 00:57:51.358394 3253 log.go:172] (0xc00090cbb0) (0xc000869ea0) Create stream\nI0526 00:57:51.358410 3253 log.go:172] (0xc00090cbb0) (0xc000869ea0) Stream added, broadcasting: 3\nI0526 00:57:51.359608 3253 log.go:172] (0xc00090cbb0) Reply frame received for 3\nI0526 00:57:51.359658 3253 log.go:172] (0xc00090cbb0) (0xc000558d20) Create stream\nI0526 00:57:51.359677 3253 log.go:172] (0xc00090cbb0) (0xc000558d20) Stream added, broadcasting: 5\nI0526 00:57:51.360801 3253 log.go:172] (0xc00090cbb0) Reply frame received for 5\nI0526 00:57:51.427225 
3253 log.go:172] (0xc00090cbb0) Data frame received for 5\nI0526 00:57:51.427259 3253 log.go:172] (0xc000558d20) (5) Data frame handling\nI0526 00:57:51.427280 3253 log.go:172] (0xc000558d20) (5) Data frame sent\nI0526 00:57:51.427292 3253 log.go:172] (0xc00090cbb0) Data frame received for 5\nI0526 00:57:51.427301 3253 log.go:172] (0xc000558d20) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 32728\nConnection to 172.17.0.12 32728 port [tcp/32728] succeeded!\nI0526 00:57:51.427340 3253 log.go:172] (0xc000558d20) (5) Data frame sent\nI0526 00:57:51.427770 3253 log.go:172] (0xc00090cbb0) Data frame received for 3\nI0526 00:57:51.427792 3253 log.go:172] (0xc000869ea0) (3) Data frame handling\nI0526 00:57:51.428005 3253 log.go:172] (0xc00090cbb0) Data frame received for 5\nI0526 00:57:51.428024 3253 log.go:172] (0xc000558d20) (5) Data frame handling\nI0526 00:57:51.429885 3253 log.go:172] (0xc00090cbb0) Data frame received for 1\nI0526 00:57:51.429915 3253 log.go:172] (0xc0009540a0) (1) Data frame handling\nI0526 00:57:51.429939 3253 log.go:172] (0xc0009540a0) (1) Data frame sent\nI0526 00:57:51.429969 3253 log.go:172] (0xc00090cbb0) (0xc0009540a0) Stream removed, broadcasting: 1\nI0526 00:57:51.429996 3253 log.go:172] (0xc00090cbb0) Go away received\nI0526 00:57:51.430315 3253 log.go:172] (0xc00090cbb0) (0xc0009540a0) Stream removed, broadcasting: 1\nI0526 00:57:51.430383 3253 log.go:172] (0xc00090cbb0) (0xc000869ea0) Stream removed, broadcasting: 3\nI0526 00:57:51.430400 3253 log.go:172] (0xc00090cbb0) (0xc000558d20) Stream removed, broadcasting: 5\n" May 26 00:57:51.437: INFO: stdout: "" May 26 00:57:51.437: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7160 execpod-affinityh5dsm -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:32728/ ; done' May 26 00:57:51.794: INFO: stderr: "I0526 00:57:51.579564 3275 log.go:172] (0xc00055c790) (0xc0004e4fa0) Create stream\nI0526 00:57:51.579628 3275 log.go:172] (0xc00055c790) (0xc0004e4fa0) Stream added, broadcasting: 1\nI0526 00:57:51.583077 3275 log.go:172] (0xc00055c790) Reply frame received for 1\nI0526 00:57:51.583137 3275 log.go:172] (0xc00055c790) (0xc00058c5a0) Create stream\nI0526 00:57:51.583164 3275 log.go:172] (0xc00055c790) (0xc00058c5a0) Stream added, broadcasting: 3\nI0526 00:57:51.584446 3275 log.go:172] (0xc00055c790) Reply frame received for 3\nI0526 00:57:51.584473 3275 log.go:172] (0xc00055c790) (0xc0002741e0) Create stream\nI0526 00:57:51.584481 3275 log.go:172] (0xc00055c790) (0xc0002741e0) Stream added, broadcasting: 5\nI0526 00:57:51.585985 3275 log.go:172] (0xc00055c790) Reply frame received for 5\nI0526 00:57:51.642009 3275 log.go:172] (0xc00055c790) Data frame received for 5\nI0526 00:57:51.642044 3275 log.go:172] (0xc0002741e0) (5) Data frame handling\nI0526 00:57:51.642058 3275 log.go:172] (0xc0002741e0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32728/\nI0526 00:57:51.642077 3275 log.go:172] (0xc00055c790) Data frame received for 3\nI0526 00:57:51.642086 3275 log.go:172] (0xc00058c5a0) (3) Data frame handling\nI0526 00:57:51.642096 3275 log.go:172] (0xc00058c5a0) (3) Data frame sent\nI0526 00:57:51.697843 3275 log.go:172] (0xc00055c790) Data frame received for 3\nI0526 00:57:51.697919 3275 log.go:172] (0xc00058c5a0) (3) Data frame handling\nI0526 00:57:51.697958 3275 log.go:172] (0xc00058c5a0) (3) Data frame sent\nI0526 00:57:51.698363 
3275 log.go:172] (0xc00055c790) Data frame received for 5\nI0526 00:57:51.698390 3275 log.go:172] (0xc0002741e0) (5) Data frame handling\nI0526 00:57:51.698403 3275 log.go:172] (0xc0002741e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32728/\nI0526 00:57:51.698435 3275 log.go:172] (0xc00055c790) Data frame received for 3\nI0526 00:57:51.698587 3275 log.go:172] (0xc00058c5a0) (3) Data frame handling\nI0526 00:57:51.698631 3275 log.go:172] (0xc00058c5a0) (3) Data frame sent\nI0526 00:57:51.706666 3275 log.go:172] (0xc00055c790) Data frame received for 3\nI0526 00:57:51.706695 3275 log.go:172] (0xc00058c5a0) (3) Data frame handling\nI0526 00:57:51.706724 3275 log.go:172] (0xc00058c5a0) (3) Data frame sent\nI0526 00:57:51.707056 3275 log.go:172] (0xc00055c790) Data frame received for 5\nI0526 00:57:51.707079 3275 log.go:172] (0xc00055c790) Data frame received for 3\nI0526 00:57:51.707107 3275 log.go:172] (0xc00058c5a0) (3) Data frame handling\nI0526 00:57:51.707130 3275 log.go:172] (0xc00058c5a0) (3) Data frame sent\nI0526 00:57:51.707160 3275 log.go:172] (0xc0002741e0) (5) Data frame handling\nI0526 00:57:51.707182 3275 log.go:172] (0xc0002741e0) (5) Data frame sent\nI0526 00:57:51.707200 3275 log.go:172] (0xc00055c790) Data frame received for 5\nI0526 00:57:51.707217 3275 log.go:172] (0xc0002741e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32728/\nI0526 00:57:51.707243 3275 log.go:172] (0xc0002741e0) (5) Data frame sent\nI0526 00:57:51.713999 3275 log.go:172] (0xc00055c790) Data frame received for 3\nI0526 00:57:51.714011 3275 log.go:172] (0xc00058c5a0) (3) Data frame handling\nI0526 00:57:51.714016 3275 log.go:172] (0xc00058c5a0) (3) Data frame sent\nI0526 00:57:51.714586 3275 log.go:172] (0xc00055c790) Data frame received for 5\nI0526 00:57:51.714604 3275 log.go:172] (0xc0002741e0) (5) Data frame handling\nI0526 00:57:51.714616 3275 log.go:172] (0xc0002741e0) (5) Data frame sent\nI0526 00:57:51.714625 3275 log.go:172] (0xc00055c790) Data frame received for 5\nI0526 00:57:51.714637 3275 log.go:172] (0xc0002741e0) (5) Data frame handling\nI0526 00:57:51.714646 3275 log.go:172] (0xc00055c790) Data frame received for 3\nI0526 00:57:51.714651 3275 log.go:172] (0xc00058c5a0) (3) Data frame handling\nI0526 00:57:51.714659 3275 log.go:172] (0xc00058c5a0) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32728/\nI0526 00:57:51.714679 3275 log.go:172] (0xc0002741e0) (5) Data frame sent\nI0526 00:57:51.720304 3275 log.go:172] (0xc00055c790) Data frame received for 3\nI0526 00:57:51.720330 3275 log.go:172] (0xc00058c5a0) (3) Data frame handling\nI0526 00:57:51.720350 3275 log.go:172] (0xc00058c5a0) (3) Data frame sent\nI0526 00:57:51.720809 3275 log.go:172] (0xc00055c790) Data frame received for 5\nI0526 00:57:51.720836 3275 log.go:172] (0xc0002741e0) (5) Data frame handling\nI0526 00:57:51.720854 3275 log.go:172] (0xc0002741e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32728/\nI0526 00:57:51.720885 3275 log.go:172] (0xc00055c790) Data frame received for 3\nI0526 00:57:51.720899 3275 log.go:172] (0xc00058c5a0) (3) Data frame handling\nI0526 00:57:51.720906 3275 log.go:172] (0xc00058c5a0) (3) Data frame sent\nI0526 00:57:51.726194 3275 log.go:172] (0xc00055c790) Data frame received for 3\nI0526 00:57:51.726222 3275 log.go:172] (0xc00058c5a0) (3) Data frame handling\nI0526 00:57:51.726251 3275 log.go:172] (0xc00058c5a0) (3) Data frame sent\nI0526 
00:57:51.726820 3275 log.go:172] (0xc00055c790) Data frame received for 5\nI0526 00:57:51.726841 3275 log.go:172] (0xc0002741e0) (5) Data frame handling\nI0526 00:57:51.726849 3275 log.go:172] (0xc0002741e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32728/\nI0526 00:57:51.726869 3275 log.go:172] (0xc00055c790) Data frame received for 3\nI0526 00:57:51.726896 3275 log.go:172] (0xc00058c5a0) (3) Data frame handling\nI0526 00:57:51.726916 3275 log.go:172] (0xc00058c5a0) (3) Data frame sent\nI0526 00:57:51.731745 3275 log.go:172] (0xc00055c790) Data frame received for 3\nI0526 00:57:51.731763 3275 log.go:172] (0xc00058c5a0) (3) Data frame handling\nI0526 00:57:51.731782 3275 log.go:172] (0xc00058c5a0) (3) Data frame sent\nI0526 00:57:51.732157 3275 log.go:172] (0xc00055c790) Data frame received for 3\nI0526 00:57:51.732177 3275 log.go:172] (0xc00058c5a0) (3) Data frame handling\nI0526 00:57:51.732192 3275 log.go:172] (0xc00058c5a0) (3) Data frame sent\nI0526 00:57:51.732205 3275 log.go:172] (0xc00055c790) Data frame received for 5\nI0526 00:57:51.732211 3275 log.go:172] (0xc0002741e0) (5) Data frame handling\nI0526 00:57:51.732217 3275 log.go:172] (0xc0002741e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32728/\nI0526 00:57:51.736785 3275 log.go:172] (0xc00055c790) Data frame received for 3\nI0526 00:57:51.736803 3275 log.go:172] (0xc00058c5a0) (3) Data frame handling\nI0526 00:57:51.736818 3275 log.go:172] (0xc00058c5a0) (3) Data frame sent\nI0526 00:57:51.737441 3275 log.go:172] (0xc00055c790) Data frame received for 5\nI0526 00:57:51.737578 3275 log.go:172] (0xc0002741e0) (5) Data frame handling\nI0526 00:57:51.737592 3275 log.go:172] (0xc0002741e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32728/\nI0526 00:57:51.737827 3275 log.go:172] (0xc00055c790) Data frame received for 3\nI0526 00:57:51.737843 3275 log.go:172] (0xc00058c5a0) (3) Data frame handling\nI0526 00:57:51.737858 3275 log.go:172] (0xc00058c5a0) (3) Data frame sent\nI0526 00:57:51.743993 3275 log.go:172] (0xc00055c790) Data frame received for 3\nI0526 00:57:51.744021 3275 log.go:172] (0xc00058c5a0) (3) Data frame handling\nI0526 00:57:51.744047 3275 log.go:172] (0xc00058c5a0) (3) Data frame sent\nI0526 00:57:51.744649 3275 log.go:172] (0xc00055c790) Data frame received for 3\nI0526 00:57:51.744681 3275 log.go:172] (0xc00058c5a0) (3) Data frame handling\nI0526 00:57:51.744699 3275 log.go:172] (0xc00058c5a0) (3) Data frame sent\nI0526 00:57:51.744721 3275 log.go:172] (0xc00055c790) Data frame received for 5\nI0526 00:57:51.744735 3275 log.go:172] (0xc0002741e0) (5) Data frame handling\nI0526 00:57:51.744757 3275 log.go:172] (0xc0002741e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32728/\nI0526 00:57:51.749864 3275 log.go:172] (0xc00055c790) Data frame received for 3\nI0526 00:57:51.749889 3275 log.go:172] (0xc00058c5a0) (3) Data frame handling\nI0526 00:57:51.749911 3275 log.go:172] (0xc00058c5a0) (3) Data frame sent\nI0526 00:57:51.750186 3275 log.go:172] (0xc00055c790) Data frame received for 3\nI0526 00:57:51.750223 3275 log.go:172] (0xc00058c5a0) (3) Data frame handling\nI0526 00:57:51.750245 3275 log.go:172] (0xc00058c5a0) (3) Data frame sent\nI0526 00:57:51.750271 3275 log.go:172] (0xc00055c790) Data frame received for 5\nI0526 00:57:51.750461 3275 log.go:172] (0xc0002741e0) (5) Data frame handling\nI0526 00:57:51.750474 3275 log.go:172] (0xc0002741e0) (5) Data frame 
sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32728/\nI0526 00:57:51.754592 3275 log.go:172] (0xc00055c790) Data frame received for 3\nI0526 00:57:51.754604 3275 log.go:172] (0xc00058c5a0) (3) Data frame handling\nI0526 00:57:51.754610 3275 log.go:172] (0xc00058c5a0) (3) Data frame sent\nI0526 00:57:51.755527 3275 log.go:172] (0xc00055c790) Data frame received for 3\nI0526 00:57:51.755570 3275 log.go:172] (0xc00058c5a0) (3) Data frame handling\nI0526 00:57:51.755598 3275 log.go:172] (0xc00058c5a0) (3) Data frame sent\nI0526 00:57:51.755642 3275 log.go:172] (0xc00055c790) Data frame received for 5\nI0526 00:57:51.755663 3275 log.go:172] (0xc0002741e0) (5) Data frame handling\nI0526 00:57:51.755675 3275 log.go:172] (0xc0002741e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32728/\nI0526 00:57:51.758867 3275 log.go:172] (0xc00055c790) Data frame received for 3\nI0526 00:57:51.758893 3275 log.go:172] (0xc00058c5a0) (3) Data frame handling\nI0526 00:57:51.758913 3275 log.go:172] (0xc00058c5a0) (3) Data frame sent\nI0526 00:57:51.759267 3275 log.go:172] (0xc00055c790) Data frame received for 3\nI0526 00:57:51.759289 3275 log.go:172] (0xc00058c5a0) (3) Data frame handling\nI0526 00:57:51.759307 3275 log.go:172] (0xc00055c790) Data frame received for 5\nI0526 00:57:51.759354 3275 log.go:172] (0xc0002741e0) (5) Data frame handling\nI0526 00:57:51.759372 3275 log.go:172] (0xc0002741e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32728/\nI0526 00:57:51.759392 3275 log.go:172] (0xc00058c5a0) (3) Data frame sent\nI0526 00:57:51.762966 3275 log.go:172] (0xc00055c790) Data frame received for 3\nI0526 00:57:51.762977 3275 log.go:172] (0xc00058c5a0) (3) Data frame handling\nI0526 00:57:51.762983 3275 log.go:172] (0xc00058c5a0) (3) Data frame sent\nI0526 00:57:51.764038 3275 log.go:172] (0xc00055c790) Data frame received for 5\nI0526 00:57:51.764053 3275 log.go:172] (0xc0002741e0) (5) Data frame handling\nI0526 00:57:51.764070 3275 log.go:172] (0xc0002741e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32728/\nI0526 00:57:51.764174 3275 log.go:172] (0xc00055c790) Data frame received for 3\nI0526 00:57:51.764212 3275 log.go:172] (0xc00058c5a0) (3) Data frame handling\nI0526 00:57:51.764246 3275 log.go:172] (0xc00058c5a0) (3) Data frame sent\nI0526 00:57:51.770534 3275 log.go:172] (0xc00055c790) Data frame received for 3\nI0526 00:57:51.770564 3275 log.go:172] (0xc00058c5a0) (3) Data frame handling\nI0526 00:57:51.770579 3275 log.go:172] (0xc00058c5a0) (3) Data frame sent\nI0526 00:57:51.771064 3275 log.go:172] (0xc00055c790) Data frame received for 5\nI0526 00:57:51.771078 3275 log.go:172] (0xc0002741e0) (5) Data frame handling\nI0526 00:57:51.771085 3275 log.go:172] (0xc0002741e0) (5) Data frame sent\nI0526 00:57:51.771091 3275 log.go:172] (0xc00055c790) Data frame received for 5\nI0526 00:57:51.771097 3275 log.go:172] (0xc0002741e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32728/\nI0526 00:57:51.771114 3275 log.go:172] (0xc0002741e0) (5) Data frame sent\nI0526 00:57:51.771124 3275 log.go:172] (0xc00055c790) Data frame received for 3\nI0526 00:57:51.771132 3275 log.go:172] (0xc00058c5a0) (3) Data frame handling\nI0526 00:57:51.771138 3275 log.go:172] (0xc00058c5a0) (3) Data frame sent\nI0526 00:57:51.775119 3275 log.go:172] (0xc00055c790) Data frame received for 3\nI0526 00:57:51.775158 3275 log.go:172] (0xc00058c5a0) (3) Data frame 
handling\nI0526 00:57:51.775198 3275 log.go:172] (0xc00058c5a0) (3) Data frame sent\nI0526 00:57:51.775495 3275 log.go:172] (0xc00055c790) Data frame received for 3\nI0526 00:57:51.775512 3275 log.go:172] (0xc00058c5a0) (3) Data frame handling\nI0526 00:57:51.775519 3275 log.go:172] (0xc00058c5a0) (3) Data frame sent\nI0526 00:57:51.775529 3275 log.go:172] (0xc00055c790) Data frame received for 5\nI0526 00:57:51.775534 3275 log.go:172] (0xc0002741e0) (5) Data frame handling\nI0526 00:57:51.775539 3275 log.go:172] (0xc0002741e0) (5) Data frame sent\nI0526 00:57:51.775544 3275 log.go:172] (0xc00055c790) Data frame received for 5\nI0526 00:57:51.775549 3275 log.go:172] (0xc0002741e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32728/\nI0526 00:57:51.775558 3275 log.go:172] (0xc0002741e0) (5) Data frame sent\nI0526 00:57:51.779273 3275 log.go:172] (0xc00055c790) Data frame received for 3\nI0526 00:57:51.779308 3275 log.go:172] (0xc00058c5a0) (3) Data frame handling\nI0526 00:57:51.779338 3275 log.go:172] (0xc00058c5a0) (3) Data frame sent\nI0526 00:57:51.780024 3275 log.go:172] (0xc00055c790) Data frame received for 3\nI0526 00:57:51.780087 3275 log.go:172] (0xc00058c5a0) (3) Data frame handling\nI0526 00:57:51.780107 3275 log.go:172] (0xc00058c5a0) (3) Data frame sent\nI0526 00:57:51.780125 3275 log.go:172] (0xc00055c790) Data frame received for 5\nI0526 00:57:51.780135 3275 log.go:172] (0xc0002741e0) (5) Data frame handling\nI0526 00:57:51.780145 3275 log.go:172] (0xc0002741e0) (5) Data frame sent\nI0526 00:57:51.780154 3275 log.go:172] (0xc00055c790) Data frame received for 5\nI0526 00:57:51.780162 3275 log.go:172] (0xc0002741e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32728/\nI0526 00:57:51.780176 3275 log.go:172] (0xc0002741e0) (5) Data frame sent\nI0526 00:57:51.784954 3275 log.go:172] (0xc00055c790) Data frame received for 3\nI0526 00:57:51.784969 3275 log.go:172] (0xc00058c5a0) (3) Data frame handling\nI0526 00:57:51.784982 3275 log.go:172] (0xc00058c5a0) (3) Data frame sent\nI0526 00:57:51.785884 3275 log.go:172] (0xc00055c790) Data frame received for 3\nI0526 00:57:51.785914 3275 log.go:172] (0xc00058c5a0) (3) Data frame handling\nI0526 00:57:51.785945 3275 log.go:172] (0xc00055c790) Data frame received for 5\nI0526 00:57:51.785964 3275 log.go:172] (0xc0002741e0) (5) Data frame handling\nI0526 00:57:51.787959 3275 log.go:172] (0xc00055c790) Data frame received for 1\nI0526 00:57:51.787996 3275 log.go:172] (0xc0004e4fa0) (1) Data frame handling\nI0526 00:57:51.788011 3275 log.go:172] (0xc0004e4fa0) (1) Data frame sent\nI0526 00:57:51.788024 3275 log.go:172] (0xc00055c790) (0xc0004e4fa0) Stream removed, broadcasting: 1\nI0526 00:57:51.788045 3275 log.go:172] (0xc00055c790) Go away received\nI0526 00:57:51.788402 3275 log.go:172] (0xc00055c790) (0xc0004e4fa0) Stream removed, broadcasting: 1\nI0526 00:57:51.788418 3275 log.go:172] (0xc00055c790) (0xc00058c5a0) Stream removed, broadcasting: 3\nI0526 00:57:51.788426 3275 log.go:172] (0xc00055c790) (0xc0002741e0) Stream removed, broadcasting: 5\n" May 26 00:57:51.795: INFO: stdout: 
"\naffinity-nodeport-mngxw\naffinity-nodeport-mngxw\naffinity-nodeport-mngxw\naffinity-nodeport-mngxw\naffinity-nodeport-mngxw\naffinity-nodeport-mngxw\naffinity-nodeport-mngxw\naffinity-nodeport-mngxw\naffinity-nodeport-mngxw\naffinity-nodeport-mngxw\naffinity-nodeport-mngxw\naffinity-nodeport-mngxw\naffinity-nodeport-mngxw\naffinity-nodeport-mngxw\naffinity-nodeport-mngxw\naffinity-nodeport-mngxw" May 26 00:57:51.795: INFO: Received response from host: May 26 00:57:51.795: INFO: Received response from host: affinity-nodeport-mngxw May 26 00:57:51.795: INFO: Received response from host: affinity-nodeport-mngxw May 26 00:57:51.795: INFO: Received response from host: affinity-nodeport-mngxw May 26 00:57:51.795: INFO: Received response from host: affinity-nodeport-mngxw May 26 00:57:51.795: INFO: Received response from host: affinity-nodeport-mngxw May 26 00:57:51.795: INFO: Received response from host: affinity-nodeport-mngxw May 26 00:57:51.795: INFO: Received response from host: affinity-nodeport-mngxw May 26 00:57:51.795: INFO: Received response from host: affinity-nodeport-mngxw May 26 00:57:51.795: INFO: Received response from host: affinity-nodeport-mngxw May 26 00:57:51.795: INFO: Received response from host: affinity-nodeport-mngxw May 26 00:57:51.795: INFO: Received response from host: affinity-nodeport-mngxw May 26 00:57:51.795: INFO: Received response from host: affinity-nodeport-mngxw May 26 00:57:51.795: INFO: Received response from host: affinity-nodeport-mngxw May 26 00:57:51.795: INFO: Received response from host: affinity-nodeport-mngxw May 26 00:57:51.795: INFO: Received response from host: affinity-nodeport-mngxw May 26 00:57:51.795: INFO: Received response from host: affinity-nodeport-mngxw May 26 00:57:51.795: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-7160, will wait for the garbage collector to delete the pods May 26 00:57:51.928: INFO: Deleting ReplicationController affinity-nodeport took: 6.973801ms May 26 00:57:52.328: INFO: Terminating ReplicationController affinity-nodeport pods took: 400.404294ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:58:05.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7160" for this suite. 
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
• [SLOW TEST:26.662 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should have session affinity work for NodePort service [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":247,"skipped":4090,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 26 00:58:05.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-upd-3c3a1b83-307d-4f2a-a8b8-e34c4cad2d3d
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-3c3a1b83-307d-4f2a-a8b8-e34c4cad2d3d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 26 00:58:11.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7332" for this suite.
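The few seconds the ConfigMap test spends "waiting to observe update in volume" is kubelet re-projecting the mounted ConfigMap on its sync loop. A hedged sketch of the mutation step the test performs, with placeholder object and key names standing in for the generated configmap-test-upd-... values:

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    cms := cs.CoreV1().ConfigMaps("configmap-7332")                           // namespace from the run above
    cm, err := cms.Get(context.TODO(), "configmap-test-upd", metav1.GetOptions{}) // placeholder name
    if err != nil {
        panic(err)
    }
    // Changing Data is all the test does; kubelet's sync loop then rewrites
    // the projected files in every pod that mounts this ConfigMap as a
    // volume. (Env-var references and subPath mounts do NOT pick up updates.)
    cm.Data = map[string]string{"data-1": "value-2"} // placeholder key/value
    if _, err := cms.Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
        panic(err)
    }
}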
• [SLOW TEST:6.455 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":248,"skipped":4103,"failed":0} SSSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:58:12.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-8686 May 26 00:58:16.250: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8686 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 26 00:58:16.484: INFO: stderr: "I0526 00:58:16.378343 3297 log.go:172] (0xc000bab970) (0xc000725b80) Create stream\nI0526 00:58:16.378400 3297 log.go:172] (0xc000bab970) (0xc000725b80) Stream added, broadcasting: 1\nI0526 00:58:16.380306 3297 log.go:172] (0xc000bab970) Reply frame received for 1\nI0526 00:58:16.380363 3297 log.go:172] (0xc000bab970) (0xc0006b2f00) Create stream\nI0526 00:58:16.380389 3297 log.go:172] (0xc000bab970) (0xc0006b2f00) Stream added, broadcasting: 3\nI0526 00:58:16.381618 3297 log.go:172] (0xc000bab970) Reply frame received for 3\nI0526 00:58:16.381650 3297 log.go:172] (0xc000bab970) (0xc0007345a0) Create stream\nI0526 00:58:16.381660 3297 log.go:172] (0xc000bab970) (0xc0007345a0) Stream added, broadcasting: 5\nI0526 00:58:16.382444 3297 log.go:172] (0xc000bab970) Reply frame received for 5\nI0526 00:58:16.473695 3297 log.go:172] (0xc000bab970) Data frame received for 5\nI0526 00:58:16.473729 3297 log.go:172] (0xc0007345a0) (5) Data frame handling\nI0526 00:58:16.473749 3297 log.go:172] (0xc0007345a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0526 00:58:16.475994 3297 log.go:172] (0xc000bab970) Data frame received for 3\nI0526 00:58:16.476018 3297 log.go:172] (0xc0006b2f00) (3) Data frame handling\nI0526 00:58:16.476044 3297 log.go:172] (0xc0006b2f00) (3) Data frame sent\nI0526 00:58:16.476490 3297 log.go:172] (0xc000bab970) Data frame received for 3\nI0526 00:58:16.476516 3297 log.go:172] (0xc0006b2f00) (3) Data frame handling\nI0526 00:58:16.476583 3297 log.go:172] (0xc000bab970) Data frame received for 5\nI0526 00:58:16.476598 3297 log.go:172] (0xc0007345a0) (5) Data frame handling\nI0526 00:58:16.478687 3297 log.go:172] (0xc000bab970) Data frame received for 
1\nI0526 00:58:16.478708 3297 log.go:172] (0xc000725b80) (1) Data frame handling\nI0526 00:58:16.478721 3297 log.go:172] (0xc000725b80) (1) Data frame sent\nI0526 00:58:16.478738 3297 log.go:172] (0xc000bab970) (0xc000725b80) Stream removed, broadcasting: 1\nI0526 00:58:16.478752 3297 log.go:172] (0xc000bab970) Go away received\nI0526 00:58:16.479403 3297 log.go:172] (0xc000bab970) (0xc000725b80) Stream removed, broadcasting: 1\nI0526 00:58:16.479428 3297 log.go:172] (0xc000bab970) (0xc0006b2f00) Stream removed, broadcasting: 3\nI0526 00:58:16.479440 3297 log.go:172] (0xc000bab970) (0xc0007345a0) Stream removed, broadcasting: 5\n" May 26 00:58:16.484: INFO: stdout: "iptables" May 26 00:58:16.484: INFO: proxyMode: iptables May 26 00:58:16.497: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 26 00:58:16.546: INFO: Pod kube-proxy-mode-detector still exists May 26 00:58:18.546: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 26 00:58:18.550: INFO: Pod kube-proxy-mode-detector still exists May 26 00:58:20.546: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 26 00:58:20.551: INFO: Pod kube-proxy-mode-detector still exists May 26 00:58:22.546: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 26 00:58:22.551: INFO: Pod kube-proxy-mode-detector still exists May 26 00:58:24.546: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 26 00:58:24.551: INFO: Pod kube-proxy-mode-detector still exists May 26 00:58:26.547: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 26 00:58:26.551: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-8686 STEP: creating replication controller affinity-clusterip-timeout in namespace services-8686 I0526 00:58:26.691728 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-8686, replica count: 3 I0526 00:58:29.742141 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0526 00:58:32.742430 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 26 00:58:32.749: INFO: Creating new exec pod May 26 00:58:37.771: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8686 execpod-affinityb2csd -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' May 26 00:58:37.952: INFO: stderr: "I0526 00:58:37.887404 3318 log.go:172] (0xc000b81760) (0xc000842e60) Create stream\nI0526 00:58:37.887450 3318 log.go:172] (0xc000b81760) (0xc000842e60) Stream added, broadcasting: 1\nI0526 00:58:37.889396 3318 log.go:172] (0xc000b81760) Reply frame received for 1\nI0526 00:58:37.889419 3318 log.go:172] (0xc000b81760) (0xc000848dc0) Create stream\nI0526 00:58:37.889425 3318 log.go:172] (0xc000b81760) (0xc000848dc0) Stream added, broadcasting: 3\nI0526 00:58:37.890058 3318 log.go:172] (0xc000b81760) Reply frame received for 3\nI0526 00:58:37.890102 3318 log.go:172] (0xc000b81760) (0xc000843400) Create stream\nI0526 00:58:37.890123 3318 log.go:172] (0xc000b81760) (0xc000843400) Stream added, broadcasting: 5\nI0526 00:58:37.890835 3318 log.go:172] (0xc000b81760) Reply frame received for 5\nI0526 00:58:37.946492 3318 log.go:172] (0xc000b81760) Data frame received for 3\nI0526 
00:58:37.946524 3318 log.go:172] (0xc000848dc0) (3) Data frame handling\nI0526 00:58:37.946595 3318 log.go:172] (0xc000b81760) Data frame received for 5\nI0526 00:58:37.946614 3318 log.go:172] (0xc000843400) (5) Data frame handling\nI0526 00:58:37.946632 3318 log.go:172] (0xc000843400) (5) Data frame sent\nI0526 00:58:37.946642 3318 log.go:172] (0xc000b81760) Data frame received for 5\nI0526 00:58:37.946647 3318 log.go:172] (0xc000843400) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0526 00:58:37.947728 3318 log.go:172] (0xc000b81760) Data frame received for 1\nI0526 00:58:37.947745 3318 log.go:172] (0xc000842e60) (1) Data frame handling\nI0526 00:58:37.947769 3318 log.go:172] (0xc000842e60) (1) Data frame sent\nI0526 00:58:37.947861 3318 log.go:172] (0xc000b81760) (0xc000842e60) Stream removed, broadcasting: 1\nI0526 00:58:37.947890 3318 log.go:172] (0xc000b81760) Go away received\nI0526 00:58:37.948148 3318 log.go:172] (0xc000b81760) (0xc000842e60) Stream removed, broadcasting: 1\nI0526 00:58:37.948165 3318 log.go:172] (0xc000b81760) (0xc000848dc0) Stream removed, broadcasting: 3\nI0526 00:58:37.948183 3318 log.go:172] (0xc000b81760) (0xc000843400) Stream removed, broadcasting: 5\n" May 26 00:58:37.952: INFO: stdout: "" May 26 00:58:37.953: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8686 execpod-affinityb2csd -- /bin/sh -x -c nc -zv -t -w 2 10.104.82.97 80' May 26 00:58:38.129: INFO: stderr: "I0526 00:58:38.067873 3338 log.go:172] (0xc0009ba790) (0xc000240140) Create stream\nI0526 00:58:38.067921 3338 log.go:172] (0xc0009ba790) (0xc000240140) Stream added, broadcasting: 1\nI0526 00:58:38.070678 3338 log.go:172] (0xc0009ba790) Reply frame received for 1\nI0526 00:58:38.070730 3338 log.go:172] (0xc0009ba790) (0xc0004c01e0) Create stream\nI0526 00:58:38.070744 3338 log.go:172] (0xc0009ba790) (0xc0004c01e0) Stream added, broadcasting: 3\nI0526 00:58:38.071592 3338 log.go:172] (0xc0009ba790) Reply frame received for 3\nI0526 00:58:38.071645 3338 log.go:172] (0xc0009ba790) (0xc000240820) Create stream\nI0526 00:58:38.071658 3338 log.go:172] (0xc0009ba790) (0xc000240820) Stream added, broadcasting: 5\nI0526 00:58:38.072636 3338 log.go:172] (0xc0009ba790) Reply frame received for 5\nI0526 00:58:38.121275 3338 log.go:172] (0xc0009ba790) Data frame received for 3\nI0526 00:58:38.121306 3338 log.go:172] (0xc0004c01e0) (3) Data frame handling\nI0526 00:58:38.121569 3338 log.go:172] (0xc0009ba790) Data frame received for 5\nI0526 00:58:38.121598 3338 log.go:172] (0xc000240820) (5) Data frame handling\nI0526 00:58:38.121627 3338 log.go:172] (0xc000240820) (5) Data frame sent\nI0526 00:58:38.121644 3338 log.go:172] (0xc0009ba790) Data frame received for 5\nI0526 00:58:38.121658 3338 log.go:172] (0xc000240820) (5) Data frame handling\n+ nc -zv -t -w 2 10.104.82.97 80\nConnection to 10.104.82.97 80 port [tcp/http] succeeded!\nI0526 00:58:38.123249 3338 log.go:172] (0xc0009ba790) Data frame received for 1\nI0526 00:58:38.123273 3338 log.go:172] (0xc000240140) (1) Data frame handling\nI0526 00:58:38.123292 3338 log.go:172] (0xc000240140) (1) Data frame sent\nI0526 00:58:38.123310 3338 log.go:172] (0xc0009ba790) (0xc000240140) Stream removed, broadcasting: 1\nI0526 00:58:38.123361 3338 log.go:172] (0xc0009ba790) Go away received\nI0526 00:58:38.123584 3338 log.go:172] (0xc0009ba790) (0xc000240140) Stream removed, 
broadcasting: 1\nI0526 00:58:38.123604 3338 log.go:172] (0xc0009ba790) (0xc0004c01e0) Stream removed, broadcasting: 3\nI0526 00:58:38.123615 3338 log.go:172] (0xc0009ba790) (0xc000240820) Stream removed, broadcasting: 5\n" May 26 00:58:38.129: INFO: stdout: "" May 26 00:58:38.129: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8686 execpod-affinityb2csd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.104.82.97:80/ ; done' May 26 00:58:38.483: INFO: stderr: "I0526 00:58:38.252804 3360 log.go:172] (0xc00098e8f0) (0xc000aae320) Create stream\nI0526 00:58:38.252856 3360 log.go:172] (0xc00098e8f0) (0xc000aae320) Stream added, broadcasting: 1\nI0526 00:58:38.257574 3360 log.go:172] (0xc00098e8f0) Reply frame received for 1\nI0526 00:58:38.257628 3360 log.go:172] (0xc00098e8f0) (0xc0006ac5a0) Create stream\nI0526 00:58:38.257642 3360 log.go:172] (0xc00098e8f0) (0xc0006ac5a0) Stream added, broadcasting: 3\nI0526 00:58:38.258666 3360 log.go:172] (0xc00098e8f0) Reply frame received for 3\nI0526 00:58:38.258700 3360 log.go:172] (0xc00098e8f0) (0xc0006ace60) Create stream\nI0526 00:58:38.258713 3360 log.go:172] (0xc00098e8f0) (0xc0006ace60) Stream added, broadcasting: 5\nI0526 00:58:38.259737 3360 log.go:172] (0xc00098e8f0) Reply frame received for 5\nI0526 00:58:38.325490 3360 log.go:172] (0xc00098e8f0) Data frame received for 5\nI0526 00:58:38.325613 3360 log.go:172] (0xc0006ace60) (5) Data frame handling\nI0526 00:58:38.325632 3360 log.go:172] (0xc0006ace60) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.82.97:80/\nI0526 00:58:38.325654 3360 log.go:172] (0xc00098e8f0) Data frame received for 3\nI0526 00:58:38.325665 3360 log.go:172] (0xc0006ac5a0) (3) Data frame handling\nI0526 00:58:38.325676 3360 log.go:172] (0xc0006ac5a0) (3) Data frame sent\nI0526 00:58:38.393543 3360 log.go:172] (0xc00098e8f0) Data frame received for 3\nI0526 00:58:38.393564 3360 log.go:172] (0xc0006ac5a0) (3) Data frame handling\nI0526 00:58:38.393577 3360 log.go:172] (0xc0006ac5a0) (3) Data frame sent\nI0526 00:58:38.394369 3360 log.go:172] (0xc00098e8f0) Data frame received for 5\nI0526 00:58:38.394388 3360 log.go:172] (0xc0006ace60) (5) Data frame handling\nI0526 00:58:38.394402 3360 log.go:172] (0xc0006ace60) (5) Data frame sent\n+ echo\n+ curl -q -sI0526 00:58:38.394422 3360 log.go:172] (0xc00098e8f0) Data frame received for 3\nI0526 00:58:38.394443 3360 log.go:172] (0xc0006ac5a0) (3) Data frame handling\nI0526 00:58:38.394461 3360 log.go:172] (0xc0006ac5a0) (3) Data frame sent\nI0526 00:58:38.394476 3360 log.go:172] (0xc00098e8f0) Data frame received for 5\nI0526 00:58:38.394488 3360 log.go:172] (0xc0006ace60) (5) Data frame handling\nI0526 00:58:38.394497 3360 log.go:172] (0xc0006ace60) (5) Data frame sent\n --connect-timeout 2 http://10.104.82.97:80/\nI0526 00:58:38.401396 3360 log.go:172] (0xc00098e8f0) Data frame received for 3\nI0526 00:58:38.401410 3360 log.go:172] (0xc0006ac5a0) (3) Data frame handling\nI0526 00:58:38.401423 3360 log.go:172] (0xc0006ac5a0) (3) Data frame sent\nI0526 00:58:38.402145 3360 log.go:172] (0xc00098e8f0) Data frame received for 5\nI0526 00:58:38.402163 3360 log.go:172] (0xc0006ace60) (5) Data frame handling\nI0526 00:58:38.402173 3360 log.go:172] (0xc0006ace60) (5) Data frame sent\nI0526 00:58:38.402181 3360 log.go:172] (0xc00098e8f0) Data frame received for 5\nI0526 00:58:38.402187 3360 log.go:172] (0xc0006ace60) (5) Data 
frame handling\nI0526 00:58:38.402194 3360 log.go:172] (0xc00098e8f0) Data frame received for 3\nI0526 00:58:38.402198 3360 log.go:172] (0xc0006ac5a0) (3) Data frame handling\nI0526 00:58:38.402203 3360 log.go:172] (0xc0006ac5a0) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.82.97:80/\nI0526 00:58:38.402214 3360 log.go:172] (0xc0006ace60) (5) Data frame sent\nI0526 00:58:38.407842 3360 log.go:172] (0xc00098e8f0) Data frame received for 3\nI0526 00:58:38.407857 3360 log.go:172] (0xc0006ac5a0) (3) Data frame handling\nI0526 00:58:38.407869 3360 log.go:172] (0xc0006ac5a0) (3) Data frame sent\nI0526 00:58:38.408600 3360 log.go:172] (0xc00098e8f0) Data frame received for 3\nI0526 00:58:38.408617 3360 log.go:172] (0xc0006ac5a0) (3) Data frame handling\nI0526 00:58:38.408640 3360 log.go:172] (0xc0006ac5a0) (3) Data frame sent\nI0526 00:58:38.408653 3360 log.go:172] (0xc00098e8f0) Data frame received for 5\nI0526 00:58:38.408662 3360 log.go:172] (0xc0006ace60) (5) Data frame handling\nI0526 00:58:38.408675 3360 log.go:172] (0xc0006ace60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.82.97:80/\nI0526 00:58:38.416591 3360 log.go:172] (0xc00098e8f0) Data frame received for 3\nI0526 00:58:38.416616 3360 log.go:172] (0xc0006ac5a0) (3) Data frame handling\nI0526 00:58:38.416651 3360 log.go:172] (0xc0006ac5a0) (3) Data frame sent\nI0526 00:58:38.417041 3360 log.go:172] (0xc00098e8f0) Data frame received for 3\nI0526 00:58:38.417061 3360 log.go:172] (0xc0006ac5a0) (3) Data frame handling\nI0526 00:58:38.417071 3360 log.go:172] (0xc0006ac5a0) (3) Data frame sent\nI0526 00:58:38.417082 3360 log.go:172] (0xc00098e8f0) Data frame received for 5\nI0526 00:58:38.417094 3360 log.go:172] (0xc0006ace60) (5) Data frame handling\nI0526 00:58:38.417103 3360 log.go:172] (0xc0006ace60) (5) Data frame sent\nI0526 00:58:38.417205 3360 log.go:172] (0xc00098e8f0) Data frame received for 5\nI0526 00:58:38.417216 3360 log.go:172] (0xc0006ace60) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.82.97:80/I0526 00:58:38.417227 3360 log.go:172] (0xc0006ace60) (5) Data frame sent\nI0526 00:58:38.417591 3360 log.go:172] (0xc00098e8f0) Data frame received for 5\nI0526 00:58:38.417612 3360 log.go:172] (0xc0006ace60) (5) Data frame handling\nI0526 00:58:38.417805 3360 log.go:172] (0xc0006ace60) (5) Data frame sent\n\nI0526 00:58:38.420440 3360 log.go:172] (0xc00098e8f0) Data frame received for 3\nI0526 00:58:38.420465 3360 log.go:172] (0xc0006ac5a0) (3) Data frame handling\nI0526 00:58:38.420483 3360 log.go:172] (0xc0006ac5a0) (3) Data frame sent\nI0526 00:58:38.420853 3360 log.go:172] (0xc00098e8f0) Data frame received for 5\nI0526 00:58:38.420870 3360 log.go:172] (0xc0006ace60) (5) Data frame handling\nI0526 00:58:38.420886 3360 log.go:172] (0xc0006ace60) (5) Data frame sent\nI0526 00:58:38.420896 3360 log.go:172] (0xc00098e8f0) Data frame received for 5\nI0526 00:58:38.420905 3360 log.go:172] (0xc0006ace60) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.82.97:80/\nI0526 00:58:38.420919 3360 log.go:172] (0xc0006ace60) (5) Data frame sent\nI0526 00:58:38.420983 3360 log.go:172] (0xc00098e8f0) Data frame received for 3\nI0526 00:58:38.421003 3360 log.go:172] (0xc0006ac5a0) (3) Data frame handling\nI0526 00:58:38.421026 3360 log.go:172] (0xc0006ac5a0) (3) Data frame sent\nI0526 00:58:38.425492 3360 log.go:172] (0xc00098e8f0) Data frame received for 3\nI0526 00:58:38.425508 3360 log.go:172] (0xc0006ac5a0) (3) Data 
frame handling\nI0526 00:58:38.425523 3360 log.go:172] (0xc0006ac5a0) (3) Data frame sent\nI0526 00:58:38.426108 3360 log.go:172] (0xc00098e8f0) Data frame received for 5\nI0526 00:58:38.426129 3360 log.go:172] (0xc0006ace60) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.82.97:80/\nI0526 00:58:38.426148 3360 log.go:172] (0xc00098e8f0) Data frame received for 3\nI0526 00:58:38.426172 3360 log.go:172] (0xc0006ac5a0) (3) Data frame handling\nI0526 00:58:38.426184 3360 log.go:172] (0xc0006ac5a0) (3) Data frame sent\nI0526 00:58:38.426200 3360 log.go:172] (0xc0006ace60) (5) Data frame sent\nI0526 00:58:38.430330 3360 log.go:172] (0xc00098e8f0) Data frame received for 3\nI0526 00:58:38.430384 3360 log.go:172] (0xc0006ac5a0) (3) Data frame handling\nI0526 00:58:38.430509 3360 log.go:172] (0xc0006ac5a0) (3) Data frame sent\nI0526 00:58:38.431018 3360 log.go:172] (0xc00098e8f0) Data frame received for 3\nI0526 00:58:38.431056 3360 log.go:172] (0xc0006ac5a0) (3) Data frame handling\nI0526 00:58:38.431075 3360 log.go:172] (0xc0006ac5a0) (3) Data frame sent\nI0526 00:58:38.431114 3360 log.go:172] (0xc00098e8f0) Data frame received for 5\nI0526 00:58:38.431141 3360 log.go:172] (0xc0006ace60) (5) Data frame handling\nI0526 00:58:38.431167 3360 log.go:172] (0xc0006ace60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.82.97:80/\nI0526 00:58:38.435704 3360 log.go:172] (0xc00098e8f0) Data frame received for 3\nI0526 00:58:38.435737 3360 log.go:172] (0xc0006ac5a0) (3) Data frame handling\nI0526 00:58:38.435772 3360 log.go:172] (0xc0006ac5a0) (3) Data frame sent\nI0526 00:58:38.436057 3360 log.go:172] (0xc00098e8f0) Data frame received for 3\nI0526 00:58:38.436083 3360 log.go:172] (0xc0006ac5a0) (3) Data frame handling\nI0526 00:58:38.436103 3360 log.go:172] (0xc0006ac5a0) (3) Data frame sent\nI0526 00:58:38.436131 3360 log.go:172] (0xc00098e8f0) Data frame received for 5\nI0526 00:58:38.436145 3360 log.go:172] (0xc0006ace60) (5) Data frame handling\nI0526 00:58:38.436156 3360 log.go:172] (0xc0006ace60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.82.97:80/\nI0526 00:58:38.439542 3360 log.go:172] (0xc00098e8f0) Data frame received for 3\nI0526 00:58:38.439555 3360 log.go:172] (0xc0006ac5a0) (3) Data frame handling\nI0526 00:58:38.439562 3360 log.go:172] (0xc0006ac5a0) (3) Data frame sent\nI0526 00:58:38.439929 3360 log.go:172] (0xc00098e8f0) Data frame received for 5\nI0526 00:58:38.439947 3360 log.go:172] (0xc0006ace60) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.82.97:80/\nI0526 00:58:38.439969 3360 log.go:172] (0xc00098e8f0) Data frame received for 3\nI0526 00:58:38.440013 3360 log.go:172] (0xc0006ac5a0) (3) Data frame handling\nI0526 00:58:38.440055 3360 log.go:172] (0xc0006ac5a0) (3) Data frame sent\nI0526 00:58:38.440079 3360 log.go:172] (0xc0006ace60) (5) Data frame sent\nI0526 00:58:38.443169 3360 log.go:172] (0xc00098e8f0) Data frame received for 3\nI0526 00:58:38.443184 3360 log.go:172] (0xc0006ac5a0) (3) Data frame handling\nI0526 00:58:38.443191 3360 log.go:172] (0xc0006ac5a0) (3) Data frame sent\nI0526 00:58:38.443641 3360 log.go:172] (0xc00098e8f0) Data frame received for 3\nI0526 00:58:38.443657 3360 log.go:172] (0xc0006ac5a0) (3) Data frame handling\nI0526 00:58:38.443676 3360 log.go:172] (0xc00098e8f0) Data frame received for 5\nI0526 00:58:38.443705 3360 log.go:172] (0xc0006ace60) (5) Data frame handling\nI0526 00:58:38.443717 3360 log.go:172] (0xc0006ace60) (5) 
Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.82.97:80/\nI0526 00:58:38.443741 3360 log.go:172] (0xc0006ac5a0) (3) Data frame sent\nI0526 00:58:38.447818 3360 log.go:172] (0xc00098e8f0) Data frame received for 3\nI0526 00:58:38.447834 3360 log.go:172] (0xc0006ac5a0) (3) Data frame handling\nI0526 00:58:38.447842 3360 log.go:172] (0xc0006ac5a0) (3) Data frame sent\nI0526 00:58:38.448328 3360 log.go:172] (0xc00098e8f0) Data frame received for 5\nI0526 00:58:38.448350 3360 log.go:172] (0xc0006ace60) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.82.97:80/\nI0526 00:58:38.448373 3360 log.go:172] (0xc00098e8f0) Data frame received for 3\nI0526 00:58:38.448413 3360 log.go:172] (0xc0006ac5a0) (3) Data frame handling\nI0526 00:58:38.448442 3360 log.go:172] (0xc0006ac5a0) (3) Data frame sent\nI0526 00:58:38.448485 3360 log.go:172] (0xc0006ace60) (5) Data frame sent\nI0526 00:58:38.452093 3360 log.go:172] (0xc00098e8f0) Data frame received for 3\nI0526 00:58:38.452125 3360 log.go:172] (0xc0006ac5a0) (3) Data frame handling\nI0526 00:58:38.452158 3360 log.go:172] (0xc0006ac5a0) (3) Data frame sent\nI0526 00:58:38.452470 3360 log.go:172] (0xc00098e8f0) Data frame received for 3\nI0526 00:58:38.452500 3360 log.go:172] (0xc0006ac5a0) (3) Data frame handling\nI0526 00:58:38.452514 3360 log.go:172] (0xc0006ac5a0) (3) Data frame sent\nI0526 00:58:38.452531 3360 log.go:172] (0xc00098e8f0) Data frame received for 5\nI0526 00:58:38.452543 3360 log.go:172] (0xc0006ace60) (5) Data frame handling\nI0526 00:58:38.452553 3360 log.go:172] (0xc0006ace60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.82.97:80/\nI0526 00:58:38.456894 3360 log.go:172] (0xc00098e8f0) Data frame received for 3\nI0526 00:58:38.456908 3360 log.go:172] (0xc0006ac5a0) (3) Data frame handling\nI0526 00:58:38.456918 3360 log.go:172] (0xc0006ac5a0) (3) Data frame sent\nI0526 00:58:38.457695 3360 log.go:172] (0xc00098e8f0) Data frame received for 5\nI0526 00:58:38.457734 3360 log.go:172] (0xc0006ace60) (5) Data frame handling\nI0526 00:58:38.457783 3360 log.go:172] (0xc0006ace60) (5) Data frame sent\nI0526 00:58:38.457808 3360 log.go:172] (0xc00098e8f0) Data frame received for 5\n+ echo\n+ curl -q -s --connect-timeoutI0526 00:58:38.457825 3360 log.go:172] (0xc0006ace60) (5) Data frame handling\nI0526 00:58:38.457858 3360 log.go:172] (0xc0006ace60) (5) Data frame sent\nI0526 00:58:38.457880 3360 log.go:172] (0xc00098e8f0) Data frame received for 3\n 2 http://10.104.82.97:80/\nI0526 00:58:38.457898 3360 log.go:172] (0xc0006ac5a0) (3) Data frame handling\nI0526 00:58:38.457924 3360 log.go:172] (0xc0006ac5a0) (3) Data frame sent\nI0526 00:58:38.462089 3360 log.go:172] (0xc00098e8f0) Data frame received for 3\nI0526 00:58:38.462109 3360 log.go:172] (0xc0006ac5a0) (3) Data frame handling\nI0526 00:58:38.462126 3360 log.go:172] (0xc0006ac5a0) (3) Data frame sent\nI0526 00:58:38.462560 3360 log.go:172] (0xc00098e8f0) Data frame received for 3\nI0526 00:58:38.462586 3360 log.go:172] (0xc00098e8f0) Data frame received for 5\nI0526 00:58:38.462617 3360 log.go:172] (0xc0006ace60) (5) Data frame handling\nI0526 00:58:38.462637 3360 log.go:172] (0xc0006ace60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.82.97:80/\nI0526 00:58:38.462675 3360 log.go:172] (0xc0006ac5a0) (3) Data frame handling\nI0526 00:58:38.462719 3360 log.go:172] (0xc0006ac5a0) (3) Data frame sent\nI0526 00:58:38.467964 3360 log.go:172] (0xc00098e8f0) Data frame received for 
3\nI0526 00:58:38.467983 3360 log.go:172] (0xc0006ac5a0) (3) Data frame handling\nI0526 00:58:38.468001 3360 log.go:172] (0xc0006ac5a0) (3) Data frame sent\nI0526 00:58:38.468514 3360 log.go:172] (0xc00098e8f0) Data frame received for 5\nI0526 00:58:38.468532 3360 log.go:172] (0xc0006ace60) (5) Data frame handling\nI0526 00:58:38.468547 3360 log.go:172] (0xc0006ace60) (5) Data frame sent\nI0526 00:58:38.468555 3360 log.go:172] (0xc00098e8f0) Data frame received for 5\nI0526 00:58:38.468561 3360 log.go:172] (0xc0006ace60) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.82.97:80/\nI0526 00:58:38.468575 3360 log.go:172] (0xc0006ace60) (5) Data frame sent\nI0526 00:58:38.468632 3360 log.go:172] (0xc00098e8f0) Data frame received for 3\nI0526 00:58:38.468647 3360 log.go:172] (0xc0006ac5a0) (3) Data frame handling\nI0526 00:58:38.468665 3360 log.go:172] (0xc0006ac5a0) (3) Data frame sent\nI0526 00:58:38.473035 3360 log.go:172] (0xc00098e8f0) Data frame received for 3\nI0526 00:58:38.473055 3360 log.go:172] (0xc0006ac5a0) (3) Data frame handling\nI0526 00:58:38.473074 3360 log.go:172] (0xc0006ac5a0) (3) Data frame sent\nI0526 00:58:38.474128 3360 log.go:172] (0xc00098e8f0) Data frame received for 3\nI0526 00:58:38.474151 3360 log.go:172] (0xc0006ac5a0) (3) Data frame handling\nI0526 00:58:38.474180 3360 log.go:172] (0xc00098e8f0) Data frame received for 5\nI0526 00:58:38.474220 3360 log.go:172] (0xc0006ace60) (5) Data frame handling\nI0526 00:58:38.475716 3360 log.go:172] (0xc00098e8f0) Data frame received for 1\nI0526 00:58:38.475819 3360 log.go:172] (0xc000aae320) (1) Data frame handling\nI0526 00:58:38.475925 3360 log.go:172] (0xc000aae320) (1) Data frame sent\nI0526 00:58:38.476214 3360 log.go:172] (0xc00098e8f0) (0xc000aae320) Stream removed, broadcasting: 1\nI0526 00:58:38.476248 3360 log.go:172] (0xc00098e8f0) Go away received\nI0526 00:58:38.476643 3360 log.go:172] (0xc00098e8f0) (0xc000aae320) Stream removed, broadcasting: 1\nI0526 00:58:38.476671 3360 log.go:172] (0xc00098e8f0) (0xc0006ac5a0) Stream removed, broadcasting: 3\nI0526 00:58:38.476685 3360 log.go:172] (0xc00098e8f0) (0xc0006ace60) Stream removed, broadcasting: 5\n" May 26 00:58:38.483: INFO: stdout: "\naffinity-clusterip-timeout-9sw7v\naffinity-clusterip-timeout-9sw7v\naffinity-clusterip-timeout-9sw7v\naffinity-clusterip-timeout-9sw7v\naffinity-clusterip-timeout-9sw7v\naffinity-clusterip-timeout-9sw7v\naffinity-clusterip-timeout-9sw7v\naffinity-clusterip-timeout-9sw7v\naffinity-clusterip-timeout-9sw7v\naffinity-clusterip-timeout-9sw7v\naffinity-clusterip-timeout-9sw7v\naffinity-clusterip-timeout-9sw7v\naffinity-clusterip-timeout-9sw7v\naffinity-clusterip-timeout-9sw7v\naffinity-clusterip-timeout-9sw7v\naffinity-clusterip-timeout-9sw7v" May 26 00:58:38.483: INFO: Received response from host: May 26 00:58:38.483: INFO: Received response from host: affinity-clusterip-timeout-9sw7v May 26 00:58:38.483: INFO: Received response from host: affinity-clusterip-timeout-9sw7v May 26 00:58:38.483: INFO: Received response from host: affinity-clusterip-timeout-9sw7v May 26 00:58:38.483: INFO: Received response from host: affinity-clusterip-timeout-9sw7v May 26 00:58:38.483: INFO: Received response from host: affinity-clusterip-timeout-9sw7v May 26 00:58:38.483: INFO: Received response from host: affinity-clusterip-timeout-9sw7v May 26 00:58:38.483: INFO: Received response from host: affinity-clusterip-timeout-9sw7v May 26 00:58:38.483: INFO: Received response from host: affinity-clusterip-timeout-9sw7v May 
26 00:58:38.483: INFO: Received response from host: affinity-clusterip-timeout-9sw7v May 26 00:58:38.483: INFO: Received response from host: affinity-clusterip-timeout-9sw7v May 26 00:58:38.483: INFO: Received response from host: affinity-clusterip-timeout-9sw7v May 26 00:58:38.483: INFO: Received response from host: affinity-clusterip-timeout-9sw7v May 26 00:58:38.483: INFO: Received response from host: affinity-clusterip-timeout-9sw7v May 26 00:58:38.483: INFO: Received response from host: affinity-clusterip-timeout-9sw7v May 26 00:58:38.483: INFO: Received response from host: affinity-clusterip-timeout-9sw7v May 26 00:58:38.483: INFO: Received response from host: affinity-clusterip-timeout-9sw7v May 26 00:58:38.484: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8686 execpod-affinityb2csd -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.104.82.97:80/' May 26 00:58:38.685: INFO: stderr: "I0526 00:58:38.617555 3382 log.go:172] (0xc000c40000) (0xc00056cdc0) Create stream\nI0526 00:58:38.617616 3382 log.go:172] (0xc000c40000) (0xc00056cdc0) Stream added, broadcasting: 1\nI0526 00:58:38.620322 3382 log.go:172] (0xc000c40000) Reply frame received for 1\nI0526 00:58:38.620371 3382 log.go:172] (0xc000c40000) (0xc0000f3b80) Create stream\nI0526 00:58:38.620386 3382 log.go:172] (0xc000c40000) (0xc0000f3b80) Stream added, broadcasting: 3\nI0526 00:58:38.621650 3382 log.go:172] (0xc000c40000) Reply frame received for 3\nI0526 00:58:38.621703 3382 log.go:172] (0xc000c40000) (0xc00068e280) Create stream\nI0526 00:58:38.621716 3382 log.go:172] (0xc000c40000) (0xc00068e280) Stream added, broadcasting: 5\nI0526 00:58:38.622717 3382 log.go:172] (0xc000c40000) Reply frame received for 5\nI0526 00:58:38.673245 3382 log.go:172] (0xc000c40000) Data frame received for 5\nI0526 00:58:38.673271 3382 log.go:172] (0xc00068e280) (5) Data frame handling\nI0526 00:58:38.673284 3382 log.go:172] (0xc00068e280) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.104.82.97:80/\nI0526 00:58:38.677248 3382 log.go:172] (0xc000c40000) Data frame received for 3\nI0526 00:58:38.677280 3382 log.go:172] (0xc0000f3b80) (3) Data frame handling\nI0526 00:58:38.677303 3382 log.go:172] (0xc0000f3b80) (3) Data frame sent\nI0526 00:58:38.677715 3382 log.go:172] (0xc000c40000) Data frame received for 3\nI0526 00:58:38.677792 3382 log.go:172] (0xc0000f3b80) (3) Data frame handling\nI0526 00:58:38.677854 3382 log.go:172] (0xc000c40000) Data frame received for 5\nI0526 00:58:38.677882 3382 log.go:172] (0xc00068e280) (5) Data frame handling\nI0526 00:58:38.679507 3382 log.go:172] (0xc000c40000) Data frame received for 1\nI0526 00:58:38.679545 3382 log.go:172] (0xc00056cdc0) (1) Data frame handling\nI0526 00:58:38.679560 3382 log.go:172] (0xc00056cdc0) (1) Data frame sent\nI0526 00:58:38.679574 3382 log.go:172] (0xc000c40000) (0xc00056cdc0) Stream removed, broadcasting: 1\nI0526 00:58:38.679593 3382 log.go:172] (0xc000c40000) Go away received\nI0526 00:58:38.680133 3382 log.go:172] (0xc000c40000) (0xc00056cdc0) Stream removed, broadcasting: 1\nI0526 00:58:38.680159 3382 log.go:172] (0xc000c40000) (0xc0000f3b80) Stream removed, broadcasting: 3\nI0526 00:58:38.680174 3382 log.go:172] (0xc000c40000) (0xc00068e280) Stream removed, broadcasting: 5\n" May 26 00:58:38.685: INFO: stdout: "affinity-clusterip-timeout-9sw7v" May 26 00:58:53.686: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config exec --namespace=services-8686 execpod-affinityb2csd -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.104.82.97:80/' May 26 00:58:53.915: INFO: stderr: "I0526 00:58:53.825056 3405 log.go:172] (0xc00003a420) (0xc0005490e0) Create stream\nI0526 00:58:53.825325 3405 log.go:172] (0xc00003a420) (0xc0005490e0) Stream added, broadcasting: 1\nI0526 00:58:53.827589 3405 log.go:172] (0xc00003a420) Reply frame received for 1\nI0526 00:58:53.827617 3405 log.go:172] (0xc00003a420) (0xc0000ddcc0) Create stream\nI0526 00:58:53.827626 3405 log.go:172] (0xc00003a420) (0xc0000ddcc0) Stream added, broadcasting: 3\nI0526 00:58:53.828549 3405 log.go:172] (0xc00003a420) Reply frame received for 3\nI0526 00:58:53.828573 3405 log.go:172] (0xc00003a420) (0xc000139180) Create stream\nI0526 00:58:53.828582 3405 log.go:172] (0xc00003a420) (0xc000139180) Stream added, broadcasting: 5\nI0526 00:58:53.829569 3405 log.go:172] (0xc00003a420) Reply frame received for 5\nI0526 00:58:53.905681 3405 log.go:172] (0xc00003a420) Data frame received for 5\nI0526 00:58:53.905714 3405 log.go:172] (0xc000139180) (5) Data frame handling\nI0526 00:58:53.905730 3405 log.go:172] (0xc000139180) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.104.82.97:80/\nI0526 00:58:53.908200 3405 log.go:172] (0xc00003a420) Data frame received for 3\nI0526 00:58:53.908218 3405 log.go:172] (0xc0000ddcc0) (3) Data frame handling\nI0526 00:58:53.908233 3405 log.go:172] (0xc0000ddcc0) (3) Data frame sent\nI0526 00:58:53.908744 3405 log.go:172] (0xc00003a420) Data frame received for 5\nI0526 00:58:53.908763 3405 log.go:172] (0xc000139180) (5) Data frame handling\nI0526 00:58:53.908795 3405 log.go:172] (0xc00003a420) Data frame received for 3\nI0526 00:58:53.908819 3405 log.go:172] (0xc0000ddcc0) (3) Data frame handling\nI0526 00:58:53.909919 3405 log.go:172] (0xc00003a420) Data frame received for 1\nI0526 00:58:53.909942 3405 log.go:172] (0xc0005490e0) (1) Data frame handling\nI0526 00:58:53.909969 3405 log.go:172] (0xc0005490e0) (1) Data frame sent\nI0526 00:58:53.909992 3405 log.go:172] (0xc00003a420) (0xc0005490e0) Stream removed, broadcasting: 1\nI0526 00:58:53.910012 3405 log.go:172] (0xc00003a420) Go away received\nI0526 00:58:53.910269 3405 log.go:172] (0xc00003a420) (0xc0005490e0) Stream removed, broadcasting: 1\nI0526 00:58:53.910285 3405 log.go:172] (0xc00003a420) (0xc0000ddcc0) Stream removed, broadcasting: 3\nI0526 00:58:53.910291 3405 log.go:172] (0xc00003a420) (0xc000139180) Stream removed, broadcasting: 5\n" May 26 00:58:53.915: INFO: stdout: "affinity-clusterip-timeout-9sw7v" May 26 00:59:08.915: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8686 execpod-affinityb2csd -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.104.82.97:80/' May 26 00:59:09.173: INFO: stderr: "I0526 00:59:09.079600 3425 log.go:172] (0xc0009caf20) (0xc00084bcc0) Create stream\nI0526 00:59:09.079674 3425 log.go:172] (0xc0009caf20) (0xc00084bcc0) Stream added, broadcasting: 1\nI0526 00:59:09.085001 3425 log.go:172] (0xc0009caf20) Reply frame received for 1\nI0526 00:59:09.085058 3425 log.go:172] (0xc0009caf20) (0xc0000f7c20) Create stream\nI0526 00:59:09.085082 3425 log.go:172] (0xc0009caf20) (0xc0000f7c20) Stream added, broadcasting: 3\nI0526 00:59:09.086278 3425 log.go:172] (0xc0009caf20) Reply frame received for 3\nI0526 00:59:09.086330 3425 log.go:172] (0xc0009caf20) (0xc000704640) Create stream\nI0526 00:59:09.086345 
3425 log.go:172] (0xc0009caf20) (0xc000704640) Stream added, broadcasting: 5\nI0526 00:59:09.087343 3425 log.go:172] (0xc0009caf20) Reply frame received for 5\nI0526 00:59:09.159829 3425 log.go:172] (0xc0009caf20) Data frame received for 5\nI0526 00:59:09.159859 3425 log.go:172] (0xc000704640) (5) Data frame handling\nI0526 00:59:09.159875 3425 log.go:172] (0xc000704640) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.104.82.97:80/\nI0526 00:59:09.164699 3425 log.go:172] (0xc0009caf20) Data frame received for 3\nI0526 00:59:09.164734 3425 log.go:172] (0xc0000f7c20) (3) Data frame handling\nI0526 00:59:09.164762 3425 log.go:172] (0xc0000f7c20) (3) Data frame sent\nI0526 00:59:09.165538 3425 log.go:172] (0xc0009caf20) Data frame received for 5\nI0526 00:59:09.165559 3425 log.go:172] (0xc000704640) (5) Data frame handling\nI0526 00:59:09.166251 3425 log.go:172] (0xc0009caf20) Data frame received for 3\nI0526 00:59:09.166266 3425 log.go:172] (0xc0000f7c20) (3) Data frame handling\nI0526 00:59:09.168368 3425 log.go:172] (0xc0009caf20) Data frame received for 1\nI0526 00:59:09.168384 3425 log.go:172] (0xc00084bcc0) (1) Data frame handling\nI0526 00:59:09.168399 3425 log.go:172] (0xc00084bcc0) (1) Data frame sent\nI0526 00:59:09.168410 3425 log.go:172] (0xc0009caf20) (0xc00084bcc0) Stream removed, broadcasting: 1\nI0526 00:59:09.168529 3425 log.go:172] (0xc0009caf20) Go away received\nI0526 00:59:09.168739 3425 log.go:172] (0xc0009caf20) (0xc00084bcc0) Stream removed, broadcasting: 1\nI0526 00:59:09.168754 3425 log.go:172] (0xc0009caf20) (0xc0000f7c20) Stream removed, broadcasting: 3\nI0526 00:59:09.168761 3425 log.go:172] (0xc0009caf20) (0xc000704640) Stream removed, broadcasting: 5\n" May 26 00:59:09.173: INFO: stdout: "affinity-clusterip-timeout-fs6ct" May 26 00:59:09.173: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-8686, will wait for the garbage collector to delete the pods May 26 00:59:09.328: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 6.213944ms May 26 00:59:09.929: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 600.400806ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:59:25.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8686" for this suite. 
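A detail worth pulling out of the wall of stderr above: the probes at 00:58:38 and 00:58:53 both return affinity-clusterip-timeout-9sw7v, but after a further idle wait the 00:59:09 probe lands on affinity-clusterip-timeout-fs6ct, i.e. the affinity entry has expired. The knob under test is Service.spec.sessionAffinityConfig.clientIP.timeoutSeconds. A sketch of the Service, with an illustrative timeout since the suite's exact value is not visible in this excerpt, to be created with a clientset built as in the earlier sketch:

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func int32Ptr(i int32) *int32 { return &i }

// affinityTimeoutService builds the ClusterIP Service under test: ClientIP
// affinity that lapses after timeoutSeconds of inactivity, after which a
// client may be rebalanced to a different backend (9sw7v -> fs6ct above).
func affinityTimeoutService() *corev1.Service {
    return &corev1.Service{
        ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip-timeout"},
        Spec: corev1.ServiceSpec{
            SessionAffinity: corev1.ServiceAffinityClientIP,
            SessionAffinityConfig: &corev1.SessionAffinityConfig{
                ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: int32Ptr(10)}, // illustrative value
            },
            Selector: map[string]string{"app": "affinity-clusterip-timeout"}, // placeholder selector
            Ports:    []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(9376)}}, // placeholder port
        },
    }
}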
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
• [SLOW TEST:73.317 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":249,"skipped":4107,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 26 00:59:25.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap that has name configmap-test-emptyKey-5de7746c-d873-4f8e-85c4-d4e918d74527
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 26 00:59:25.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5349" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":288,"completed":250,"skipped":4120,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 26 00:59:25.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 26 00:59:29.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3322" for this suite.
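The empty-key ConfigMap test just above needs no pod at all: the API server's validation rejects the object at create time, because ConfigMap data keys must be non-empty (and match [-._a-zA-Z0-9]+). A sketch of the assertion, reusing a clientset built as in the earlier sketches; the namespace and name are placeholders:

package sketch

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// expectEmptyKeyRejected shows the validation failure the test asserts: the
// create call fails with an Invalid error and nothing is stored.
func expectEmptyKeyRejected(ctx context.Context, cs kubernetes.Interface, ns string) error {
    cm := &corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"}, // placeholder name
        Data:       map[string]string{"": "value"},                     // the invalid, empty key
    }
    _, err := cs.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{})
    if !apierrors.IsInvalid(err) {
        return fmt.Errorf("expected Invalid error, got %v", err)
    }
    return nil
}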
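The Docker Containers test above is likewise small: it only has to show that with both command and args omitted, the container runs the image's own ENTRYPOINT and CMD. A sketch of the pod shape, with a placeholder image (the suite uses its own test image):

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// imageDefaultsPod leaves Command and Args nil, so the container runs the
// image's own ENTRYPOINT and CMD. (Setting Command overrides ENTRYPOINT and
// drops the image CMD; setting only Args keeps ENTRYPOINT but replaces CMD.)
func imageDefaultsPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "client-containers"}, // placeholder name
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "busybox", // placeholder image
                // Command and Args deliberately unset.
            }},
        },
    }
}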
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":288,"completed":251,"skipped":4158,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:59:29.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 26 00:59:29.779: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4219 /api/v1/namespaces/watch-4219/configmaps/e2e-watch-test-watch-closed 3f605421-a45f-4058-bcea-4f971f4c7a3b 7700049 0 2020-05-26 00:59:29 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-26 00:59:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 26 00:59:29.780: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4219 /api/v1/namespaces/watch-4219/configmaps/e2e-watch-test-watch-closed 3f605421-a45f-4058-bcea-4f971f4c7a3b 7700050 0 2020-05-26 00:59:29 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-26 00:59:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 26 00:59:29.800: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4219 /api/v1/namespaces/watch-4219/configmaps/e2e-watch-test-watch-closed 3f605421-a45f-4058-bcea-4f971f4c7a3b 7700051 0 2020-05-26 00:59:29 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-26 00:59:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 26 00:59:29.801: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4219 /api/v1/namespaces/watch-4219/configmaps/e2e-watch-test-watch-closed 3f605421-a45f-4058-bcea-4f971f4c7a3b 7700052 0 2020-05-26 00:59:29 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-26 00:59:29 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:59:29.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4219" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":288,"completed":252,"skipped":4163,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:59:29.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on node default medium May 26 00:59:29.860: INFO: Waiting up to 5m0s for pod "pod-bb077720-c999-40fe-9229-61d01b5c202b" in namespace "emptydir-4784" to be "Succeeded or Failed" May 26 00:59:29.873: INFO: Pod "pod-bb077720-c999-40fe-9229-61d01b5c202b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.209702ms May 26 00:59:31.877: INFO: Pod "pod-bb077720-c999-40fe-9229-61d01b5c202b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017469924s May 26 00:59:33.882: INFO: Pod "pod-bb077720-c999-40fe-9229-61d01b5c202b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02244288s STEP: Saw pod success May 26 00:59:33.882: INFO: Pod "pod-bb077720-c999-40fe-9229-61d01b5c202b" satisfied condition "Succeeded or Failed" May 26 00:59:33.886: INFO: Trying to get logs from node latest-worker2 pod pod-bb077720-c999-40fe-9229-61d01b5c202b container test-container: STEP: delete the pod May 26 00:59:33.999: INFO: Waiting for pod pod-bb077720-c999-40fe-9229-61d01b5c202b to disappear May 26 00:59:34.009: INFO: Pod pod-bb077720-c999-40fe-9229-61d01b5c202b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:59:34.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4784" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":253,"skipped":4184,"failed":0} SSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:59:34.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 00:59:34.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-393" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":288,"completed":254,"skipped":4188,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 00:59:34.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 01:00:05.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8370" for this suite. STEP: Destroying namespace "nsdeletetest-4484" for this suite. May 26 01:00:05.555: INFO: Namespace nsdeletetest-4484 was already deleted STEP: Destroying namespace "nsdeletetest-6257" for this suite. 
• [SLOW TEST:31.420 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":288,"completed":255,"skipped":4197,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 01:00:05.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-b86ec22a-b3f6-4272-a61d-7dbb5db3a611 in namespace container-probe-2939 May 26 01:00:09.746: INFO: Started pod liveness-b86ec22a-b3f6-4272-a61d-7dbb5db3a611 in namespace container-probe-2939 STEP: checking the pod's current state and verifying that restartCount is present May 26 01:00:09.749: INFO: Initial restart count of pod liveness-b86ec22a-b3f6-4272-a61d-7dbb5db3a611 is 0 May 26 01:00:31.800: INFO: Restart count of pod container-probe-2939/liveness-b86ec22a-b3f6-4272-a61d-7dbb5db3a611 is now 1 (22.051605573s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 01:00:31.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2939" for this suite. 
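The probe test above waits for restartCount to go from 0 to 1 after the kubelet's HTTP liveness check starts failing. A sketch of such a pod; the image and its behavior (serving /healthz successfully at first, then failing) are assumptions about the test image, and the probe numbers are illustrative:

    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-http-demo    # hypothetical name
    spec:
      containers:
      - name: liveness
        image: k8s.gcr.io/e2e-test-images/agnhost:2.12   # assumed image; its liveness mode starts failing /healthz after a few seconds
        args: ["liveness"]
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 15   # illustrative values
          failureThreshold: 1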
• [SLOW TEST:26.295 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":256,"skipped":4210,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 01:00:31.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 26 01:00:32.022: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 01:00:39.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6670" for this suite. 
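The Pods test above opens a watch before submitting the pod, then checks that creation and graceful deletion are both observed. The same watch can be opened against the raw API; the namespace and pod name are hypothetical, and reading the list's resourceVersion via jsonpath this way is an assumption:

    RV=$(kubectl get pods -n default -o jsonpath='{.metadata.resourceVersion}')
    kubectl get --raw "/api/v1/namespaces/default/pods?watch=true&resourceVersion=${RV}" &
    kubectl run watch-demo --image=nginx -n default              # an ADDED event appears on the stream
    kubectl delete pod watch-demo -n default --grace-period=30   # MODIFIED (deletionTimestamp set), then DELETED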
• [SLOW TEST:7.764 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":288,"completed":257,"skipped":4220,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 01:00:39.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 26 01:00:39.693: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 26 01:00:42.613: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2468 create -f -' May 26 01:00:47.772: INFO: stderr: "" May 26 01:00:47.772: INFO: stdout: "e2e-test-crd-publish-openapi-8255-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 26 01:00:47.772: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2468 delete e2e-test-crd-publish-openapi-8255-crds test-foo' May 26 01:00:47.880: INFO: stderr: "" May 26 01:00:47.880: INFO: stdout: "e2e-test-crd-publish-openapi-8255-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 26 01:00:47.880: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2468 apply -f -' May 26 01:00:50.517: INFO: stderr: "" May 26 01:00:50.517: INFO: stdout: "e2e-test-crd-publish-openapi-8255-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 26 01:00:50.517: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2468 delete e2e-test-crd-publish-openapi-8255-crds test-foo' May 26 01:00:50.629: INFO: stderr: "" May 26 01:00:50.629: INFO: stdout: "e2e-test-crd-publish-openapi-8255-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 26 01:00:50.629: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2468 create -f -' May 26 01:00:52.588: INFO: rc: 1 May 26 01:00:52.588: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2468 apply -f -' May 26 01:00:55.236: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) 
rejects request without required properties May 26 01:00:55.236: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2468 create -f -' May 26 01:00:55.493: INFO: rc: 1 May 26 01:00:55.493: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2468 apply -f -' May 26 01:00:55.731: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 26 01:00:55.731: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8255-crds' May 26 01:00:55.993: INFO: stderr: "" May 26 01:00:55.993: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8255-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 26 01:00:55.994: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8255-crds.metadata' May 26 01:00:56.297: INFO: stderr: "" May 26 01:00:56.297: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8255-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). 
Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 26 01:00:56.297: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8255-crds.spec' May 26 01:00:56.538: INFO: stderr: "" May 26 01:00:56.538: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8255-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 26 01:00:56.538: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8255-crds.spec.bars' May 26 01:00:56.794: INFO: stderr: "" May 26 01:00:56.794: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8255-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 26 01:00:56.794: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8255-crds.spec.bars2' May 26 01:00:57.071: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 01:00:58.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2468" for this suite. • [SLOW TEST:19.353 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":288,"completed":258,"skipped":4220,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 01:00:58.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-7942 STEP: creating a selector STEP: Creating the service pods in kubernetes May 26 01:00:59.105: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 26 01:00:59.202: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 26 
01:01:01.207: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 26 01:01:03.207: INFO: The status of Pod netserver-0 is Running (Ready = false) May 26 01:01:05.207: INFO: The status of Pod netserver-0 is Running (Ready = false) May 26 01:01:07.207: INFO: The status of Pod netserver-0 is Running (Ready = false) May 26 01:01:09.206: INFO: The status of Pod netserver-0 is Running (Ready = false) May 26 01:01:11.209: INFO: The status of Pod netserver-0 is Running (Ready = false) May 26 01:01:13.207: INFO: The status of Pod netserver-0 is Running (Ready = true) May 26 01:01:13.214: INFO: The status of Pod netserver-1 is Running (Ready = false) May 26 01:01:15.219: INFO: The status of Pod netserver-1 is Running (Ready = false) May 26 01:01:17.219: INFO: The status of Pod netserver-1 is Running (Ready = false) May 26 01:01:19.220: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 26 01:01:25.268: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.248:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7942 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 01:01:25.268: INFO: >>> kubeConfig: /root/.kube/config I0526 01:01:25.308686 7 log.go:172] (0xc000818420) (0xc001baf5e0) Create stream I0526 01:01:25.308734 7 log.go:172] (0xc000818420) (0xc001baf5e0) Stream added, broadcasting: 1 I0526 01:01:25.310964 7 log.go:172] (0xc000818420) Reply frame received for 1 I0526 01:01:25.311004 7 log.go:172] (0xc000818420) (0xc001baf860) Create stream I0526 01:01:25.311019 7 log.go:172] (0xc000818420) (0xc001baf860) Stream added, broadcasting: 3 I0526 01:01:25.312129 7 log.go:172] (0xc000818420) Reply frame received for 3 I0526 01:01:25.312161 7 log.go:172] (0xc000818420) (0xc000f7d180) Create stream I0526 01:01:25.312176 7 log.go:172] (0xc000818420) (0xc000f7d180) Stream added, broadcasting: 5 I0526 01:01:25.313065 7 log.go:172] (0xc000818420) Reply frame received for 5 I0526 01:01:25.385969 7 log.go:172] (0xc000818420) Data frame received for 3 I0526 01:01:25.385995 7 log.go:172] (0xc001baf860) (3) Data frame handling I0526 01:01:25.386011 7 log.go:172] (0xc001baf860) (3) Data frame sent I0526 01:01:25.386205 7 log.go:172] (0xc000818420) Data frame received for 3 I0526 01:01:25.386231 7 log.go:172] (0xc001baf860) (3) Data frame handling I0526 01:01:25.386419 7 log.go:172] (0xc000818420) Data frame received for 5 I0526 01:01:25.386455 7 log.go:172] (0xc000f7d180) (5) Data frame handling I0526 01:01:25.388252 7 log.go:172] (0xc000818420) Data frame received for 1 I0526 01:01:25.388275 7 log.go:172] (0xc001baf5e0) (1) Data frame handling I0526 01:01:25.388294 7 log.go:172] (0xc001baf5e0) (1) Data frame sent I0526 01:01:25.388314 7 log.go:172] (0xc000818420) (0xc001baf5e0) Stream removed, broadcasting: 1 I0526 01:01:25.388343 7 log.go:172] (0xc000818420) Go away received I0526 01:01:25.388443 7 log.go:172] (0xc000818420) (0xc001baf5e0) Stream removed, broadcasting: 1 I0526 01:01:25.388458 7 log.go:172] (0xc000818420) (0xc001baf860) Stream removed, broadcasting: 3 I0526 01:01:25.388465 7 log.go:172] (0xc000818420) (0xc000f7d180) Stream removed, broadcasting: 5 May 26 01:01:25.388: INFO: Found all expected endpoints: [netserver-0] May 26 01:01:25.392: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.245:8080/hostName | grep 
-v '^\s*$'] Namespace:pod-network-test-7942 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 01:01:25.392: INFO: >>> kubeConfig: /root/.kube/config I0526 01:01:25.424720 7 log.go:172] (0xc0016802c0) (0xc000f7d860) Create stream I0526 01:01:25.424748 7 log.go:172] (0xc0016802c0) (0xc000f7d860) Stream added, broadcasting: 1 I0526 01:01:25.426549 7 log.go:172] (0xc0016802c0) Reply frame received for 1 I0526 01:01:25.426597 7 log.go:172] (0xc0016802c0) (0xc001bafa40) Create stream I0526 01:01:25.426617 7 log.go:172] (0xc0016802c0) (0xc001bafa40) Stream added, broadcasting: 3 I0526 01:01:25.427462 7 log.go:172] (0xc0016802c0) Reply frame received for 3 I0526 01:01:25.427490 7 log.go:172] (0xc0016802c0) (0xc000d7b2c0) Create stream I0526 01:01:25.427506 7 log.go:172] (0xc0016802c0) (0xc000d7b2c0) Stream added, broadcasting: 5 I0526 01:01:25.428312 7 log.go:172] (0xc0016802c0) Reply frame received for 5 I0526 01:01:25.500467 7 log.go:172] (0xc0016802c0) Data frame received for 3 I0526 01:01:25.500517 7 log.go:172] (0xc001bafa40) (3) Data frame handling I0526 01:01:25.500543 7 log.go:172] (0xc0016802c0) Data frame received for 5 I0526 01:01:25.500573 7 log.go:172] (0xc000d7b2c0) (5) Data frame handling I0526 01:01:25.500596 7 log.go:172] (0xc001bafa40) (3) Data frame sent I0526 01:01:25.500620 7 log.go:172] (0xc0016802c0) Data frame received for 3 I0526 01:01:25.500628 7 log.go:172] (0xc001bafa40) (3) Data frame handling I0526 01:01:25.502117 7 log.go:172] (0xc0016802c0) Data frame received for 1 I0526 01:01:25.502134 7 log.go:172] (0xc000f7d860) (1) Data frame handling I0526 01:01:25.502143 7 log.go:172] (0xc000f7d860) (1) Data frame sent I0526 01:01:25.502152 7 log.go:172] (0xc0016802c0) (0xc000f7d860) Stream removed, broadcasting: 1 I0526 01:01:25.502215 7 log.go:172] (0xc0016802c0) (0xc000f7d860) Stream removed, broadcasting: 1 I0526 01:01:25.502225 7 log.go:172] (0xc0016802c0) (0xc001bafa40) Stream removed, broadcasting: 3 I0526 01:01:25.502234 7 log.go:172] (0xc0016802c0) (0xc000d7b2c0) Stream removed, broadcasting: 5 May 26 01:01:25.502: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 01:01:25.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0526 01:01:25.502309 7 log.go:172] (0xc0016802c0) Go away received STEP: Destroying namespace "pod-network-test-7942" for this suite. 
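Stripped of the exec plumbing logged above, the node-pod check is a single curl from the host-network test pod to each netserver pod's /hostName endpoint by pod IP (the IP below is the one from this run and would differ elsewhere):

    # run from inside the host-network test pod
    curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.248:8080/hostName | grep -v '^\s*$'
    # expected output: the serving pod's name, e.g. netserver-0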
• [SLOW TEST:26.512 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":259,"skipped":4235,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 01:01:25.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-54e6d85b-b3b0-43a1-9060-fe464e1acee4 STEP: Creating a pod to test consume configMaps May 26 01:01:25.767: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2bb491fc-06e1-4a55-bb70-4c82c1be6b94" in namespace "projected-6225" to be "Succeeded or Failed" May 26 01:01:25.779: INFO: Pod "pod-projected-configmaps-2bb491fc-06e1-4a55-bb70-4c82c1be6b94": Phase="Pending", Reason="", readiness=false. Elapsed: 11.992089ms May 26 01:01:27.784: INFO: Pod "pod-projected-configmaps-2bb491fc-06e1-4a55-bb70-4c82c1be6b94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01669459s May 26 01:01:29.789: INFO: Pod "pod-projected-configmaps-2bb491fc-06e1-4a55-bb70-4c82c1be6b94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021238061s STEP: Saw pod success May 26 01:01:29.789: INFO: Pod "pod-projected-configmaps-2bb491fc-06e1-4a55-bb70-4c82c1be6b94" satisfied condition "Succeeded or Failed" May 26 01:01:29.792: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-2bb491fc-06e1-4a55-bb70-4c82c1be6b94 container projected-configmap-volume-test: STEP: delete the pod May 26 01:01:29.862: INFO: Waiting for pod pod-projected-configmaps-2bb491fc-06e1-4a55-bb70-4c82c1be6b94 to disappear May 26 01:01:29.875: INFO: Pod pod-projected-configmaps-2bb491fc-06e1-4a55-bb70-4c82c1be6b94 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 01:01:29.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6225" for this suite. 
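The projected-ConfigMap test above exposes one ConfigMap key under a remapped path and reads it as a non-root user. A sketch; the UID, image, key, and path names are assumptions:

    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-cm-demo     # hypothetical name
    spec:
      securityContext:
        runAsUser: 1000           # assumed non-root UID
      restartPolicy: Never
      containers:
      - name: projected-configmap-volume-test
        image: busybox            # assumed image
        command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
        volumeMounts:
        - name: configmap-volume
          mountPath: /etc/projected-configmap-volume
      volumes:
      - name: configmap-volume
        projected:
          sources:
          - configMap:
              name: projected-cm-demo-map   # hypothetical ConfigMap name
              items:
              - key: data-2                 # assumed key
                path: path/to/data-2        # the mapping: key exposed under a custom path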
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":260,"skipped":4240,"failed":0} SS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 01:01:29.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all May 26 01:01:30.181: INFO: Waiting up to 5m0s for pod "client-containers-163b7c05-fbdb-4113-a2a7-d5eb2ee142a5" in namespace "containers-9515" to be "Succeeded or Failed" May 26 01:01:30.194: INFO: Pod "client-containers-163b7c05-fbdb-4113-a2a7-d5eb2ee142a5": Phase="Pending", Reason="", readiness=false. Elapsed: 13.092769ms May 26 01:01:32.406: INFO: Pod "client-containers-163b7c05-fbdb-4113-a2a7-d5eb2ee142a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224696362s May 26 01:01:34.411: INFO: Pod "client-containers-163b7c05-fbdb-4113-a2a7-d5eb2ee142a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.229755261s STEP: Saw pod success May 26 01:01:34.411: INFO: Pod "client-containers-163b7c05-fbdb-4113-a2a7-d5eb2ee142a5" satisfied condition "Succeeded or Failed" May 26 01:01:34.414: INFO: Trying to get logs from node latest-worker2 pod client-containers-163b7c05-fbdb-4113-a2a7-d5eb2ee142a5 container test-container: STEP: delete the pod May 26 01:01:34.461: INFO: Waiting for pod client-containers-163b7c05-fbdb-4113-a2a7-d5eb2ee142a5 to disappear May 26 01:01:34.478: INFO: Pod client-containers-163b7c05-fbdb-4113-a2a7-d5eb2ee142a5 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 01:01:34.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9515" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":288,"completed":261,"skipped":4242,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 01:01:34.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 26 01:01:34.750: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f939e742-ca3a-4a0a-ae64-f3cda564222e" in namespace "downward-api-7569" to be "Succeeded or Failed" May 26 01:01:34.826: INFO: Pod "downwardapi-volume-f939e742-ca3a-4a0a-ae64-f3cda564222e": Phase="Pending", Reason="", readiness=false. Elapsed: 76.109051ms May 26 01:01:36.831: INFO: Pod "downwardapi-volume-f939e742-ca3a-4a0a-ae64-f3cda564222e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080975564s May 26 01:01:38.836: INFO: Pod "downwardapi-volume-f939e742-ca3a-4a0a-ae64-f3cda564222e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.085764718s STEP: Saw pod success May 26 01:01:38.836: INFO: Pod "downwardapi-volume-f939e742-ca3a-4a0a-ae64-f3cda564222e" satisfied condition "Succeeded or Failed" May 26 01:01:38.839: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-f939e742-ca3a-4a0a-ae64-f3cda564222e container client-container: STEP: delete the pod May 26 01:01:38.859: INFO: Waiting for pod downwardapi-volume-f939e742-ca3a-4a0a-ae64-f3cda564222e to disappear May 26 01:01:38.863: INFO: Pod downwardapi-volume-f939e742-ca3a-4a0a-ae64-f3cda564222e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 01:01:38.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7569" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":262,"skipped":4267,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 01:01:38.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium May 26 01:01:38.986: INFO: Waiting up to 5m0s for pod "pod-da5bfc25-b141-4604-a294-3ff323d99cac" in namespace "emptydir-5952" to be "Succeeded or Failed" May 26 01:01:38.989: INFO: Pod "pod-da5bfc25-b141-4604-a294-3ff323d99cac": Phase="Pending", Reason="", readiness=false. Elapsed: 3.129406ms May 26 01:01:41.010: INFO: Pod "pod-da5bfc25-b141-4604-a294-3ff323d99cac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024001415s May 26 01:01:43.014: INFO: Pod "pod-da5bfc25-b141-4604-a294-3ff323d99cac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028295287s STEP: Saw pod success May 26 01:01:43.014: INFO: Pod "pod-da5bfc25-b141-4604-a294-3ff323d99cac" satisfied condition "Succeeded or Failed" May 26 01:01:43.018: INFO: Trying to get logs from node latest-worker pod pod-da5bfc25-b141-4604-a294-3ff323d99cac container test-container: STEP: delete the pod May 26 01:01:43.057: INFO: Waiting for pod pod-da5bfc25-b141-4604-a294-3ff323d99cac to disappear May 26 01:01:43.067: INFO: Pod pod-da5bfc25-b141-4604-a294-3ff323d99cac no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 01:01:43.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5952" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":263,"skipped":4292,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 01:01:43.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 01:01:56.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6450" for this suite. • [SLOW TEST:13.337 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":288,"completed":264,"skipped":4295,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 01:01:56.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with configMap that has name projected-configmap-test-upd-bea81e8c-bd54-49ba-9f5f-68f240aa6560 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-bea81e8c-bd54-49ba-9f5f-68f240aa6560 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 01:02:02.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9742" for this suite. 
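The update test above relies on the kubelet refreshing projected ConfigMap volumes in place, so a changed value eventually shows up inside the running pod. The flow can be retraced with kubectl; names and the mount path are hypothetical, and the pod is assumed to mount the ConfigMap as in the earlier sketch:

    kubectl create configmap cm-upd-demo --from-literal=data-1=value-1
    # ... start a pod that mounts cm-upd-demo via a projected volume ...
    kubectl create configmap cm-upd-demo --from-literal=data-1=value-2 \
      --dry-run=client -o yaml | kubectl replace -f -
    kubectl exec cm-upd-pod -- cat /etc/projected-configmap-volume/data-1   # eventually prints value-2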
• [SLOW TEST:6.278 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":265,"skipped":4310,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 01:02:02.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 01:02:02.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-3652" for this suite. •{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":288,"completed":266,"skipped":4346,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 01:02:02.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 01:02:03.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1886" for this suite. 
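The Kubelet test above checks that a pod whose command fails on every start can still be deleted cleanly. A sketch of such a pod; the name and image are assumptions:

    apiVersion: v1
    kind: Pod
    metadata:
      name: bin-false-demo   # hypothetical name
    spec:
      containers:
      - name: bin-false
        image: busybox       # assumed image
        command: ["/bin/false"]   # exits non-zero every time, so the container crash-loops
    # the pod never becomes Ready, yet `kubectl delete pod bin-false-demo` removes it normally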
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":288,"completed":267,"skipped":4356,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 01:02:03.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 26 01:02:03.300: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a3ef0b5d-bf6d-4076-9cf8-124c18c5829c" in namespace "projected-6497" to be "Succeeded or Failed" May 26 01:02:03.326: INFO: Pod "downwardapi-volume-a3ef0b5d-bf6d-4076-9cf8-124c18c5829c": Phase="Pending", Reason="", readiness=false. Elapsed: 25.635302ms May 26 01:02:05.331: INFO: Pod "downwardapi-volume-a3ef0b5d-bf6d-4076-9cf8-124c18c5829c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030225347s May 26 01:02:07.336: INFO: Pod "downwardapi-volume-a3ef0b5d-bf6d-4076-9cf8-124c18c5829c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035570194s STEP: Saw pod success May 26 01:02:07.336: INFO: Pod "downwardapi-volume-a3ef0b5d-bf6d-4076-9cf8-124c18c5829c" satisfied condition "Succeeded or Failed" May 26 01:02:07.339: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-a3ef0b5d-bf6d-4076-9cf8-124c18c5829c container client-container: STEP: delete the pod May 26 01:02:07.394: INFO: Waiting for pod downwardapi-volume-a3ef0b5d-bf6d-4076-9cf8-124c18c5829c to disappear May 26 01:02:07.401: INFO: Pod downwardapi-volume-a3ef0b5d-bf6d-4076-9cf8-124c18c5829c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 01:02:07.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6497" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":268,"skipped":4364,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 01:02:07.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 26 01:02:08.000: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 26 01:02:10.126: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726051728, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726051728, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726051728, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726051727, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 01:02:12.162: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726051728, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726051728, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726051728, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726051727, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 01:02:15.214: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook May 26 01:02:15.271: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 01:02:15.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6257" for this suite. STEP: Destroying namespace "webhook-6257-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.034 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":288,"completed":269,"skipped":4366,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 01:02:15.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 26 01:02:16.675: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 26 01:02:18.903: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726051736, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726051736, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726051736, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726051736, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 01:02:20.907: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726051736, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726051736, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726051736, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726051736, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 01:02:23.946: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created mutating webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of mutating webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 01:02:24.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8755" for this suite. STEP: Destroying namespace "webhook-8755-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.231 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":288,"completed":270,"skipped":4370,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 01:02:24.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 01:02:28.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2817" for this suite. 
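------------------------------
The Kubelet test above schedules a busybox command that always fails and asserts that the container reaches a populated terminated state. A rough client-go sketch of the same check (pod name, namespace, and kubeconfig path are hypothetical):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// "bin-false" stands in for the suite's always-failing busybox pod.
	pod, err := cs.CoreV1().Pods("default").Get(context.TODO(), "bin-false", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, st := range pod.Status.ContainerStatuses {
		if term := st.State.Terminated; term != nil {
			// For a failing command this is typically Reason="Error" with a
			// non-zero exit code; the test asserts the reason is non-empty.
			fmt.Printf("container %s terminated: reason=%s exitCode=%d\n",
				st.Name, term.Reason, term.ExitCode)
		}
	}
}
------------------------------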
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":288,"completed":271,"skipped":4404,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 01:02:28.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-9506 STEP: creating a selector STEP: Creating the service pods in kubernetes May 26 01:02:29.072: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 26 01:02:29.196: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 26 01:02:31.371: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 26 01:02:33.201: INFO: The status of Pod netserver-0 is Running (Ready = false) May 26 01:02:35.201: INFO: The status of Pod netserver-0 is Running (Ready = false) May 26 01:02:37.200: INFO: The status of Pod netserver-0 is Running (Ready = false) May 26 01:02:39.200: INFO: The status of Pod netserver-0 is Running (Ready = false) May 26 01:02:41.200: INFO: The status of Pod netserver-0 is Running (Ready = false) May 26 01:02:43.200: INFO: The status of Pod netserver-0 is Running (Ready = false) May 26 01:02:45.200: INFO: The status of Pod netserver-0 is Running (Ready = true) May 26 01:02:45.205: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 26 01:02:51.271: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.252 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9506 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 01:02:51.271: INFO: >>> kubeConfig: /root/.kube/config I0526 01:02:51.317452 7 log.go:172] (0xc00163ee70) (0xc001809360) Create stream I0526 01:02:51.317561 7 log.go:172] (0xc00163ee70) (0xc001809360) Stream added, broadcasting: 1 I0526 01:02:51.320070 7 log.go:172] (0xc00163ee70) Reply frame received for 1 I0526 01:02:51.320132 7 log.go:172] (0xc00163ee70) (0xc002b57540) Create stream I0526 01:02:51.320161 7 log.go:172] (0xc00163ee70) (0xc002b57540) Stream added, broadcasting: 3 I0526 01:02:51.321385 7 log.go:172] (0xc00163ee70) Reply frame received for 3 I0526 01:02:51.321437 7 log.go:172] (0xc00163ee70) (0xc0020a90e0) Create stream I0526 01:02:51.321466 7 log.go:172] (0xc00163ee70) (0xc0020a90e0) Stream added, broadcasting: 5 I0526 01:02:51.322600 7 log.go:172] (0xc00163ee70) Reply frame received for 5 I0526 01:02:52.397995 7 log.go:172] (0xc00163ee70) Data frame received for 3 I0526 01:02:52.398046 7 log.go:172] (0xc002b57540) (3) Data frame handling I0526 01:02:52.398070 7 log.go:172] (0xc002b57540) (3) 
Data frame sent I0526 01:02:52.398098 7 log.go:172] (0xc00163ee70) Data frame received for 3 I0526 01:02:52.398114 7 log.go:172] (0xc002b57540) (3) Data frame handling I0526 01:02:52.398257 7 log.go:172] (0xc00163ee70) Data frame received for 5 I0526 01:02:52.398287 7 log.go:172] (0xc0020a90e0) (5) Data frame handling I0526 01:02:52.400593 7 log.go:172] (0xc00163ee70) Data frame received for 1 I0526 01:02:52.400636 7 log.go:172] (0xc001809360) (1) Data frame handling I0526 01:02:52.400657 7 log.go:172] (0xc001809360) (1) Data frame sent I0526 01:02:52.400688 7 log.go:172] (0xc00163ee70) (0xc001809360) Stream removed, broadcasting: 1 I0526 01:02:52.400734 7 log.go:172] (0xc00163ee70) Go away received I0526 01:02:52.400803 7 log.go:172] (0xc00163ee70) (0xc001809360) Stream removed, broadcasting: 1 I0526 01:02:52.400831 7 log.go:172] (0xc00163ee70) (0xc002b57540) Stream removed, broadcasting: 3 I0526 01:02:52.400844 7 log.go:172] (0xc00163ee70) (0xc0020a90e0) Stream removed, broadcasting: 5 May 26 01:02:52.400: INFO: Found all expected endpoints: [netserver-0] May 26 01:02:52.404: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.253 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9506 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 01:02:52.405: INFO: >>> kubeConfig: /root/.kube/config I0526 01:02:52.434343 7 log.go:172] (0xc0016818c0) (0xc0020a99a0) Create stream I0526 01:02:52.434373 7 log.go:172] (0xc0016818c0) (0xc0020a99a0) Stream added, broadcasting: 1 I0526 01:02:52.436325 7 log.go:172] (0xc0016818c0) Reply frame received for 1 I0526 01:02:52.436370 7 log.go:172] (0xc0016818c0) (0xc0020a9ae0) Create stream I0526 01:02:52.436390 7 log.go:172] (0xc0016818c0) (0xc0020a9ae0) Stream added, broadcasting: 3 I0526 01:02:52.437712 7 log.go:172] (0xc0016818c0) Reply frame received for 3 I0526 01:02:52.437756 7 log.go:172] (0xc0016818c0) (0xc002b575e0) Create stream I0526 01:02:52.437864 7 log.go:172] (0xc0016818c0) (0xc002b575e0) Stream added, broadcasting: 5 I0526 01:02:52.438918 7 log.go:172] (0xc0016818c0) Reply frame received for 5 I0526 01:02:53.516601 7 log.go:172] (0xc0016818c0) Data frame received for 3 I0526 01:02:53.516637 7 log.go:172] (0xc0020a9ae0) (3) Data frame handling I0526 01:02:53.516660 7 log.go:172] (0xc0020a9ae0) (3) Data frame sent I0526 01:02:53.517077 7 log.go:172] (0xc0016818c0) Data frame received for 3 I0526 01:02:53.517326 7 log.go:172] (0xc0020a9ae0) (3) Data frame handling I0526 01:02:53.517431 7 log.go:172] (0xc0016818c0) Data frame received for 5 I0526 01:02:53.517500 7 log.go:172] (0xc002b575e0) (5) Data frame handling I0526 01:02:53.519276 7 log.go:172] (0xc0016818c0) Data frame received for 1 I0526 01:02:53.519293 7 log.go:172] (0xc0020a99a0) (1) Data frame handling I0526 01:02:53.519307 7 log.go:172] (0xc0020a99a0) (1) Data frame sent I0526 01:02:53.519502 7 log.go:172] (0xc0016818c0) (0xc0020a99a0) Stream removed, broadcasting: 1 I0526 01:02:53.519545 7 log.go:172] (0xc0016818c0) Go away received I0526 01:02:53.519635 7 log.go:172] (0xc0016818c0) (0xc0020a99a0) Stream removed, broadcasting: 1 I0526 01:02:53.519661 7 log.go:172] (0xc0016818c0) (0xc0020a9ae0) Stream removed, broadcasting: 3 I0526 01:02:53.519677 7 log.go:172] (0xc0016818c0) (0xc002b575e0) Stream removed, broadcasting: 5 May 26 01:02:53.519: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 01:02:53.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9506" for this suite. • [SLOW TEST:24.564 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":272,"skipped":4423,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 01:02:53.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-76edf5c4-b39b-46a7-9d54-ff26eb6c6303 STEP: Creating a pod to test consume configMaps May 26 01:02:53.691: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3c97bda5-bacb-4967-a77d-136f5ec633a5" in namespace "projected-6390" to be "Succeeded or Failed" May 26 01:02:53.713: INFO: Pod "pod-projected-configmaps-3c97bda5-bacb-4967-a77d-136f5ec633a5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.267084ms May 26 01:02:55.741: INFO: Pod "pod-projected-configmaps-3c97bda5-bacb-4967-a77d-136f5ec633a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050336233s May 26 01:02:57.744: INFO: Pod "pod-projected-configmaps-3c97bda5-bacb-4967-a77d-136f5ec633a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053237033s STEP: Saw pod success May 26 01:02:57.744: INFO: Pod "pod-projected-configmaps-3c97bda5-bacb-4967-a77d-136f5ec633a5" satisfied condition "Succeeded or Failed" May 26 01:02:57.746: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-3c97bda5-bacb-4967-a77d-136f5ec633a5 container projected-configmap-volume-test: STEP: delete the pod May 26 01:02:58.037: INFO: Waiting for pod pod-projected-configmaps-3c97bda5-bacb-4967-a77d-136f5ec633a5 to disappear May 26 01:02:58.085: INFO: Pod pod-projected-configmaps-3c97bda5-bacb-4967-a77d-136f5ec633a5 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 01:02:58.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6390" for this suite. 
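------------------------------
The "mappings and Item mode set" variant above amounts to a ConfigMapProjection whose Items remap one key to a chosen path and pin that file's mode. A sketch of just the volume definition using the corev1 types (the configMap name, key, path, and mode are assumptions in the spirit of the test):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // per-item file mode, the "Item mode" the test sets
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-configmap-test-volume-map", // assumed name
						},
						// Map one key to a path inside the mount ("mappings").
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "path/to/data-2",
							Mode: &mode,
						}},
					},
				}},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}
------------------------------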
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":273,"skipped":4426,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 01:02:58.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD May 26 01:02:58.327: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 01:03:13.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4401" for this suite. • [SLOW TEST:15.758 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":288,"completed":274,"skipped":4429,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 01:03:13.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 26 01:03:14.050: INFO: Waiting up to 5m0s for pod "downwardapi-volume-60fd7a6c-3a92-43bd-a3c9-2c23620932e9" in namespace "projected-4880" to be "Succeeded or Failed" May 
26 01:03:14.055: INFO: Pod "downwardapi-volume-60fd7a6c-3a92-43bd-a3c9-2c23620932e9": Phase="Pending", Reason="", readiness=false. Elapsed: 5.386523ms May 26 01:03:16.059: INFO: Pod "downwardapi-volume-60fd7a6c-3a92-43bd-a3c9-2c23620932e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00965768s May 26 01:03:18.064: INFO: Pod "downwardapi-volume-60fd7a6c-3a92-43bd-a3c9-2c23620932e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014416028s STEP: Saw pod success May 26 01:03:18.064: INFO: Pod "downwardapi-volume-60fd7a6c-3a92-43bd-a3c9-2c23620932e9" satisfied condition "Succeeded or Failed" May 26 01:03:18.068: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-60fd7a6c-3a92-43bd-a3c9-2c23620932e9 container client-container: STEP: delete the pod May 26 01:03:18.114: INFO: Waiting for pod downwardapi-volume-60fd7a6c-3a92-43bd-a3c9-2c23620932e9 to disappear May 26 01:03:18.181: INFO: Pod downwardapi-volume-60fd7a6c-3a92-43bd-a3c9-2c23620932e9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 01:03:18.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4880" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":275,"skipped":4453,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 01:03:18.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 26 01:03:18.380: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df3a9112-2de9-438d-9d96-782fe279ee0a" in namespace "projected-3667" to be "Succeeded or Failed" May 26 01:03:18.397: INFO: Pod "downwardapi-volume-df3a9112-2de9-438d-9d96-782fe279ee0a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.859942ms May 26 01:03:20.402: INFO: Pod "downwardapi-volume-df3a9112-2de9-438d-9d96-782fe279ee0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021232104s May 26 01:03:22.527: INFO: Pod "downwardapi-volume-df3a9112-2de9-438d-9d96-782fe279ee0a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.146328436s STEP: Saw pod success May 26 01:03:22.527: INFO: Pod "downwardapi-volume-df3a9112-2de9-438d-9d96-782fe279ee0a" satisfied condition "Succeeded or Failed" May 26 01:03:22.530: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-df3a9112-2de9-438d-9d96-782fe279ee0a container client-container: STEP: delete the pod May 26 01:03:22.613: INFO: Waiting for pod downwardapi-volume-df3a9112-2de9-438d-9d96-782fe279ee0a to disappear May 26 01:03:22.663: INFO: Pod downwardapi-volume-df3a9112-2de9-438d-9d96-782fe279ee0a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 01:03:22.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3667" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":276,"skipped":4498,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 01:03:22.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 26 01:03:22.801: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5830 /api/v1/namespaces/watch-5830/configmaps/e2e-watch-test-configmap-a 104ed588-6888-43b1-9eaf-7e25ea943a7b 7701532 0 2020-05-26 01:03:22 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-26 01:03:22 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 26 01:03:22.802: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5830 /api/v1/namespaces/watch-5830/configmaps/e2e-watch-test-configmap-a 104ed588-6888-43b1-9eaf-7e25ea943a7b 7701532 0 2020-05-26 01:03:22 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-26 01:03:22 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 26 01:03:32.810: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5830 /api/v1/namespaces/watch-5830/configmaps/e2e-watch-test-configmap-a 104ed588-6888-43b1-9eaf-7e25ea943a7b 7701580 0 2020-05-26 01:03:22 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-26 01:03:32 +0000 UTC 
FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 26 01:03:32.810: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5830 /api/v1/namespaces/watch-5830/configmaps/e2e-watch-test-configmap-a 104ed588-6888-43b1-9eaf-7e25ea943a7b 7701580 0 2020-05-26 01:03:22 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-26 01:03:32 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 26 01:03:42.819: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5830 /api/v1/namespaces/watch-5830/configmaps/e2e-watch-test-configmap-a 104ed588-6888-43b1-9eaf-7e25ea943a7b 7701610 0 2020-05-26 01:03:22 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-26 01:03:42 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 26 01:03:42.819: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5830 /api/v1/namespaces/watch-5830/configmaps/e2e-watch-test-configmap-a 104ed588-6888-43b1-9eaf-7e25ea943a7b 7701610 0 2020-05-26 01:03:22 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-26 01:03:42 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 26 01:03:52.827: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5830 /api/v1/namespaces/watch-5830/configmaps/e2e-watch-test-configmap-a 104ed588-6888-43b1-9eaf-7e25ea943a7b 7701640 0 2020-05-26 01:03:22 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-26 01:03:42 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 26 01:03:52.827: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5830 /api/v1/namespaces/watch-5830/configmaps/e2e-watch-test-configmap-a 104ed588-6888-43b1-9eaf-7e25ea943a7b 7701640 0 2020-05-26 01:03:22 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-26 01:03:42 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 26 01:04:02.837: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5830 /api/v1/namespaces/watch-5830/configmaps/e2e-watch-test-configmap-b eaca7153-8542-4dff-a5de-ce6f7e90c205 7701671 0 2020-05-26 01:04:02 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 
2020-05-26 01:04:02 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 26 01:04:02.837: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5830 /api/v1/namespaces/watch-5830/configmaps/e2e-watch-test-configmap-b eaca7153-8542-4dff-a5de-ce6f7e90c205 7701671 0 2020-05-26 01:04:02 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-26 01:04:02 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 26 01:04:12.844: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5830 /api/v1/namespaces/watch-5830/configmaps/e2e-watch-test-configmap-b eaca7153-8542-4dff-a5de-ce6f7e90c205 7701701 0 2020-05-26 01:04:02 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-26 01:04:02 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 26 01:04:12.844: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5830 /api/v1/namespaces/watch-5830/configmaps/e2e-watch-test-configmap-b eaca7153-8542-4dff-a5de-ce6f7e90c205 7701701 0 2020-05-26 01:04:02 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-26 01:04:02 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 01:04:22.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5830" for this suite. 
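------------------------------
The Watchers test above drives label-selected watches and checks that every create, update, and delete surfaces as an ADDED/MODIFIED/DELETED event. A minimal single-watch sketch with client-go, using the same label the log shows (namespace and kubeconfig path are assumptions):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Watch configmaps carrying label A, like the test's first watcher.
	w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Each matching create/update/delete arrives here as one event,
	// corresponding to the "Got : ADDED/MODIFIED/DELETED" lines above.
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %T\n", ev.Type, ev.Object)
	}
}
------------------------------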
• [SLOW TEST:60.177 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":288,"completed":277,"skipped":4509,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 01:04:22.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-f8810043-7ea5-48d5-be6f-de3500944a1a STEP: Creating a pod to test consume secrets May 26 01:04:22.994: INFO: Waiting up to 5m0s for pod "pod-secrets-5d4bffdb-9a4d-49c9-9794-58c2160c1379" in namespace "secrets-7622" to be "Succeeded or Failed" May 26 01:04:22.997: INFO: Pod "pod-secrets-5d4bffdb-9a4d-49c9-9794-58c2160c1379": Phase="Pending", Reason="", readiness=false. Elapsed: 3.08216ms May 26 01:04:25.001: INFO: Pod "pod-secrets-5d4bffdb-9a4d-49c9-9794-58c2160c1379": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007463525s May 26 01:04:27.006: INFO: Pod "pod-secrets-5d4bffdb-9a4d-49c9-9794-58c2160c1379": Phase="Running", Reason="", readiness=true. Elapsed: 4.011934384s May 26 01:04:29.010: INFO: Pod "pod-secrets-5d4bffdb-9a4d-49c9-9794-58c2160c1379": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016291894s STEP: Saw pod success May 26 01:04:29.010: INFO: Pod "pod-secrets-5d4bffdb-9a4d-49c9-9794-58c2160c1379" satisfied condition "Succeeded or Failed" May 26 01:04:29.013: INFO: Trying to get logs from node latest-worker pod pod-secrets-5d4bffdb-9a4d-49c9-9794-58c2160c1379 container secret-volume-test: STEP: delete the pod May 26 01:04:29.034: INFO: Waiting for pod pod-secrets-5d4bffdb-9a4d-49c9-9794-58c2160c1379 to disappear May 26 01:04:29.039: INFO: Pod pod-secrets-5d4bffdb-9a4d-49c9-9794-58c2160c1379 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 01:04:29.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7622" for this suite. 
• [SLOW TEST:6.191 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":278,"skipped":4518,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 01:04:29.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-a233f275-11d8-49b9-9b79-08100df69de3 STEP: Creating a pod to test consume configMaps May 26 01:04:29.188: INFO: Waiting up to 5m0s for pod "pod-configmaps-5867c8f9-f8fa-49be-8972-94661e9b448b" in namespace "configmap-3419" to be "Succeeded or Failed" May 26 01:04:29.191: INFO: Pod "pod-configmaps-5867c8f9-f8fa-49be-8972-94661e9b448b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.062004ms May 26 01:04:31.195: INFO: Pod "pod-configmaps-5867c8f9-f8fa-49be-8972-94661e9b448b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006957677s May 26 01:04:33.199: INFO: Pod "pod-configmaps-5867c8f9-f8fa-49be-8972-94661e9b448b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011594708s STEP: Saw pod success May 26 01:04:33.199: INFO: Pod "pod-configmaps-5867c8f9-f8fa-49be-8972-94661e9b448b" satisfied condition "Succeeded or Failed" May 26 01:04:33.203: INFO: Trying to get logs from node latest-worker pod pod-configmaps-5867c8f9-f8fa-49be-8972-94661e9b448b container configmap-volume-test: STEP: delete the pod May 26 01:04:33.247: INFO: Waiting for pod pod-configmaps-5867c8f9-f8fa-49be-8972-94661e9b448b to disappear May 26 01:04:33.257: INFO: Pod pod-configmaps-5867c8f9-f8fa-49be-8972-94661e9b448b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 01:04:33.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3419" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":279,"skipped":4632,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 01:04:33.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 26 01:04:37.442: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 01:04:37.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7031" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":280,"skipped":4675,"failed":0} SS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 01:04:37.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 26 01:04:42.130: INFO: Successfully updated pod "pod-update-activedeadlineseconds-0ee0fff3-ed2d-456f-83f2-283718e08fe1" May 26 01:04:42.130: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-0ee0fff3-ed2d-456f-83f2-283718e08fe1" in namespace "pods-2879" to be "terminated due to deadline exceeded" May 26 01:04:42.159: INFO: Pod "pod-update-activedeadlineseconds-0ee0fff3-ed2d-456f-83f2-283718e08fe1": Phase="Running", Reason="", readiness=true. Elapsed: 29.014941ms May 26 01:04:44.185: INFO: Pod "pod-update-activedeadlineseconds-0ee0fff3-ed2d-456f-83f2-283718e08fe1": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.055730683s May 26 01:04:44.185: INFO: Pod "pod-update-activedeadlineseconds-0ee0fff3-ed2d-456f-83f2-283718e08fe1" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 01:04:44.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2879" for this suite. 
• [SLOW TEST:6.704 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":288,"completed":281,"skipped":4677,"failed":0} SSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 01:04:44.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-1128 STEP: creating service affinity-nodeport-transition in namespace services-1128 STEP: creating replication controller affinity-nodeport-transition in namespace services-1128 I0526 01:04:44.349025 7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-1128, replica count: 3 I0526 01:04:47.399490 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0526 01:04:50.399759 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 26 01:04:50.412: INFO: Creating new exec pod May 26 01:04:55.480: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1128 execpod-affinity5mx2h -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' May 26 01:04:55.708: INFO: stderr: "I0526 01:04:55.640675 3742 log.go:172] (0xc0009af3f0) (0xc000845f40) Create stream\nI0526 01:04:55.640746 3742 log.go:172] (0xc0009af3f0) (0xc000845f40) Stream added, broadcasting: 1\nI0526 01:04:55.644717 3742 log.go:172] (0xc0009af3f0) Reply frame received for 1\nI0526 01:04:55.644753 3742 log.go:172] (0xc0009af3f0) (0xc000838aa0) Create stream\nI0526 01:04:55.644765 3742 log.go:172] (0xc0009af3f0) (0xc000838aa0) Stream added, broadcasting: 3\nI0526 01:04:55.645914 3742 log.go:172] (0xc0009af3f0) Reply frame received for 3\nI0526 01:04:55.645948 3742 log.go:172] (0xc0009af3f0) (0xc00082a5a0) Create stream\nI0526 01:04:55.645959 3742 log.go:172] (0xc0009af3f0) (0xc00082a5a0) Stream added, broadcasting: 5\nI0526 01:04:55.646754 3742 log.go:172] (0xc0009af3f0) Reply frame received for 5\nI0526 01:04:55.698045 3742 log.go:172] (0xc0009af3f0) Data frame received for 5\nI0526 01:04:55.698081 3742 log.go:172] (0xc00082a5a0) (5) Data frame handling\nI0526 01:04:55.698106 3742 log.go:172] (0xc00082a5a0) (5) Data 
frame sent\nI0526 01:04:55.698120 3742 log.go:172] (0xc0009af3f0) Data frame received for 5\nI0526 01:04:55.698136 3742 log.go:172] (0xc00082a5a0) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0526 01:04:55.698574 3742 log.go:172] (0xc0009af3f0) Data frame received for 3\nI0526 01:04:55.698601 3742 log.go:172] (0xc000838aa0) (3) Data frame handling\nI0526 01:04:55.700707 3742 log.go:172] (0xc0009af3f0) Data frame received for 1\nI0526 01:04:55.700737 3742 log.go:172] (0xc000845f40) (1) Data frame handling\nI0526 01:04:55.700768 3742 log.go:172] (0xc000845f40) (1) Data frame sent\nI0526 01:04:55.700793 3742 log.go:172] (0xc0009af3f0) (0xc000845f40) Stream removed, broadcasting: 1\nI0526 01:04:55.700817 3742 log.go:172] (0xc0009af3f0) Go away received\nI0526 01:04:55.701303 3742 log.go:172] (0xc0009af3f0) (0xc000845f40) Stream removed, broadcasting: 1\nI0526 01:04:55.701321 3742 log.go:172] (0xc0009af3f0) (0xc000838aa0) Stream removed, broadcasting: 3\nI0526 01:04:55.701339 3742 log.go:172] (0xc0009af3f0) (0xc00082a5a0) Stream removed, broadcasting: 5\n" May 26 01:04:55.708: INFO: stdout: "" May 26 01:04:55.709: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1128 execpod-affinity5mx2h -- /bin/sh -x -c nc -zv -t -w 2 10.104.210.205 80' May 26 01:04:55.910: INFO: stderr: "I0526 01:04:55.834842 3763 log.go:172] (0xc000a69810) (0xc000932460) Create stream\nI0526 01:04:55.834909 3763 log.go:172] (0xc000a69810) (0xc000932460) Stream added, broadcasting: 1\nI0526 01:04:55.840078 3763 log.go:172] (0xc000a69810) Reply frame received for 1\nI0526 01:04:55.840238 3763 log.go:172] (0xc000a69810) (0xc0004e4320) Create stream\nI0526 01:04:55.840259 3763 log.go:172] (0xc000a69810) (0xc0004e4320) Stream added, broadcasting: 3\nI0526 01:04:55.841445 3763 log.go:172] (0xc000a69810) Reply frame received for 3\nI0526 01:04:55.841490 3763 log.go:172] (0xc000a69810) (0xc000400e60) Create stream\nI0526 01:04:55.841510 3763 log.go:172] (0xc000a69810) (0xc000400e60) Stream added, broadcasting: 5\nI0526 01:04:55.842498 3763 log.go:172] (0xc000a69810) Reply frame received for 5\nI0526 01:04:55.901709 3763 log.go:172] (0xc000a69810) Data frame received for 5\nI0526 01:04:55.901745 3763 log.go:172] (0xc000400e60) (5) Data frame handling\nI0526 01:04:55.901769 3763 log.go:172] (0xc000400e60) (5) Data frame sent\nI0526 01:04:55.901798 3763 log.go:172] (0xc000a69810) Data frame received for 5\nI0526 01:04:55.901823 3763 log.go:172] (0xc000400e60) (5) Data frame handling\n+ nc -zv -t -w 2 10.104.210.205 80\nConnection to 10.104.210.205 80 port [tcp/http] succeeded!\nI0526 01:04:55.901844 3763 log.go:172] (0xc000a69810) Data frame received for 3\nI0526 01:04:55.901885 3763 log.go:172] (0xc0004e4320) (3) Data frame handling\nI0526 01:04:55.903533 3763 log.go:172] (0xc000a69810) Data frame received for 1\nI0526 01:04:55.903562 3763 log.go:172] (0xc000932460) (1) Data frame handling\nI0526 01:04:55.903577 3763 log.go:172] (0xc000932460) (1) Data frame sent\nI0526 01:04:55.903606 3763 log.go:172] (0xc000a69810) (0xc000932460) Stream removed, broadcasting: 1\nI0526 01:04:55.903639 3763 log.go:172] (0xc000a69810) Go away received\nI0526 01:04:55.903895 3763 log.go:172] (0xc000a69810) (0xc000932460) Stream removed, broadcasting: 1\nI0526 01:04:55.903910 3763 log.go:172] (0xc000a69810) (0xc0004e4320) Stream removed, broadcasting: 3\nI0526 
01:04:55.903924 3763 log.go:172] (0xc000a69810) (0xc000400e60) Stream removed, broadcasting: 5\n" May 26 01:04:55.910: INFO: stdout: "" May 26 01:04:55.910: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1128 execpod-affinity5mx2h -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31655' May 26 01:04:56.126: INFO: stderr: "I0526 01:04:56.046911 3784 log.go:172] (0xc00003a0b0) (0xc000425a40) Create stream\nI0526 01:04:56.046985 3784 log.go:172] (0xc00003a0b0) (0xc000425a40) Stream added, broadcasting: 1\nI0526 01:04:56.048655 3784 log.go:172] (0xc00003a0b0) Reply frame received for 1\nI0526 01:04:56.048712 3784 log.go:172] (0xc00003a0b0) (0xc00016b540) Create stream\nI0526 01:04:56.048732 3784 log.go:172] (0xc00003a0b0) (0xc00016b540) Stream added, broadcasting: 3\nI0526 01:04:56.049864 3784 log.go:172] (0xc00003a0b0) Reply frame received for 3\nI0526 01:04:56.049899 3784 log.go:172] (0xc00003a0b0) (0xc000648140) Create stream\nI0526 01:04:56.049915 3784 log.go:172] (0xc00003a0b0) (0xc000648140) Stream added, broadcasting: 5\nI0526 01:04:56.050814 3784 log.go:172] (0xc00003a0b0) Reply frame received for 5\nI0526 01:04:56.118235 3784 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0526 01:04:56.118291 3784 log.go:172] (0xc00016b540) (3) Data frame handling\nI0526 01:04:56.118324 3784 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0526 01:04:56.118346 3784 log.go:172] (0xc000648140) (5) Data frame handling\nI0526 01:04:56.118372 3784 log.go:172] (0xc000648140) (5) Data frame sent\nI0526 01:04:56.118396 3784 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0526 01:04:56.118416 3784 log.go:172] (0xc000648140) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31655\nConnection to 172.17.0.13 31655 port [tcp/31655] succeeded!\nI0526 01:04:56.119940 3784 log.go:172] (0xc00003a0b0) Data frame received for 1\nI0526 01:04:56.119956 3784 log.go:172] (0xc000425a40) (1) Data frame handling\nI0526 01:04:56.119978 3784 log.go:172] (0xc000425a40) (1) Data frame sent\nI0526 01:04:56.120086 3784 log.go:172] (0xc00003a0b0) (0xc000425a40) Stream removed, broadcasting: 1\nI0526 01:04:56.120234 3784 log.go:172] (0xc00003a0b0) Go away received\nI0526 01:04:56.120366 3784 log.go:172] (0xc00003a0b0) (0xc000425a40) Stream removed, broadcasting: 1\nI0526 01:04:56.120381 3784 log.go:172] (0xc00003a0b0) (0xc00016b540) Stream removed, broadcasting: 3\nI0526 01:04:56.120392 3784 log.go:172] (0xc00003a0b0) (0xc000648140) Stream removed, broadcasting: 5\n" May 26 01:04:56.126: INFO: stdout: "" May 26 01:04:56.126: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1128 execpod-affinity5mx2h -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31655' May 26 01:04:56.336: INFO: stderr: "I0526 01:04:56.263426 3804 log.go:172] (0xc000a9d080) (0xc000a36320) Create stream\nI0526 01:04:56.263483 3804 log.go:172] (0xc000a9d080) (0xc000a36320) Stream added, broadcasting: 1\nI0526 01:04:56.267577 3804 log.go:172] (0xc000a9d080) Reply frame received for 1\nI0526 01:04:56.267611 3804 log.go:172] (0xc000a9d080) (0xc00065e500) Create stream\nI0526 01:04:56.267619 3804 log.go:172] (0xc000a9d080) (0xc00065e500) Stream added, broadcasting: 3\nI0526 01:04:56.268375 3804 log.go:172] (0xc000a9d080) Reply frame received for 3\nI0526 01:04:56.268412 3804 log.go:172] (0xc000a9d080) (0xc00065edc0) Create stream\nI0526 01:04:56.268420 3804 log.go:172] (0xc000a9d080) 
(0xc00065edc0) Stream added, broadcasting: 5\nI0526 01:04:56.269259 3804 log.go:172] (0xc000a9d080) Reply frame received for 5\nI0526 01:04:56.326250 3804 log.go:172] (0xc000a9d080) Data frame received for 5\nI0526 01:04:56.326294 3804 log.go:172] (0xc00065edc0) (5) Data frame handling\nI0526 01:04:56.326320 3804 log.go:172] (0xc00065edc0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 31655\nI0526 01:04:56.327477 3804 log.go:172] (0xc000a9d080) Data frame received for 3\nI0526 01:04:56.327537 3804 log.go:172] (0xc00065e500) (3) Data frame handling\nI0526 01:04:56.327563 3804 log.go:172] (0xc000a9d080) Data frame received for 5\nI0526 01:04:56.327577 3804 log.go:172] (0xc00065edc0) (5) Data frame handling\nI0526 01:04:56.327587 3804 log.go:172] (0xc00065edc0) (5) Data frame sent\nI0526 01:04:56.327599 3804 log.go:172] (0xc000a9d080) Data frame received for 5\nI0526 01:04:56.327614 3804 log.go:172] (0xc00065edc0) (5) Data frame handling\nConnection to 172.17.0.12 31655 port [tcp/31655] succeeded!\nI0526 01:04:56.329049 3804 log.go:172] (0xc000a9d080) Data frame received for 1\nI0526 01:04:56.329295 3804 log.go:172] (0xc000a36320) (1) Data frame handling\nI0526 01:04:56.329323 3804 log.go:172] (0xc000a36320) (1) Data frame sent\nI0526 01:04:56.329350 3804 log.go:172] (0xc000a9d080) (0xc000a36320) Stream removed, broadcasting: 1\nI0526 01:04:56.329379 3804 log.go:172] (0xc000a9d080) Go away received\nI0526 01:04:56.329910 3804 log.go:172] (0xc000a9d080) (0xc000a36320) Stream removed, broadcasting: 1\nI0526 01:04:56.329937 3804 log.go:172] (0xc000a9d080) (0xc00065e500) Stream removed, broadcasting: 3\nI0526 01:04:56.329949 3804 log.go:172] (0xc000a9d080) (0xc00065edc0) Stream removed, broadcasting: 5\n" May 26 01:04:56.336: INFO: stdout: "" May 26 01:04:56.343: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1128 execpod-affinity5mx2h -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31655/ ; done' May 26 01:04:56.648: INFO: stderr: "I0526 01:04:56.479020 3824 log.go:172] (0xc000a46c60) (0xc00035bb80) Create stream\nI0526 01:04:56.479085 3824 log.go:172] (0xc000a46c60) (0xc00035bb80) Stream added, broadcasting: 1\nI0526 01:04:56.482249 3824 log.go:172] (0xc000a46c60) Reply frame received for 1\nI0526 01:04:56.482318 3824 log.go:172] (0xc000a46c60) (0xc00023a5a0) Create stream\nI0526 01:04:56.482354 3824 log.go:172] (0xc000a46c60) (0xc00023a5a0) Stream added, broadcasting: 3\nI0526 01:04:56.483311 3824 log.go:172] (0xc000a46c60) Reply frame received for 3\nI0526 01:04:56.483343 3824 log.go:172] (0xc000a46c60) (0xc00023a820) Create stream\nI0526 01:04:56.483353 3824 log.go:172] (0xc000a46c60) (0xc00023a820) Stream added, broadcasting: 5\nI0526 01:04:56.484276 3824 log.go:172] (0xc000a46c60) Reply frame received for 5\nI0526 01:04:56.539878 3824 log.go:172] (0xc000a46c60) Data frame received for 5\nI0526 01:04:56.539914 3824 log.go:172] (0xc00023a820) (5) Data frame handling\nI0526 01:04:56.539926 3824 log.go:172] (0xc00023a820) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31655/\nI0526 01:04:56.539942 3824 log.go:172] (0xc000a46c60) Data frame received for 3\nI0526 01:04:56.539960 3824 log.go:172] (0xc00023a5a0) (3) Data frame handling\nI0526 01:04:56.539973 3824 log.go:172] (0xc00023a5a0) (3) Data frame sent\nI0526 01:04:56.570111 3824 log.go:172] (0xc000a46c60) Data frame received for 3\nI0526 01:04:56.570147 3824 
log.go:172] (0xc00023a5a0) (3) Data frame handling\nI0526 01:04:56.570174 3824 log.go:172] (0xc00023a5a0) (3) Data frame sent\nI0526 01:04:56.570209 3824 log.go:172] (0xc000a46c60) Data frame received for 3\nI0526 01:04:56.570249 3824 log.go:172] (0xc000a46c60) Data frame received for 5\nI0526 01:04:56.570279 3824 log.go:172] (0xc00023a820) (5) Data frame handling\nI0526 01:04:56.570302 3824 log.go:172] (0xc00023a820) (5) Data frame sent\n+ echo\nI0526 01:04:56.570326 3824 log.go:172] (0xc00023a5a0) (3) Data frame handling\nI0526 01:04:56.570353 3824 log.go:172] (0xc00023a5a0) (3) Data frame sent\nI0526 01:04:56.570376 3824 log.go:172] (0xc000a46c60) Data frame received for 5\nI0526 01:04:56.570387 3824 log.go:172] (0xc00023a820) (5) Data frame handling\nI0526 01:04:56.570404 3824 log.go:172] (0xc00023a820) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31655/\nI0526 01:04:56.573988 3824 log.go:172] (0xc000a46c60) Data frame received for 3\nI0526 01:04:56.574017 3824 log.go:172] (0xc00023a5a0) (3) Data frame handling\nI0526 01:04:56.574050 3824 log.go:172] (0xc00023a5a0) (3) Data frame sent\nI0526 01:04:56.575045 3824 log.go:172] (0xc000a46c60) Data frame received for 3\nI0526 01:04:56.575066 3824 log.go:172] (0xc00023a5a0) (3) Data frame handling\nI0526 01:04:56.575086 3824 log.go:172] (0xc000a46c60) Data frame received for 5\nI0526 01:04:56.575125 3824 log.go:172] (0xc00023a820) (5) Data frame handling\nI0526 01:04:56.575151 3824 log.go:172] (0xc00023a820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31655/\nI0526 01:04:56.575182 3824 log.go:172] (0xc00023a5a0) (3) Data frame sent\nI0526 01:04:56.578642 3824 log.go:172] (0xc000a46c60) Data frame received for 3\nI0526 01:04:56.578661 3824 log.go:172] (0xc00023a5a0) (3) Data frame handling\nI0526 01:04:56.578679 3824 log.go:172] (0xc00023a5a0) (3) Data frame sent\nI0526 01:04:56.578943 3824 log.go:172] (0xc000a46c60) Data frame received for 3\nI0526 01:04:56.578964 3824 log.go:172] (0xc00023a5a0) (3) Data frame handling\nI0526 01:04:56.578980 3824 log.go:172] (0xc00023a5a0) (3) Data frame sent\nI0526 01:04:56.578998 3824 log.go:172] (0xc000a46c60) Data frame received for 5\nI0526 01:04:56.579016 3824 log.go:172] (0xc00023a820) (5) Data frame handling\nI0526 01:04:56.579032 3824 log.go:172] (0xc00023a820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31655/\nI0526 01:04:56.586155 3824 log.go:172] (0xc000a46c60) Data frame received for 3\nI0526 01:04:56.586201 3824 log.go:172] (0xc00023a5a0) (3) Data frame handling\nI0526 01:04:56.586325 3824 log.go:172] (0xc00023a5a0) (3) Data frame sent\nI0526 01:04:56.586729 3824 log.go:172] (0xc000a46c60) Data frame received for 5\nI0526 01:04:56.586748 3824 log.go:172] (0xc00023a820) (5) Data frame handling\nI0526 01:04:56.586757 3824 log.go:172] (0xc00023a820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31655/\nI0526 01:04:56.586786 3824 log.go:172] (0xc000a46c60) Data frame received for 3\nI0526 01:04:56.586809 3824 log.go:172] (0xc00023a5a0) (3) Data frame handling\nI0526 01:04:56.586829 3824 log.go:172] (0xc00023a5a0) (3) Data frame sent\nI0526 01:04:56.590081 3824 log.go:172] (0xc000a46c60) Data frame received for 3\nI0526 01:04:56.590122 3824 log.go:172] (0xc00023a5a0) (3) Data frame handling\nI0526 01:04:56.590149 3824 log.go:172] (0xc00023a5a0) (3) Data frame sent\nI0526 01:04:56.590902 3824 log.go:172] (0xc000a46c60) Data frame received for 3\nI0526 
01:04:56.590932 3824 log.go:172] (0xc00023a5a0) (3) Data frame handling\nI0526 01:04:56.590950 3824 log.go:172] (0xc00023a5a0) (3) Data frame sent\nI0526 01:04:56.590974 3824 log.go:172] (0xc000a46c60) Data frame received for 5\nI0526 01:04:56.590989 3824 log.go:172] (0xc00023a820) (5) Data frame handling\nI0526 01:04:56.591005 3824 log.go:172] (0xc00023a820) (5) Data frame sent\nI0526 01:04:56.591040 3824 log.go:172] (0xc000a46c60) Data frame received for 5\nI0526 01:04:56.591055 3824 log.go:172] (0xc00023a820) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31655/\nI0526 01:04:56.591112 3824 log.go:172] (0xc00023a820) (5) Data frame sent\nI0526 01:04:56.595594 3824 log.go:172] (0xc000a46c60) Data frame received for 3\nI0526 01:04:56.595611 3824 log.go:172] (0xc00023a5a0) (3) Data frame handling\nI0526 01:04:56.595623 3824 log.go:172] (0xc00023a5a0) (3) Data frame sent\nI0526 01:04:56.595857 3824 log.go:172] (0xc000a46c60) Data frame received for 5\nI0526 01:04:56.595870 3824 log.go:172] (0xc00023a820) (5) Data frame handling\nI0526 01:04:56.595877 3824 log.go:172] (0xc00023a820) (5) Data frame sent\nI0526 01:04:56.595883 3824 log.go:172] (0xc000a46c60) Data frame received for 5\nI0526 01:04:56.595890 3824 log.go:172] (0xc00023a820) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31655/\nI0526 01:04:56.595910 3824 log.go:172] (0xc000a46c60) Data frame received for 3\nI0526 01:04:56.595953 3824 log.go:172] (0xc00023a5a0) (3) Data frame handling\nI0526 01:04:56.595967 3824 log.go:172] (0xc00023a5a0) (3) Data frame sent\nI0526 01:04:56.595989 3824 log.go:172] (0xc00023a820) (5) Data frame sent\nI0526 01:04:56.599978 3824 log.go:172] (0xc000a46c60) Data frame received for 3\nI0526 01:04:56.599998 3824 log.go:172] (0xc00023a5a0) (3) Data frame handling\nI0526 01:04:56.600011 3824 log.go:172] (0xc00023a5a0) (3) Data frame sent\nI0526 01:04:56.600393 3824 log.go:172] (0xc000a46c60) Data frame received for 3\nI0526 01:04:56.600403 3824 log.go:172] (0xc00023a5a0) (3) Data frame handling\nI0526 01:04:56.600412 3824 log.go:172] (0xc00023a5a0) (3) Data frame sent\nI0526 01:04:56.600561 3824 log.go:172] (0xc000a46c60) Data frame received for 5\nI0526 01:04:56.600577 3824 log.go:172] (0xc00023a820) (5) Data frame handling\nI0526 01:04:56.600588 3824 log.go:172] (0xc00023a820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31655/\nI0526 01:04:56.604345 3824 log.go:172] (0xc000a46c60) Data frame received for 3\nI0526 01:04:56.604373 3824 log.go:172] (0xc00023a5a0) (3) Data frame handling\nI0526 01:04:56.604402 3824 log.go:172] (0xc00023a5a0) (3) Data frame sent\nI0526 01:04:56.604696 3824 log.go:172] (0xc000a46c60) Data frame received for 3\nI0526 01:04:56.604708 3824 log.go:172] (0xc00023a5a0) (3) Data frame handling\nI0526 01:04:56.604714 3824 log.go:172] (0xc00023a5a0) (3) Data frame sent\nI0526 01:04:56.604724 3824 log.go:172] (0xc000a46c60) Data frame received for 5\nI0526 01:04:56.604727 3824 log.go:172] (0xc00023a820) (5) Data frame handling\nI0526 01:04:56.604732 3824 log.go:172] (0xc00023a820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31655/\nI0526 01:04:56.608759 3824 log.go:172] (0xc000a46c60) Data frame received for 3\nI0526 01:04:56.608771 3824 log.go:172] (0xc00023a5a0) (3) Data frame handling\nI0526 01:04:56.608777 3824 log.go:172] (0xc00023a5a0) (3) Data frame sent\nI0526 01:04:56.609323 3824 log.go:172] (0xc000a46c60) Data frame received for 
3\nI0526 01:04:56.609332 3824 log.go:172] (0xc00023a5a0) (3) Data frame handling\nI0526 01:04:56.609337 3824 log.go:172] (0xc00023a5a0) (3) Data frame sent\nI0526 01:04:56.609347 3824 log.go:172] (0xc000a46c60) Data frame received for 5\nI0526 01:04:56.609351 3824 log.go:172] (0xc00023a820) (5) Data frame handling\nI0526 01:04:56.609356 3824 log.go:172] (0xc00023a820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31655/\nI0526 01:04:56.614200 3824 log.go:172] (0xc000a46c60) Data frame received for 3\nI0526 01:04:56.614217 3824 log.go:172] (0xc00023a5a0) (3) Data frame handling\nI0526 01:04:56.614230 3824 log.go:172] (0xc00023a5a0) (3) Data frame sent\nI0526 01:04:56.614728 3824 log.go:172] (0xc000a46c60) Data frame received for 3\nI0526 01:04:56.614749 3824 log.go:172] (0xc00023a5a0) (3) Data frame handling\nI0526 01:04:56.614761 3824 log.go:172] (0xc00023a5a0) (3) Data frame sent\nI0526 01:04:56.614795 3824 log.go:172] (0xc000a46c60) Data frame received for 5\nI0526 01:04:56.614820 3824 log.go:172] (0xc00023a820) (5) Data frame handling\nI0526 01:04:56.614840 3824 log.go:172] (0xc00023a820) (5) Data frame sent\nI0526 01:04:56.614861 3824 log.go:172] (0xc000a46c60) Data frame received for 5\nI0526 01:04:56.614873 3824 log.go:172] (0xc00023a820) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31655/\nI0526 01:04:56.614892 3824 log.go:172] (0xc00023a820) (5) Data frame sent\nI0526 01:04:56.618863 3824 log.go:172] (0xc000a46c60) Data frame received for 3\nI0526 01:04:56.618875 3824 log.go:172] (0xc00023a5a0) (3) Data frame handling\nI0526 01:04:56.618881 3824 log.go:172] (0xc00023a5a0) (3) Data frame sent\nI0526 01:04:56.619479 3824 log.go:172] (0xc000a46c60) Data frame received for 3\nI0526 01:04:56.619500 3824 log.go:172] (0xc000a46c60) Data frame received for 5\nI0526 01:04:56.619520 3824 log.go:172] (0xc00023a820) (5) Data frame handling\nI0526 01:04:56.619537 3824 log.go:172] (0xc00023a820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31655/\nI0526 01:04:56.619552 3824 log.go:172] (0xc00023a5a0) (3) Data frame handling\nI0526 01:04:56.619564 3824 log.go:172] (0xc00023a5a0) (3) Data frame sent\nI0526 01:04:56.623737 3824 log.go:172] (0xc000a46c60) Data frame received for 3\nI0526 01:04:56.623754 3824 log.go:172] (0xc00023a5a0) (3) Data frame handling\nI0526 01:04:56.623795 3824 log.go:172] (0xc00023a5a0) (3) Data frame sent\nI0526 01:04:56.624145 3824 log.go:172] (0xc000a46c60) Data frame received for 3\nI0526 01:04:56.624167 3824 log.go:172] (0xc00023a5a0) (3) Data frame handling\nI0526 01:04:56.624179 3824 log.go:172] (0xc00023a5a0) (3) Data frame sent\nI0526 01:04:56.624217 3824 log.go:172] (0xc000a46c60) Data frame received for 5\nI0526 01:04:56.624253 3824 log.go:172] (0xc00023a820) (5) Data frame handling\nI0526 01:04:56.624290 3824 log.go:172] (0xc00023a820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31655/\nI0526 01:04:56.627639 3824 log.go:172] (0xc000a46c60) Data frame received for 3\nI0526 01:04:56.627659 3824 log.go:172] (0xc00023a5a0) (3) Data frame handling\nI0526 01:04:56.627680 3824 log.go:172] (0xc00023a5a0) (3) Data frame sent\nI0526 01:04:56.628007 3824 log.go:172] (0xc000a46c60) Data frame received for 3\nI0526 01:04:56.628028 3824 log.go:172] (0xc00023a5a0) (3) Data frame handling\nI0526 01:04:56.628040 3824 log.go:172] (0xc00023a5a0) (3) Data frame sent\nI0526 01:04:56.628064 3824 log.go:172] (0xc000a46c60) Data frame 
received for 5\nI0526 01:04:56.628099 3824 log.go:172] (0xc00023a820) (5) Data frame handling\nI0526 01:04:56.628131 3824 log.go:172] (0xc00023a820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31655/\nI0526 01:04:56.632552 3824 log.go:172] (0xc000a46c60) Data frame received for 3\nI0526 01:04:56.632576 3824 log.go:172] (0xc00023a5a0) (3) Data frame handling\nI0526 01:04:56.632594 3824 log.go:172] (0xc00023a5a0) (3) Data frame sent\nI0526 01:04:56.633097 3824 log.go:172] (0xc000a46c60) Data frame received for 3\nI0526 01:04:56.633365 3824 log.go:172] (0xc00023a5a0) (3) Data frame handling\nI0526 01:04:56.633391 3824 log.go:172] (0xc00023a5a0) (3) Data frame sent\nI0526 01:04:56.633408 3824 log.go:172] (0xc000a46c60) Data frame received for 5\nI0526 01:04:56.633419 3824 log.go:172] (0xc00023a820) (5) Data frame handling\nI0526 01:04:56.633435 3824 log.go:172] (0xc00023a820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31655/\nI0526 01:04:56.636621 3824 log.go:172] (0xc000a46c60) Data frame received for 3\nI0526 01:04:56.636649 3824 log.go:172] (0xc00023a5a0) (3) Data frame handling\nI0526 01:04:56.636675 3824 log.go:172] (0xc00023a5a0) (3) Data frame sent\nI0526 01:04:56.637041 3824 log.go:172] (0xc000a46c60) Data frame received for 5\nI0526 01:04:56.637055 3824 log.go:172] (0xc00023a820) (5) Data frame handling\nI0526 01:04:56.637071 3824 log.go:172] (0xc00023a820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31655/\nI0526 01:04:56.637375 3824 log.go:172] (0xc000a46c60) Data frame received for 3\nI0526 01:04:56.637409 3824 log.go:172] (0xc00023a5a0) (3) Data frame handling\nI0526 01:04:56.637439 3824 log.go:172] (0xc00023a5a0) (3) Data frame sent\nI0526 01:04:56.640491 3824 log.go:172] (0xc000a46c60) Data frame received for 3\nI0526 01:04:56.640504 3824 log.go:172] (0xc00023a5a0) (3) Data frame handling\nI0526 01:04:56.640512 3824 log.go:172] (0xc00023a5a0) (3) Data frame sent\nI0526 01:04:56.641008 3824 log.go:172] (0xc000a46c60) Data frame received for 5\nI0526 01:04:56.641039 3824 log.go:172] (0xc00023a820) (5) Data frame handling\nI0526 01:04:56.641094 3824 log.go:172] (0xc000a46c60) Data frame received for 3\nI0526 01:04:56.641285 3824 log.go:172] (0xc00023a5a0) (3) Data frame handling\nI0526 01:04:56.643219 3824 log.go:172] (0xc000a46c60) Data frame received for 1\nI0526 01:04:56.643260 3824 log.go:172] (0xc00035bb80) (1) Data frame handling\nI0526 01:04:56.643286 3824 log.go:172] (0xc00035bb80) (1) Data frame sent\nI0526 01:04:56.643308 3824 log.go:172] (0xc000a46c60) (0xc00035bb80) Stream removed, broadcasting: 1\nI0526 01:04:56.643328 3824 log.go:172] (0xc000a46c60) Go away received\nI0526 01:04:56.643689 3824 log.go:172] (0xc000a46c60) (0xc00035bb80) Stream removed, broadcasting: 1\nI0526 01:04:56.643715 3824 log.go:172] (0xc000a46c60) (0xc00023a5a0) Stream removed, broadcasting: 3\nI0526 01:04:56.643727 3824 log.go:172] (0xc000a46c60) (0xc00023a820) Stream removed, broadcasting: 5\n" May 26 01:04:56.648: INFO: stdout: 
"\naffinity-nodeport-transition-jmpps\naffinity-nodeport-transition-n6vkb\naffinity-nodeport-transition-jmpps\naffinity-nodeport-transition-jmpps\naffinity-nodeport-transition-n6vkb\naffinity-nodeport-transition-n6vkb\naffinity-nodeport-transition-n6vkb\naffinity-nodeport-transition-2mfh7\naffinity-nodeport-transition-jmpps\naffinity-nodeport-transition-2mfh7\naffinity-nodeport-transition-2mfh7\naffinity-nodeport-transition-n6vkb\naffinity-nodeport-transition-2mfh7\naffinity-nodeport-transition-jmpps\naffinity-nodeport-transition-2mfh7\naffinity-nodeport-transition-2mfh7" May 26 01:04:56.648: INFO: Received response from host: May 26 01:04:56.648: INFO: Received response from host: affinity-nodeport-transition-jmpps May 26 01:04:56.648: INFO: Received response from host: affinity-nodeport-transition-n6vkb May 26 01:04:56.648: INFO: Received response from host: affinity-nodeport-transition-jmpps May 26 01:04:56.648: INFO: Received response from host: affinity-nodeport-transition-jmpps May 26 01:04:56.648: INFO: Received response from host: affinity-nodeport-transition-n6vkb May 26 01:04:56.648: INFO: Received response from host: affinity-nodeport-transition-n6vkb May 26 01:04:56.648: INFO: Received response from host: affinity-nodeport-transition-n6vkb May 26 01:04:56.648: INFO: Received response from host: affinity-nodeport-transition-2mfh7 May 26 01:04:56.648: INFO: Received response from host: affinity-nodeport-transition-jmpps May 26 01:04:56.648: INFO: Received response from host: affinity-nodeport-transition-2mfh7 May 26 01:04:56.648: INFO: Received response from host: affinity-nodeport-transition-2mfh7 May 26 01:04:56.648: INFO: Received response from host: affinity-nodeport-transition-n6vkb May 26 01:04:56.648: INFO: Received response from host: affinity-nodeport-transition-2mfh7 May 26 01:04:56.648: INFO: Received response from host: affinity-nodeport-transition-jmpps May 26 01:04:56.648: INFO: Received response from host: affinity-nodeport-transition-2mfh7 May 26 01:04:56.649: INFO: Received response from host: affinity-nodeport-transition-2mfh7 May 26 01:04:56.657: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1128 execpod-affinity5mx2h -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31655/ ; done' May 26 01:04:56.977: INFO: stderr: "I0526 01:04:56.813618 3839 log.go:172] (0xc0009686e0) (0xc0006c6f00) Create stream\nI0526 01:04:56.813683 3839 log.go:172] (0xc0009686e0) (0xc0006c6f00) Stream added, broadcasting: 1\nI0526 01:04:56.816510 3839 log.go:172] (0xc0009686e0) Reply frame received for 1\nI0526 01:04:56.816553 3839 log.go:172] (0xc0009686e0) (0xc000544d20) Create stream\nI0526 01:04:56.816568 3839 log.go:172] (0xc0009686e0) (0xc000544d20) Stream added, broadcasting: 3\nI0526 01:04:56.817892 3839 log.go:172] (0xc0009686e0) Reply frame received for 3\nI0526 01:04:56.817942 3839 log.go:172] (0xc0009686e0) (0xc00013b680) Create stream\nI0526 01:04:56.817956 3839 log.go:172] (0xc0009686e0) (0xc00013b680) Stream added, broadcasting: 5\nI0526 01:04:56.818929 3839 log.go:172] (0xc0009686e0) Reply frame received for 5\nI0526 01:04:56.883014 3839 log.go:172] (0xc0009686e0) Data frame received for 3\nI0526 01:04:56.883068 3839 log.go:172] (0xc000544d20) (3) Data frame handling\nI0526 01:04:56.883097 3839 log.go:172] (0xc000544d20) (3) Data frame sent\nI0526 01:04:56.883163 3839 log.go:172] (0xc0009686e0) Data frame received for 5\nI0526 
01:04:56.883205 3839 log.go:172] (0xc00013b680) (5) Data frame handling\nI0526 01:04:56.883227 3839 log.go:172] (0xc00013b680) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31655/\nI0526 01:04:56.890281 3839 log.go:172] (0xc0009686e0) Data frame received for 3\nI0526 01:04:56.890303 3839 log.go:172] (0xc000544d20) (3) Data frame handling\nI0526 01:04:56.890321 3839 log.go:172] (0xc000544d20) (3) Data frame sent\nI0526 01:04:56.890619 3839 log.go:172] (0xc0009686e0) Data frame received for 3\nI0526 01:04:56.890634 3839 log.go:172] (0xc000544d20) (3) Data frame handling\nI0526 01:04:56.890641 3839 log.go:172] (0xc000544d20) (3) Data frame sent\nI0526 01:04:56.890660 3839 log.go:172] (0xc0009686e0) Data frame received for 5\nI0526 01:04:56.890691 3839 log.go:172] (0xc00013b680) (5) Data frame handling\nI0526 01:04:56.890716 3839 log.go:172] (0xc00013b680) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31655/\nI0526 01:04:56.895795 3839 log.go:172] (0xc0009686e0) Data frame received for 3\nI0526 01:04:56.895808 3839 log.go:172] (0xc000544d20) (3) Data frame handling\nI0526 01:04:56.895814 3839 log.go:172] (0xc000544d20) (3) Data frame sent\nI0526 01:04:56.896369 3839 log.go:172] (0xc0009686e0) Data frame received for 3\nI0526 01:04:56.896390 3839 log.go:172] (0xc0009686e0) Data frame received for 5\nI0526 01:04:56.896413 3839 log.go:172] (0xc00013b680) (5) Data frame handling\nI0526 01:04:56.896424 3839 log.go:172] (0xc00013b680) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31655/\nI0526 01:04:56.896449 3839 log.go:172] (0xc000544d20) (3) Data frame handling\nI0526 01:04:56.896457 3839 log.go:172] (0xc000544d20) (3) Data frame sent\nI0526 01:04:56.899868 3839 log.go:172] (0xc0009686e0) Data frame received for 3\nI0526 01:04:56.899903 3839 log.go:172] (0xc000544d20) (3) Data frame handling\nI0526 01:04:56.899933 3839 log.go:172] (0xc000544d20) (3) Data frame sent\nI0526 01:04:56.900184 3839 log.go:172] (0xc0009686e0) Data frame received for 3\nI0526 01:04:56.900211 3839 log.go:172] (0xc0009686e0) Data frame received for 5\nI0526 01:04:56.900244 3839 log.go:172] (0xc00013b680) (5) Data frame handling\nI0526 01:04:56.900265 3839 log.go:172] (0xc00013b680) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31655/\nI0526 01:04:56.900292 3839 log.go:172] (0xc000544d20) (3) Data frame handling\nI0526 01:04:56.900320 3839 log.go:172] (0xc000544d20) (3) Data frame sent\nI0526 01:04:56.904793 3839 log.go:172] (0xc0009686e0) Data frame received for 3\nI0526 01:04:56.904822 3839 log.go:172] (0xc000544d20) (3) Data frame handling\nI0526 01:04:56.904846 3839 log.go:172] (0xc000544d20) (3) Data frame sent\nI0526 01:04:56.905345 3839 log.go:172] (0xc0009686e0) Data frame received for 5\nI0526 01:04:56.905377 3839 log.go:172] (0xc00013b680) (5) Data frame handling\nI0526 01:04:56.905390 3839 log.go:172] (0xc00013b680) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31655/\nI0526 01:04:56.905412 3839 log.go:172] (0xc0009686e0) Data frame received for 3\nI0526 01:04:56.905426 3839 log.go:172] (0xc000544d20) (3) Data frame handling\nI0526 01:04:56.905444 3839 log.go:172] (0xc000544d20) (3) Data frame sent\nI0526 01:04:56.909613 3839 log.go:172] (0xc0009686e0) Data frame received for 3\nI0526 01:04:56.909627 3839 log.go:172] (0xc000544d20) (3) Data frame handling\nI0526 01:04:56.909637 3839 log.go:172] (0xc000544d20) (3) Data frame 
sent\nI0526 01:04:56.910140 3839 log.go:172] (0xc0009686e0) Data frame received for 5\nI0526 01:04:56.910160 3839 log.go:172] (0xc00013b680) (5) Data frame handling\nI0526 01:04:56.910168 3839 log.go:172] (0xc00013b680) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31655/\nI0526 01:04:56.910186 3839 log.go:172] (0xc0009686e0) Data frame received for 3\nI0526 01:04:56.910203 3839 log.go:172] (0xc000544d20) (3) Data frame handling\nI0526 01:04:56.910215 3839 log.go:172] (0xc000544d20) (3) Data frame sent\nI0526 01:04:56.913722 3839 log.go:172] (0xc0009686e0) Data frame received for 3\nI0526 01:04:56.913741 3839 log.go:172] (0xc000544d20) (3) Data frame handling\nI0526 01:04:56.913755 3839 log.go:172] (0xc000544d20) (3) Data frame sent\nI0526 01:04:56.914028 3839 log.go:172] (0xc0009686e0) Data frame received for 5\nI0526 01:04:56.914054 3839 log.go:172] (0xc00013b680) (5) Data frame handling\nI0526 01:04:56.914076 3839 log.go:172] (0xc00013b680) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31655/\nI0526 01:04:56.914234 3839 log.go:172] (0xc0009686e0) Data frame received for 3\nI0526 01:04:56.914256 3839 log.go:172] (0xc000544d20) (3) Data frame handling\nI0526 01:04:56.914277 3839 log.go:172] (0xc000544d20) (3) Data frame sent\nI0526 01:04:56.918580 3839 log.go:172] (0xc0009686e0) Data frame received for 3\nI0526 01:04:56.918596 3839 log.go:172] (0xc000544d20) (3) Data frame handling\nI0526 01:04:56.918608 3839 log.go:172] (0xc000544d20) (3) Data frame sent\nI0526 01:04:56.919121 3839 log.go:172] (0xc0009686e0) Data frame received for 5\nI0526 01:04:56.919136 3839 log.go:172] (0xc00013b680) (5) Data frame handling\nI0526 01:04:56.919144 3839 log.go:172] (0xc00013b680) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31655/I0526 01:04:56.919193 3839 log.go:172] (0xc0009686e0) Data frame received for 5\nI0526 01:04:56.919209 3839 log.go:172] (0xc00013b680) (5) Data frame handling\nI0526 01:04:56.919224 3839 log.go:172] (0xc00013b680) (5) Data frame sent\n\nI0526 01:04:56.919498 3839 log.go:172] (0xc0009686e0) Data frame received for 3\nI0526 01:04:56.919522 3839 log.go:172] (0xc000544d20) (3) Data frame handling\nI0526 01:04:56.919543 3839 log.go:172] (0xc000544d20) (3) Data frame sent\nI0526 01:04:56.923859 3839 log.go:172] (0xc0009686e0) Data frame received for 3\nI0526 01:04:56.923873 3839 log.go:172] (0xc000544d20) (3) Data frame handling\nI0526 01:04:56.923883 3839 log.go:172] (0xc000544d20) (3) Data frame sent\nI0526 01:04:56.924589 3839 log.go:172] (0xc0009686e0) Data frame received for 3\nI0526 01:04:56.924621 3839 log.go:172] (0xc000544d20) (3) Data frame handling\nI0526 01:04:56.924647 3839 log.go:172] (0xc000544d20) (3) Data frame sent\nI0526 01:04:56.924671 3839 log.go:172] (0xc0009686e0) Data frame received for 5\nI0526 01:04:56.924686 3839 log.go:172] (0xc00013b680) (5) Data frame handling\nI0526 01:04:56.924710 3839 log.go:172] (0xc00013b680) (5) Data frame sent\nI0526 01:04:56.924725 3839 log.go:172] (0xc0009686e0) Data frame received for 5\nI0526 01:04:56.924737 3839 log.go:172] (0xc00013b680) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31655/\nI0526 01:04:56.924768 3839 log.go:172] (0xc00013b680) (5) Data frame sent\nI0526 01:04:56.927907 3839 log.go:172] (0xc0009686e0) Data frame received for 3\nI0526 01:04:56.927942 3839 log.go:172] (0xc000544d20) (3) Data frame handling\nI0526 01:04:56.927989 3839 log.go:172] (0xc000544d20) (3) 
Data frame sent\nI0526 01:04:56.928303 3839 log.go:172] (0xc0009686e0) Data frame received for 5\nI0526 01:04:56.928327 3839 log.go:172] (0xc00013b680) (5) Data frame handling\nI0526 01:04:56.928350 3839 log.go:172] (0xc00013b680) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0526 01:04:56.928364 3839 log.go:172] (0xc0009686e0) Data frame received for 5\nI0526 01:04:56.928417 3839 log.go:172] (0xc00013b680) (5) Data frame handling\nI0526 01:04:56.928432 3839 log.go:172] (0xc00013b680) (5) Data frame sent\n http://172.17.0.13:31655/\nI0526 01:04:56.928454 3839 log.go:172] (0xc0009686e0) Data frame received for 3\nI0526 01:04:56.928467 3839 log.go:172] (0xc000544d20) (3) Data frame handling\nI0526 01:04:56.928480 3839 log.go:172] (0xc000544d20) (3) Data frame sent\nI0526 01:04:56.932740 3839 log.go:172] (0xc0009686e0) Data frame received for 3\nI0526 01:04:56.932756 3839 log.go:172] (0xc000544d20) (3) Data frame handling\nI0526 01:04:56.932764 3839 log.go:172] (0xc000544d20) (3) Data frame sent\nI0526 01:04:56.933340 3839 log.go:172] (0xc0009686e0) Data frame received for 3\nI0526 01:04:56.933362 3839 log.go:172] (0xc000544d20) (3) Data frame handling\nI0526 01:04:56.933374 3839 log.go:172] (0xc000544d20) (3) Data frame sent\nI0526 01:04:56.933397 3839 log.go:172] (0xc0009686e0) Data frame received for 5\nI0526 01:04:56.933411 3839 log.go:172] (0xc00013b680) (5) Data frame handling\nI0526 01:04:56.933425 3839 log.go:172] (0xc00013b680) (5) Data frame sent\nI0526 01:04:56.933440 3839 log.go:172] (0xc0009686e0) Data frame received for 5\nI0526 01:04:56.933453 3839 log.go:172] (0xc00013b680) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31655/\nI0526 01:04:56.933486 3839 log.go:172] (0xc00013b680) (5) Data frame sent\nI0526 01:04:56.938096 3839 log.go:172] (0xc0009686e0) Data frame received for 5\nI0526 01:04:56.938132 3839 log.go:172] (0xc00013b680) (5) Data frame handling\nI0526 01:04:56.938159 3839 log.go:172] (0xc00013b680) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0526 01:04:56.938192 3839 log.go:172] (0xc0009686e0) Data frame received for 3\nI0526 01:04:56.938213 3839 log.go:172] (0xc000544d20) (3) Data frame handling\nI0526 01:04:56.938234 3839 log.go:172] (0xc000544d20) (3) Data frame sent\nI0526 01:04:56.938303 3839 log.go:172] (0xc0009686e0) Data frame received for 3\nI0526 01:04:56.938334 3839 log.go:172] (0xc000544d20) (3) Data frame handling\nI0526 01:04:56.938350 3839 log.go:172] (0xc000544d20) (3) Data frame sent\nI0526 01:04:56.938372 3839 log.go:172] (0xc0009686e0) Data frame received for 5\nI0526 01:04:56.938384 3839 log.go:172] (0xc00013b680) (5) Data frame handling\nI0526 01:04:56.938399 3839 log.go:172] (0xc00013b680) (5) Data frame sent\n http://172.17.0.13:31655/\nI0526 01:04:56.941913 3839 log.go:172] (0xc0009686e0) Data frame received for 3\nI0526 01:04:56.941938 3839 log.go:172] (0xc000544d20) (3) Data frame handling\nI0526 01:04:56.941960 3839 log.go:172] (0xc000544d20) (3) Data frame sent\nI0526 01:04:56.942190 3839 log.go:172] (0xc0009686e0) Data frame received for 3\nI0526 01:04:56.942225 3839 log.go:172] (0xc000544d20) (3) Data frame handling\nI0526 01:04:56.942250 3839 log.go:172] (0xc000544d20) (3) Data frame sent\nI0526 01:04:56.942275 3839 log.go:172] (0xc0009686e0) Data frame received for 5\nI0526 01:04:56.942289 3839 log.go:172] (0xc00013b680) (5) Data frame handling\nI0526 01:04:56.942315 3839 log.go:172] (0xc00013b680) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.17.0.13:31655/\nI0526 01:04:56.947072 3839 log.go:172] (0xc0009686e0) Data frame received for 3\nI0526 01:04:56.947093 3839 log.go:172] (0xc000544d20) (3) Data frame handling\nI0526 01:04:56.947110 3839 log.go:172] (0xc000544d20) (3) Data frame sent\nI0526 01:04:56.947670 3839 log.go:172] (0xc0009686e0) Data frame received for 3\nI0526 01:04:56.947691 3839 log.go:172] (0xc0009686e0) Data frame received for 5\nI0526 01:04:56.947716 3839 log.go:172] (0xc00013b680) (5) Data frame handling\nI0526 01:04:56.947730 3839 log.go:172] (0xc00013b680) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31655/\nI0526 01:04:56.947747 3839 log.go:172] (0xc000544d20) (3) Data frame handling\nI0526 01:04:56.947759 3839 log.go:172] (0xc000544d20) (3) Data frame sent\nI0526 01:04:56.954918 3839 log.go:172] (0xc0009686e0) Data frame received for 3\nI0526 01:04:56.954962 3839 log.go:172] (0xc000544d20) (3) Data frame handling\nI0526 01:04:56.954991 3839 log.go:172] (0xc000544d20) (3) Data frame sent\nI0526 01:04:56.956121 3839 log.go:172] (0xc0009686e0) Data frame received for 5\nI0526 01:04:56.956152 3839 log.go:172] (0xc00013b680) (5) Data frame handling\nI0526 01:04:56.956179 3839 log.go:172] (0xc00013b680) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31655/\nI0526 01:04:56.956211 3839 log.go:172] (0xc0009686e0) Data frame received for 3\nI0526 01:04:56.956242 3839 log.go:172] (0xc000544d20) (3) Data frame handling\nI0526 01:04:56.956268 3839 log.go:172] (0xc000544d20) (3) Data frame sent\nI0526 01:04:56.962086 3839 log.go:172] (0xc0009686e0) Data frame received for 3\nI0526 01:04:56.962123 3839 log.go:172] (0xc000544d20) (3) Data frame handling\nI0526 01:04:56.962171 3839 log.go:172] (0xc000544d20) (3) Data frame sent\nI0526 01:04:56.963156 3839 log.go:172] (0xc0009686e0) Data frame received for 5\nI0526 01:04:56.963181 3839 log.go:172] (0xc00013b680) (5) Data frame handling\nI0526 01:04:56.963191 3839 log.go:172] (0xc00013b680) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31655/\nI0526 01:04:56.963206 3839 log.go:172] (0xc0009686e0) Data frame received for 3\nI0526 01:04:56.963214 3839 log.go:172] (0xc000544d20) (3) Data frame handling\nI0526 01:04:56.963222 3839 log.go:172] (0xc000544d20) (3) Data frame sent\nI0526 01:04:56.968859 3839 log.go:172] (0xc0009686e0) Data frame received for 3\nI0526 01:04:56.968900 3839 log.go:172] (0xc000544d20) (3) Data frame handling\nI0526 01:04:56.968921 3839 log.go:172] (0xc000544d20) (3) Data frame sent\nI0526 01:04:56.969875 3839 log.go:172] (0xc0009686e0) Data frame received for 5\nI0526 01:04:56.969896 3839 log.go:172] (0xc00013b680) (5) Data frame handling\nI0526 01:04:56.970050 3839 log.go:172] (0xc0009686e0) Data frame received for 3\nI0526 01:04:56.970076 3839 log.go:172] (0xc000544d20) (3) Data frame handling\nI0526 01:04:56.971706 3839 log.go:172] (0xc0009686e0) Data frame received for 1\nI0526 01:04:56.971724 3839 log.go:172] (0xc0006c6f00) (1) Data frame handling\nI0526 01:04:56.971742 3839 log.go:172] (0xc0006c6f00) (1) Data frame sent\nI0526 01:04:56.971934 3839 log.go:172] (0xc0009686e0) (0xc0006c6f00) Stream removed, broadcasting: 1\nI0526 01:04:56.972001 3839 log.go:172] (0xc0009686e0) Go away received\nI0526 01:04:56.972368 3839 log.go:172] (0xc0009686e0) (0xc0006c6f00) Stream removed, broadcasting: 1\nI0526 01:04:56.972401 3839 log.go:172] (0xc0009686e0) (0xc000544d20) Stream removed, broadcasting: 3\nI0526 01:04:56.972419 3839 log.go:172] 
(0xc0009686e0) (0xc00013b680) Stream removed, broadcasting: 5\n" May 26 01:04:56.978: INFO: stdout: "\naffinity-nodeport-transition-jmpps\naffinity-nodeport-transition-jmpps\naffinity-nodeport-transition-jmpps\naffinity-nodeport-transition-jmpps\naffinity-nodeport-transition-jmpps\naffinity-nodeport-transition-jmpps\naffinity-nodeport-transition-jmpps\naffinity-nodeport-transition-jmpps\naffinity-nodeport-transition-jmpps\naffinity-nodeport-transition-jmpps\naffinity-nodeport-transition-jmpps\naffinity-nodeport-transition-jmpps\naffinity-nodeport-transition-jmpps\naffinity-nodeport-transition-jmpps\naffinity-nodeport-transition-jmpps\naffinity-nodeport-transition-jmpps" May 26 01:04:56.978: INFO: Received response from host: May 26 01:04:56.978: INFO: Received response from host: affinity-nodeport-transition-jmpps May 26 01:04:56.978: INFO: Received response from host: affinity-nodeport-transition-jmpps May 26 01:04:56.978: INFO: Received response from host: affinity-nodeport-transition-jmpps May 26 01:04:56.978: INFO: Received response from host: affinity-nodeport-transition-jmpps May 26 01:04:56.978: INFO: Received response from host: affinity-nodeport-transition-jmpps May 26 01:04:56.978: INFO: Received response from host: affinity-nodeport-transition-jmpps May 26 01:04:56.978: INFO: Received response from host: affinity-nodeport-transition-jmpps May 26 01:04:56.978: INFO: Received response from host: affinity-nodeport-transition-jmpps May 26 01:04:56.978: INFO: Received response from host: affinity-nodeport-transition-jmpps May 26 01:04:56.978: INFO: Received response from host: affinity-nodeport-transition-jmpps May 26 01:04:56.978: INFO: Received response from host: affinity-nodeport-transition-jmpps May 26 01:04:56.978: INFO: Received response from host: affinity-nodeport-transition-jmpps May 26 01:04:56.978: INFO: Received response from host: affinity-nodeport-transition-jmpps May 26 01:04:56.978: INFO: Received response from host: affinity-nodeport-transition-jmpps May 26 01:04:56.978: INFO: Received response from host: affinity-nodeport-transition-jmpps May 26 01:04:56.978: INFO: Received response from host: affinity-nodeport-transition-jmpps May 26 01:04:56.978: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-1128, will wait for the garbage collector to delete the pods May 26 01:04:57.182: INFO: Deleting ReplicationController affinity-nodeport-transition took: 94.64472ms May 26 01:04:57.582: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 400.246465ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 01:05:05.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1128" for this suite. 
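What this spec just verified: with session affinity off, the first curl loop above returned responses from three different backends (jmpps, n6vkb, 2mfh7); after the test switched the service to ClientIP affinity, the second loop returned all sixteen responses from affinity-nodeport-transition-jmpps. Outside the e2e framework, the same toggle can be reproduced against any NodePort service with kubectl; a minimal sketch, where the service name, namespace, node IP, and node port are placeholders rather than values from this run:

  # Turn on ClientIP session affinity for an existing service
  kubectl -n demo patch service my-svc -p '{"spec":{"sessionAffinity":"ClientIP"}}'

  # Repeated requests from one client should now land on a single backend
  for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://<node-ip>:<node-port>/; done

  # Revert to the default (no client stickiness)
  kubectl -n demo patch service my-svc -p '{"spec":{"sessionAffinity":"None"}}'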
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:21.145 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":282,"skipped":4683,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 01:05:05.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-6797 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-6797 STEP: creating replication controller externalsvc in namespace services-6797 I0526 01:05:05.557487 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-6797, replica count: 2 I0526 01:05:08.607924 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0526 01:05:11.608194 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 26 01:05:11.669: INFO: Creating new exec pod May 26 01:05:15.707: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6797 execpod4xpqm -- /bin/sh -x -c nslookup nodeport-service' May 26 01:05:15.980: INFO: stderr: "I0526 01:05:15.848943 3860 log.go:172] (0xc0006840b0) (0xc0004b2fa0) Create stream\nI0526 01:05:15.849020 3860 log.go:172] (0xc0006840b0) (0xc0004b2fa0) Stream added, broadcasting: 1\nI0526 01:05:15.852459 3860 log.go:172] (0xc0006840b0) Reply frame received for 1\nI0526 01:05:15.852500 3860 log.go:172] (0xc0006840b0) (0xc00054e3c0) Create stream\nI0526 01:05:15.852511 3860 log.go:172] (0xc0006840b0) (0xc00054e3c0) Stream added, broadcasting: 3\nI0526 01:05:15.854031 3860 log.go:172] (0xc0006840b0) Reply frame received for 3\nI0526 01:05:15.854074 3860 log.go:172] (0xc0006840b0) (0xc00054ea00) Create stream\nI0526 01:05:15.854088 3860 log.go:172] (0xc0006840b0) (0xc00054ea00) Stream added, broadcasting: 5\nI0526 01:05:15.855112 3860 log.go:172] (0xc0006840b0) Reply frame 
received for 5\nI0526 01:05:15.929986 3860 log.go:172] (0xc0006840b0) Data frame received for 5\nI0526 01:05:15.930009 3860 log.go:172] (0xc00054ea00) (5) Data frame handling\nI0526 01:05:15.930020 3860 log.go:172] (0xc00054ea00) (5) Data frame sent\n+ nslookup nodeport-service\nI0526 01:05:15.971319 3860 log.go:172] (0xc0006840b0) Data frame received for 3\nI0526 01:05:15.971354 3860 log.go:172] (0xc00054e3c0) (3) Data frame handling\nI0526 01:05:15.971386 3860 log.go:172] (0xc00054e3c0) (3) Data frame sent\nI0526 01:05:15.972345 3860 log.go:172] (0xc0006840b0) Data frame received for 3\nI0526 01:05:15.972383 3860 log.go:172] (0xc00054e3c0) (3) Data frame handling\nI0526 01:05:15.972422 3860 log.go:172] (0xc00054e3c0) (3) Data frame sent\nI0526 01:05:15.973007 3860 log.go:172] (0xc0006840b0) Data frame received for 5\nI0526 01:05:15.973040 3860 log.go:172] (0xc00054ea00) (5) Data frame handling\nI0526 01:05:15.973063 3860 log.go:172] (0xc0006840b0) Data frame received for 3\nI0526 01:05:15.973077 3860 log.go:172] (0xc00054e3c0) (3) Data frame handling\nI0526 01:05:15.975283 3860 log.go:172] (0xc0006840b0) Data frame received for 1\nI0526 01:05:15.975321 3860 log.go:172] (0xc0004b2fa0) (1) Data frame handling\nI0526 01:05:15.975340 3860 log.go:172] (0xc0004b2fa0) (1) Data frame sent\nI0526 01:05:15.975372 3860 log.go:172] (0xc0006840b0) (0xc0004b2fa0) Stream removed, broadcasting: 1\nI0526 01:05:15.975413 3860 log.go:172] (0xc0006840b0) Go away received\nI0526 01:05:15.975672 3860 log.go:172] (0xc0006840b0) (0xc0004b2fa0) Stream removed, broadcasting: 1\nI0526 01:05:15.975694 3860 log.go:172] (0xc0006840b0) (0xc00054e3c0) Stream removed, broadcasting: 3\nI0526 01:05:15.975706 3860 log.go:172] (0xc0006840b0) (0xc00054ea00) Stream removed, broadcasting: 5\n" May 26 01:05:15.980: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-6797.svc.cluster.local\tcanonical name = externalsvc.services-6797.svc.cluster.local.\nName:\texternalsvc.services-6797.svc.cluster.local\nAddress: 10.97.229.116\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6797, will wait for the garbage collector to delete the pods May 26 01:05:16.040: INFO: Deleting ReplicationController externalsvc took: 6.831773ms May 26 01:05:16.340: INFO: Terminating ReplicationController externalsvc pods took: 300.219578ms May 26 01:05:25.405: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 01:05:25.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6797" for this suite. 
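The nslookup output above is the essence of type=ExternalName: the service name resolves as a CNAME to the target FQDN (here externalsvc.services-6797.svc.cluster.local) rather than to its own ClusterIP. A hedged sketch of doing the same conversion and check by hand; the names below are placeholders, and the exact fields that must be cleared when converting away from NodePort can vary by cluster version (the allocated clusterIP and node ports generally cannot remain set on an ExternalName service):

  # Convert an existing NodePort service to ExternalName; a JSON merge patch
  # with nulls is one way to clear the allocated fields in the same update
  kubectl -n demo patch service nodeport-service --type merge -p \
    '{"spec":{"type":"ExternalName","externalName":"externalsvc.demo.svc.cluster.local","clusterIP":null,"ports":null}}'

  # Verify from inside the cluster: the name should now resolve as a CNAME
  kubectl -n demo exec <some-pod> -- nslookup nodeport-service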
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:20.138 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":288,"completed":283,"skipped":4694,"failed":0} SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 01:05:25.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 26 01:05:25.610: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 01:05:25.619: INFO: Number of nodes with available pods: 0 May 26 01:05:25.619: INFO: Node latest-worker is running more than one daemon pod May 26 01:05:26.625: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 01:05:26.629: INFO: Number of nodes with available pods: 0 May 26 01:05:26.629: INFO: Node latest-worker is running more than one daemon pod May 26 01:05:27.690: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 01:05:27.694: INFO: Number of nodes with available pods: 0 May 26 01:05:27.694: INFO: Node latest-worker is running more than one daemon pod May 26 01:05:28.786: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 01:05:28.790: INFO: Number of nodes with available pods: 0 May 26 01:05:28.790: INFO: Node latest-worker is running more than one daemon pod May 26 01:05:29.625: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 01:05:29.629: INFO: Number of nodes with available pods: 1 May 26 01:05:29.630: INFO: Node latest-worker is running more than one daemon pod May 26 01:05:30.649: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node 
May 26 01:05:30.660: INFO: Number of nodes with available pods: 2 May 26 01:05:30.660: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 26 01:05:30.751: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 01:05:30.768: INFO: Number of nodes with available pods: 1 May 26 01:05:30.768: INFO: Node latest-worker2 is running more than one daemon pod May 26 01:05:31.894: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 01:05:31.918: INFO: Number of nodes with available pods: 1 May 26 01:05:31.918: INFO: Node latest-worker2 is running more than one daemon pod May 26 01:05:32.774: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 01:05:32.778: INFO: Number of nodes with available pods: 1 May 26 01:05:32.778: INFO: Node latest-worker2 is running more than one daemon pod May 26 01:05:33.774: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 01:05:33.777: INFO: Number of nodes with available pods: 1 May 26 01:05:33.777: INFO: Node latest-worker2 is running more than one daemon pod May 26 01:05:34.785: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 01:05:34.789: INFO: Number of nodes with available pods: 1 May 26 01:05:34.789: INFO: Node latest-worker2 is running more than one daemon pod May 26 01:05:35.774: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 01:05:35.778: INFO: Number of nodes with available pods: 1 May 26 01:05:35.778: INFO: Node latest-worker2 is running more than one daemon pod May 26 01:05:36.774: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 01:05:36.803: INFO: Number of nodes with available pods: 1 May 26 01:05:36.803: INFO: Node latest-worker2 is running more than one daemon pod May 26 01:05:37.774: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 01:05:37.776: INFO: Number of nodes with available pods: 1 May 26 01:05:37.777: INFO: Node latest-worker2 is running more than one daemon pod May 26 01:05:38.772: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 01:05:38.776: INFO: Number of nodes with available pods: 1 May 26 01:05:38.776: INFO: Node latest-worker2 is running more than one daemon pod May 26 01:05:39.775: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 01:05:39.778: INFO: Number of nodes with 
available pods: 1 May 26 01:05:39.779: INFO: Node latest-worker2 is running more than one daemon pod May 26 01:05:40.775: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 01:05:40.795: INFO: Number of nodes with available pods: 1 May 26 01:05:40.795: INFO: Node latest-worker2 is running more than one daemon pod May 26 01:05:41.775: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 01:05:41.779: INFO: Number of nodes with available pods: 1 May 26 01:05:41.779: INFO: Node latest-worker2 is running more than one daemon pod May 26 01:05:42.775: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 01:05:42.779: INFO: Number of nodes with available pods: 1 May 26 01:05:42.779: INFO: Node latest-worker2 is running more than one daemon pod May 26 01:05:43.775: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 01:05:43.778: INFO: Number of nodes with available pods: 1 May 26 01:05:43.778: INFO: Node latest-worker2 is running more than one daemon pod May 26 01:05:44.773: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 01:05:44.775: INFO: Number of nodes with available pods: 1 May 26 01:05:44.775: INFO: Node latest-worker2 is running more than one daemon pod May 26 01:05:45.774: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 01:05:45.779: INFO: Number of nodes with available pods: 1 May 26 01:05:45.779: INFO: Node latest-worker2 is running more than one daemon pod May 26 01:05:46.774: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 01:05:46.778: INFO: Number of nodes with available pods: 1 May 26 01:05:46.778: INFO: Node latest-worker2 is running more than one daemon pod May 26 01:05:47.779: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 01:05:47.782: INFO: Number of nodes with available pods: 1 May 26 01:05:47.782: INFO: Node latest-worker2 is running more than one daemon pod May 26 01:05:48.775: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 01:05:48.779: INFO: Number of nodes with available pods: 2 May 26 01:05:48.779: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2244, will wait for the garbage collector to delete the pods May 26 01:05:48.842: INFO: Deleting 
DaemonSet.extensions daemon-set took: 6.910529ms May 26 01:05:49.143: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.266561ms May 26 01:05:52.148: INFO: Number of nodes with available pods: 0 May 26 01:05:52.148: INFO: Number of running nodes: 0, number of available pods: 0 May 26 01:05:52.154: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2244/daemonsets","resourceVersion":"7702376"},"items":null} May 26 01:05:52.156: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2244/pods","resourceVersion":"7702376"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 01:05:52.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2244" for this suite. • [SLOW TEST:26.691 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":288,"completed":284,"skipped":4699,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 01:05:52.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 26 01:05:52.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 26 01:05:52.832: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-26T01:05:52Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-26T01:05:52Z]] name:name1 resourceVersion:7702388 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:97f5e18f-ed52-48fa-b471-9f7fddd62586] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 26 01:06:02.839: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-26T01:06:02Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-26T01:06:02Z]] name:name2 resourceVersion:7702441 
selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:1b1d903f-d6f3-4bd7-93f4-2137eea33524] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 26 01:06:12.846: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-26T01:05:52Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-26T01:06:12Z]] name:name1 resourceVersion:7702471 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:97f5e18f-ed52-48fa-b471-9f7fddd62586] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 26 01:06:22.854: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-26T01:06:02Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-26T01:06:22Z]] name:name2 resourceVersion:7702501 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:1b1d903f-d6f3-4bd7-93f4-2137eea33524] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR May 26 01:06:32.864: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-26T01:05:52Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-26T01:06:12Z]] name:name1 resourceVersion:7702531 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:97f5e18f-ed52-48fa-b471-9f7fddd62586] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 26 01:06:42.871: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-26T01:06:02Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-26T01:06:22Z]] name:name2 resourceVersion:7702561 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:1b1d903f-d6f3-4bd7-93f4-2137eea33524] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 01:06:53.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-5354" for this suite. 
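
The ADDED, MODIFIED and DELETED events logged above ("Got : ...") are delivered over a single watch opened against the custom resource. A minimal sketch of the equivalent client call, assuming the cluster-scoped CRD this spec registers (group mygroup.example.com, version v1beta1, plural "noxus") and the suite's kubeconfig path; this is an illustration, not the test's actual code:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a dynamic client from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	gvr := schema.GroupVersionResource{
		Group:    "mygroup.example.com",
		Version:  "v1beta1",
		Resource: "noxus", // plural resource name of the CRD
	}
	// One watch covers every object of the resource; each create, update
	// and delete arrives as an ADDED/MODIFIED/DELETED event as seen above.
	w, err := dyn.Resource(gvr).Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
}

The ten-second gaps between events above come from the test pacing its create/update/delete calls, not from the watch itself; a watch delivers each event as soon as the apiserver records it.
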
• [SLOW TEST:61.222 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":288,"completed":285,"skipped":4716,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 01:06:53.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-3103 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-3103 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3103 May 26 01:06:53.541: INFO: Found 0 stateful pods, waiting for 1 May 26 01:07:03.546: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 26 01:07:03.550: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3103 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 26 01:07:03.798: INFO: stderr: "I0526 01:07:03.677594 3881 log.go:172] (0xc000acc000) (0xc000426dc0) Create stream\nI0526 01:07:03.677659 3881 log.go:172] (0xc000acc000) (0xc000426dc0) Stream added, broadcasting: 1\nI0526 01:07:03.679853 3881 log.go:172] (0xc000acc000) Reply frame received for 1\nI0526 01:07:03.679890 3881 log.go:172] (0xc000acc000) (0xc000346320) Create stream\nI0526 01:07:03.679899 3881 log.go:172] (0xc000acc000) (0xc000346320) Stream added, broadcasting: 3\nI0526 01:07:03.680786 3881 log.go:172] (0xc000acc000) Reply frame received for 3\nI0526 01:07:03.680813 3881 log.go:172] (0xc000acc000) (0xc000346d20) Create stream\nI0526 01:07:03.680825 3881 log.go:172] (0xc000acc000) (0xc000346d20) Stream added, broadcasting: 5\nI0526 01:07:03.681849 3881 log.go:172] (0xc000acc000) Reply frame received for 5\nI0526 01:07:03.758255 3881 log.go:172] (0xc000acc000) Data frame received for 
5\nI0526 01:07:03.758284 3881 log.go:172] (0xc000346d20) (5) Data frame handling\nI0526 01:07:03.758307 3881 log.go:172] (0xc000346d20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0526 01:07:03.789717 3881 log.go:172] (0xc000acc000) Data frame received for 3\nI0526 01:07:03.789753 3881 log.go:172] (0xc000346320) (3) Data frame handling\nI0526 01:07:03.789792 3881 log.go:172] (0xc000346320) (3) Data frame sent\nI0526 01:07:03.790159 3881 log.go:172] (0xc000acc000) Data frame received for 3\nI0526 01:07:03.790178 3881 log.go:172] (0xc000346320) (3) Data frame handling\nI0526 01:07:03.790541 3881 log.go:172] (0xc000acc000) Data frame received for 5\nI0526 01:07:03.790563 3881 log.go:172] (0xc000346d20) (5) Data frame handling\nI0526 01:07:03.792681 3881 log.go:172] (0xc000acc000) Data frame received for 1\nI0526 01:07:03.792722 3881 log.go:172] (0xc000426dc0) (1) Data frame handling\nI0526 01:07:03.792758 3881 log.go:172] (0xc000426dc0) (1) Data frame sent\nI0526 01:07:03.792794 3881 log.go:172] (0xc000acc000) (0xc000426dc0) Stream removed, broadcasting: 1\nI0526 01:07:03.792831 3881 log.go:172] (0xc000acc000) Go away received\nI0526 01:07:03.793441 3881 log.go:172] (0xc000acc000) (0xc000426dc0) Stream removed, broadcasting: 1\nI0526 01:07:03.793468 3881 log.go:172] (0xc000acc000) (0xc000346320) Stream removed, broadcasting: 3\nI0526 01:07:03.793486 3881 log.go:172] (0xc000acc000) (0xc000346d20) Stream removed, broadcasting: 5\n" May 26 01:07:03.798: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 26 01:07:03.798: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 26 01:07:03.802: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 26 01:07:13.806: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 26 01:07:13.806: INFO: Waiting for statefulset status.replicas updated to 0 May 26 01:07:13.844: INFO: POD NODE PHASE GRACE CONDITIONS May 26 01:07:13.844: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:06:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:06:53 +0000 UTC }] May 26 01:07:13.844: INFO: May 26 01:07:13.844: INFO: StatefulSet ss has not reached scale 3, at 1 May 26 01:07:14.848: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.973995531s May 26 01:07:15.853: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.969975303s May 26 01:07:16.858: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.964559719s May 26 01:07:17.871: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.960016277s May 26 01:07:18.876: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.947159952s May 26 01:07:19.882: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.942101533s May 26 01:07:20.887: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.936263771s May 26 01:07:21.893: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.930398157s May 26 01:07:22.899: INFO: Verifying statefulset ss doesn't 
scale past 3 for another 924.770878ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3103 May 26 01:07:23.905: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3103 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 26 01:07:24.158: INFO: stderr: "I0526 01:07:24.061351 3901 log.go:172] (0xc00091f3f0) (0xc00083fea0) Create stream\nI0526 01:07:24.061413 3901 log.go:172] (0xc00091f3f0) (0xc00083fea0) Stream added, broadcasting: 1\nI0526 01:07:24.065904 3901 log.go:172] (0xc00091f3f0) Reply frame received for 1\nI0526 01:07:24.065949 3901 log.go:172] (0xc00091f3f0) (0xc000834f00) Create stream\nI0526 01:07:24.065964 3901 log.go:172] (0xc00091f3f0) (0xc000834f00) Stream added, broadcasting: 3\nI0526 01:07:24.066973 3901 log.go:172] (0xc00091f3f0) Reply frame received for 3\nI0526 01:07:24.067006 3901 log.go:172] (0xc00091f3f0) (0xc000604d20) Create stream\nI0526 01:07:24.067021 3901 log.go:172] (0xc00091f3f0) (0xc000604d20) Stream added, broadcasting: 5\nI0526 01:07:24.067891 3901 log.go:172] (0xc00091f3f0) Reply frame received for 5\nI0526 01:07:24.148580 3901 log.go:172] (0xc00091f3f0) Data frame received for 3\nI0526 01:07:24.148610 3901 log.go:172] (0xc000834f00) (3) Data frame handling\nI0526 01:07:24.148621 3901 log.go:172] (0xc000834f00) (3) Data frame sent\nI0526 01:07:24.148642 3901 log.go:172] (0xc00091f3f0) Data frame received for 5\nI0526 01:07:24.148649 3901 log.go:172] (0xc000604d20) (5) Data frame handling\nI0526 01:07:24.148657 3901 log.go:172] (0xc000604d20) (5) Data frame sent\nI0526 01:07:24.148664 3901 log.go:172] (0xc00091f3f0) Data frame received for 5\nI0526 01:07:24.148671 3901 log.go:172] (0xc000604d20) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0526 01:07:24.148796 3901 log.go:172] (0xc00091f3f0) Data frame received for 3\nI0526 01:07:24.148823 3901 log.go:172] (0xc000834f00) (3) Data frame handling\nI0526 01:07:24.150483 3901 log.go:172] (0xc00091f3f0) Data frame received for 1\nI0526 01:07:24.150510 3901 log.go:172] (0xc00083fea0) (1) Data frame handling\nI0526 01:07:24.150528 3901 log.go:172] (0xc00083fea0) (1) Data frame sent\nI0526 01:07:24.150547 3901 log.go:172] (0xc00091f3f0) (0xc00083fea0) Stream removed, broadcasting: 1\nI0526 01:07:24.150586 3901 log.go:172] (0xc00091f3f0) Go away received\nI0526 01:07:24.151135 3901 log.go:172] (0xc00091f3f0) (0xc00083fea0) Stream removed, broadcasting: 1\nI0526 01:07:24.151172 3901 log.go:172] (0xc00091f3f0) (0xc000834f00) Stream removed, broadcasting: 3\nI0526 01:07:24.151198 3901 log.go:172] (0xc00091f3f0) (0xc000604d20) Stream removed, broadcasting: 5\n" May 26 01:07:24.159: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 26 01:07:24.159: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 26 01:07:24.159: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3103 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 26 01:07:24.394: INFO: stderr: "I0526 01:07:24.306649 3922 log.go:172] (0xc000c5f3f0) (0xc0006c74a0) Create stream\nI0526 01:07:24.306734 3922 log.go:172] (0xc000c5f3f0) (0xc0006c74a0) Stream added, broadcasting: 1\nI0526 01:07:24.312014 3922 
log.go:172] (0xc000c5f3f0) Reply frame received for 1\nI0526 01:07:24.312059 3922 log.go:172] (0xc000c5f3f0) (0xc0006a2a00) Create stream\nI0526 01:07:24.312070 3922 log.go:172] (0xc000c5f3f0) (0xc0006a2a00) Stream added, broadcasting: 3\nI0526 01:07:24.312888 3922 log.go:172] (0xc000c5f3f0) Reply frame received for 3\nI0526 01:07:24.312943 3922 log.go:172] (0xc000c5f3f0) (0xc000699c20) Create stream\nI0526 01:07:24.312970 3922 log.go:172] (0xc000c5f3f0) (0xc000699c20) Stream added, broadcasting: 5\nI0526 01:07:24.314003 3922 log.go:172] (0xc000c5f3f0) Reply frame received for 5\nI0526 01:07:24.385661 3922 log.go:172] (0xc000c5f3f0) Data frame received for 5\nI0526 01:07:24.385695 3922 log.go:172] (0xc000699c20) (5) Data frame handling\nI0526 01:07:24.385707 3922 log.go:172] (0xc000699c20) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0526 01:07:24.385762 3922 log.go:172] (0xc000c5f3f0) Data frame received for 3\nI0526 01:07:24.385796 3922 log.go:172] (0xc0006a2a00) (3) Data frame handling\nI0526 01:07:24.385853 3922 log.go:172] (0xc0006a2a00) (3) Data frame sent\nI0526 01:07:24.385898 3922 log.go:172] (0xc000c5f3f0) Data frame received for 3\nI0526 01:07:24.385916 3922 log.go:172] (0xc0006a2a00) (3) Data frame handling\nI0526 01:07:24.385941 3922 log.go:172] (0xc000c5f3f0) Data frame received for 5\nI0526 01:07:24.385971 3922 log.go:172] (0xc000699c20) (5) Data frame handling\nI0526 01:07:24.387784 3922 log.go:172] (0xc000c5f3f0) Data frame received for 1\nI0526 01:07:24.387802 3922 log.go:172] (0xc0006c74a0) (1) Data frame handling\nI0526 01:07:24.387817 3922 log.go:172] (0xc0006c74a0) (1) Data frame sent\nI0526 01:07:24.387831 3922 log.go:172] (0xc000c5f3f0) (0xc0006c74a0) Stream removed, broadcasting: 1\nI0526 01:07:24.388155 3922 log.go:172] (0xc000c5f3f0) (0xc0006c74a0) Stream removed, broadcasting: 1\nI0526 01:07:24.388174 3922 log.go:172] (0xc000c5f3f0) (0xc0006a2a00) Stream removed, broadcasting: 3\nI0526 01:07:24.388305 3922 log.go:172] (0xc000c5f3f0) (0xc000699c20) Stream removed, broadcasting: 5\nI0526 01:07:24.388370 3922 log.go:172] (0xc000c5f3f0) Go away received\n" May 26 01:07:24.394: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 26 01:07:24.394: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 26 01:07:24.394: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3103 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 26 01:07:24.605: INFO: stderr: "I0526 01:07:24.512814 3944 log.go:172] (0xc000a30790) (0xc0005f1e00) Create stream\nI0526 01:07:24.513019 3944 log.go:172] (0xc000a30790) (0xc0005f1e00) Stream added, broadcasting: 1\nI0526 01:07:24.518527 3944 log.go:172] (0xc000a30790) Reply frame received for 1\nI0526 01:07:24.518569 3944 log.go:172] (0xc000a30790) (0xc0006dcf00) Create stream\nI0526 01:07:24.518587 3944 log.go:172] (0xc000a30790) (0xc0006dcf00) Stream added, broadcasting: 3\nI0526 01:07:24.519512 3944 log.go:172] (0xc000a30790) Reply frame received for 3\nI0526 01:07:24.519563 3944 log.go:172] (0xc000a30790) (0xc0006ddea0) Create stream\nI0526 01:07:24.519578 3944 log.go:172] (0xc000a30790) (0xc0006ddea0) Stream added, broadcasting: 5\nI0526 01:07:24.520412 3944 log.go:172] (0xc000a30790) Reply frame received for 
5\nI0526 01:07:24.597654 3944 log.go:172] (0xc000a30790) Data frame received for 3\nI0526 01:07:24.597720 3944 log.go:172] (0xc0006dcf00) (3) Data frame handling\nI0526 01:07:24.597746 3944 log.go:172] (0xc0006dcf00) (3) Data frame sent\nI0526 01:07:24.597768 3944 log.go:172] (0xc000a30790) Data frame received for 3\nI0526 01:07:24.597787 3944 log.go:172] (0xc0006dcf00) (3) Data frame handling\nI0526 01:07:24.597828 3944 log.go:172] (0xc000a30790) Data frame received for 5\nI0526 01:07:24.597861 3944 log.go:172] (0xc0006ddea0) (5) Data frame handling\nI0526 01:07:24.597891 3944 log.go:172] (0xc0006ddea0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0526 01:07:24.597919 3944 log.go:172] (0xc000a30790) Data frame received for 5\nI0526 01:07:24.597942 3944 log.go:172] (0xc0006ddea0) (5) Data frame handling\nI0526 01:07:24.599346 3944 log.go:172] (0xc000a30790) Data frame received for 1\nI0526 01:07:24.599378 3944 log.go:172] (0xc0005f1e00) (1) Data frame handling\nI0526 01:07:24.599397 3944 log.go:172] (0xc0005f1e00) (1) Data frame sent\nI0526 01:07:24.599415 3944 log.go:172] (0xc000a30790) (0xc0005f1e00) Stream removed, broadcasting: 1\nI0526 01:07:24.599433 3944 log.go:172] (0xc000a30790) Go away received\nI0526 01:07:24.599972 3944 log.go:172] (0xc000a30790) (0xc0005f1e00) Stream removed, broadcasting: 1\nI0526 01:07:24.599995 3944 log.go:172] (0xc000a30790) (0xc0006dcf00) Stream removed, broadcasting: 3\nI0526 01:07:24.600007 3944 log.go:172] (0xc000a30790) (0xc0006ddea0) Stream removed, broadcasting: 5\n" May 26 01:07:24.605: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 26 01:07:24.605: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 26 01:07:24.642: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 26 01:07:34.647: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 26 01:07:34.647: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 26 01:07:34.647: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 26 01:07:34.651: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3103 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 26 01:07:34.991: INFO: stderr: "I0526 01:07:34.837568 3965 log.go:172] (0xc0007de8f0) (0xc0003981e0) Create stream\nI0526 01:07:34.837630 3965 log.go:172] (0xc0007de8f0) (0xc0003981e0) Stream added, broadcasting: 1\nI0526 01:07:34.840187 3965 log.go:172] (0xc0007de8f0) Reply frame received for 1\nI0526 01:07:34.840250 3965 log.go:172] (0xc0007de8f0) (0xc000ae2000) Create stream\nI0526 01:07:34.840277 3965 log.go:172] (0xc0007de8f0) (0xc000ae2000) Stream added, broadcasting: 3\nI0526 01:07:34.841046 3965 log.go:172] (0xc0007de8f0) Reply frame received for 3\nI0526 01:07:34.841063 3965 log.go:172] (0xc0007de8f0) (0xc000ae2140) Create stream\nI0526 01:07:34.841070 3965 log.go:172] (0xc0007de8f0) (0xc000ae2140) Stream added, broadcasting: 5\nI0526 01:07:34.842324 3965 log.go:172] (0xc0007de8f0) Reply frame received for 5\nI0526 01:07:34.985545 3965 log.go:172] (0xc0007de8f0) Data frame received for 
3\nI0526 01:07:34.985600 3965 log.go:172] (0xc000ae2000) (3) Data frame handling\nI0526 01:07:34.985615 3965 log.go:172] (0xc000ae2000) (3) Data frame sent\nI0526 01:07:34.985625 3965 log.go:172] (0xc0007de8f0) Data frame received for 3\nI0526 01:07:34.985633 3965 log.go:172] (0xc000ae2000) (3) Data frame handling\nI0526 01:07:34.985673 3965 log.go:172] (0xc0007de8f0) Data frame received for 5\nI0526 01:07:34.985685 3965 log.go:172] (0xc000ae2140) (5) Data frame handling\nI0526 01:07:34.985701 3965 log.go:172] (0xc000ae2140) (5) Data frame sent\nI0526 01:07:34.985711 3965 log.go:172] (0xc0007de8f0) Data frame received for 5\nI0526 01:07:34.985718 3965 log.go:172] (0xc000ae2140) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0526 01:07:34.987019 3965 log.go:172] (0xc0007de8f0) Data frame received for 1\nI0526 01:07:34.987034 3965 log.go:172] (0xc0003981e0) (1) Data frame handling\nI0526 01:07:34.987041 3965 log.go:172] (0xc0003981e0) (1) Data frame sent\nI0526 01:07:34.987049 3965 log.go:172] (0xc0007de8f0) (0xc0003981e0) Stream removed, broadcasting: 1\nI0526 01:07:34.987082 3965 log.go:172] (0xc0007de8f0) Go away received\nI0526 01:07:34.987352 3965 log.go:172] (0xc0007de8f0) (0xc0003981e0) Stream removed, broadcasting: 1\nI0526 01:07:34.987368 3965 log.go:172] (0xc0007de8f0) (0xc000ae2000) Stream removed, broadcasting: 3\nI0526 01:07:34.987378 3965 log.go:172] (0xc0007de8f0) (0xc000ae2140) Stream removed, broadcasting: 5\n" May 26 01:07:34.991: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 26 01:07:34.991: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 26 01:07:34.991: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3103 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 26 01:07:35.295: INFO: stderr: "I0526 01:07:35.179657 3986 log.go:172] (0xc000925130) (0xc000bde640) Create stream\nI0526 01:07:35.179767 3986 log.go:172] (0xc000925130) (0xc000bde640) Stream added, broadcasting: 1\nI0526 01:07:35.185922 3986 log.go:172] (0xc000925130) Reply frame received for 1\nI0526 01:07:35.185969 3986 log.go:172] (0xc000925130) (0xc00041e500) Create stream\nI0526 01:07:35.185981 3986 log.go:172] (0xc000925130) (0xc00041e500) Stream added, broadcasting: 3\nI0526 01:07:35.187106 3986 log.go:172] (0xc000925130) Reply frame received for 3\nI0526 01:07:35.187161 3986 log.go:172] (0xc000925130) (0xc000151540) Create stream\nI0526 01:07:35.187180 3986 log.go:172] (0xc000925130) (0xc000151540) Stream added, broadcasting: 5\nI0526 01:07:35.188136 3986 log.go:172] (0xc000925130) Reply frame received for 5\nI0526 01:07:35.245933 3986 log.go:172] (0xc000925130) Data frame received for 5\nI0526 01:07:35.245965 3986 log.go:172] (0xc000151540) (5) Data frame handling\nI0526 01:07:35.245989 3986 log.go:172] (0xc000151540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0526 01:07:35.286205 3986 log.go:172] (0xc000925130) Data frame received for 3\nI0526 01:07:35.286225 3986 log.go:172] (0xc00041e500) (3) Data frame handling\nI0526 01:07:35.286232 3986 log.go:172] (0xc00041e500) (3) Data frame sent\nI0526 01:07:35.286237 3986 log.go:172] (0xc000925130) Data frame received for 3\nI0526 01:07:35.286242 3986 log.go:172] (0xc00041e500) (3) Data frame handling\nI0526 01:07:35.286548 3986 log.go:172] (0xc000925130) 
Data frame received for 5\nI0526 01:07:35.286566 3986 log.go:172] (0xc000151540) (5) Data frame handling\nI0526 01:07:35.288025 3986 log.go:172] (0xc000925130) Data frame received for 1\nI0526 01:07:35.288035 3986 log.go:172] (0xc000bde640) (1) Data frame handling\nI0526 01:07:35.288226 3986 log.go:172] (0xc000bde640) (1) Data frame sent\nI0526 01:07:35.288242 3986 log.go:172] (0xc000925130) (0xc000bde640) Stream removed, broadcasting: 1\nI0526 01:07:35.288251 3986 log.go:172] (0xc000925130) Go away received\nI0526 01:07:35.288550 3986 log.go:172] (0xc000925130) (0xc000bde640) Stream removed, broadcasting: 1\nI0526 01:07:35.288578 3986 log.go:172] (0xc000925130) (0xc00041e500) Stream removed, broadcasting: 3\nI0526 01:07:35.288605 3986 log.go:172] (0xc000925130) (0xc000151540) Stream removed, broadcasting: 5\n" May 26 01:07:35.295: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 26 01:07:35.295: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 26 01:07:35.295: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3103 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 26 01:07:35.585: INFO: stderr: "I0526 01:07:35.472537 4007 log.go:172] (0xc000aa8000) (0xc0003d4d20) Create stream\nI0526 01:07:35.472632 4007 log.go:172] (0xc000aa8000) (0xc0003d4d20) Stream added, broadcasting: 1\nI0526 01:07:35.476784 4007 log.go:172] (0xc000aa8000) Reply frame received for 1\nI0526 01:07:35.476842 4007 log.go:172] (0xc000aa8000) (0xc000156000) Create stream\nI0526 01:07:35.476862 4007 log.go:172] (0xc000aa8000) (0xc000156000) Stream added, broadcasting: 3\nI0526 01:07:35.478173 4007 log.go:172] (0xc000aa8000) Reply frame received for 3\nI0526 01:07:35.478226 4007 log.go:172] (0xc000aa8000) (0xc000306e60) Create stream\nI0526 01:07:35.478246 4007 log.go:172] (0xc000aa8000) (0xc000306e60) Stream added, broadcasting: 5\nI0526 01:07:35.479635 4007 log.go:172] (0xc000aa8000) Reply frame received for 5\nI0526 01:07:35.541622 4007 log.go:172] (0xc000aa8000) Data frame received for 5\nI0526 01:07:35.541646 4007 log.go:172] (0xc000306e60) (5) Data frame handling\nI0526 01:07:35.541658 4007 log.go:172] (0xc000306e60) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0526 01:07:35.577877 4007 log.go:172] (0xc000aa8000) Data frame received for 3\nI0526 01:07:35.577916 4007 log.go:172] (0xc000156000) (3) Data frame handling\nI0526 01:07:35.577938 4007 log.go:172] (0xc000156000) (3) Data frame sent\nI0526 01:07:35.578241 4007 log.go:172] (0xc000aa8000) Data frame received for 5\nI0526 01:07:35.578270 4007 log.go:172] (0xc000306e60) (5) Data frame handling\nI0526 01:07:35.578298 4007 log.go:172] (0xc000aa8000) Data frame received for 3\nI0526 01:07:35.578317 4007 log.go:172] (0xc000156000) (3) Data frame handling\nI0526 01:07:35.579838 4007 log.go:172] (0xc000aa8000) Data frame received for 1\nI0526 01:07:35.579857 4007 log.go:172] (0xc0003d4d20) (1) Data frame handling\nI0526 01:07:35.579873 4007 log.go:172] (0xc0003d4d20) (1) Data frame sent\nI0526 01:07:35.580028 4007 log.go:172] (0xc000aa8000) (0xc0003d4d20) Stream removed, broadcasting: 1\nI0526 01:07:35.580189 4007 log.go:172] (0xc000aa8000) Go away received\nI0526 01:07:35.580259 4007 log.go:172] (0xc000aa8000) (0xc0003d4d20) Stream removed, broadcasting: 1\nI0526 01:07:35.580276 4007 log.go:172] 
(0xc000aa8000) (0xc000156000) Stream removed, broadcasting: 3\nI0526 01:07:35.580283 4007 log.go:172] (0xc000aa8000) (0xc000306e60) Stream removed, broadcasting: 5\n" May 26 01:07:35.585: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 26 01:07:35.585: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 26 01:07:35.585: INFO: Waiting for statefulset status.replicas updated to 0 May 26 01:07:35.588: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 26 01:07:45.596: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 26 01:07:45.596: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 26 01:07:45.596: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 26 01:07:45.628: INFO: POD NODE PHASE GRACE CONDITIONS May 26 01:07:45.628: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:06:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:06:53 +0000 UTC }] May 26 01:07:45.628: INFO: ss-1 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC }] May 26 01:07:45.628: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC }] May 26 01:07:45.628: INFO: May 26 01:07:45.628: INFO: StatefulSet ss has not reached scale 0, at 3 May 26 01:07:46.632: INFO: POD NODE PHASE GRACE CONDITIONS May 26 01:07:46.632: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:06:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:06:53 +0000 UTC }] May 26 01:07:46.632: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady 
containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC }] May 26 01:07:46.633: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC }] May 26 01:07:46.633: INFO: May 26 01:07:46.633: INFO: StatefulSet ss has not reached scale 0, at 3 May 26 01:07:47.794: INFO: POD NODE PHASE GRACE CONDITIONS May 26 01:07:47.794: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:06:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:06:53 +0000 UTC }] May 26 01:07:47.794: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC }] May 26 01:07:47.794: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC }] May 26 01:07:47.794: INFO: May 26 01:07:47.794: INFO: StatefulSet ss has not reached scale 0, at 3 May 26 01:07:48.800: INFO: POD NODE PHASE GRACE CONDITIONS May 26 01:07:48.800: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:06:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:06:53 +0000 UTC }] May 26 01:07:48.800: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC }] May 26 01:07:48.800: INFO: ss-2 latest-worker Pending 30s [{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC }] May 26 01:07:48.800: INFO: May 26 01:07:48.800: INFO: StatefulSet ss has not reached scale 0, at 3 May 26 01:07:49.810: INFO: POD NODE PHASE GRACE CONDITIONS May 26 01:07:49.810: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:06:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:06:53 +0000 UTC }] May 26 01:07:49.810: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC }] May 26 01:07:49.810: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC }] May 26 01:07:49.810: INFO: May 26 01:07:49.810: INFO: StatefulSet ss has not reached scale 0, at 3 May 26 01:07:50.815: INFO: POD NODE PHASE GRACE CONDITIONS May 26 01:07:50.815: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:06:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:06:53 +0000 UTC }] May 26 01:07:50.815: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC }] May 26 01:07:50.815: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC }] May 26 01:07:50.815: INFO: May 26 01:07:50.815: INFO: StatefulSet ss has not reached scale 0, at 3 May 26 01:07:51.819: INFO: POD NODE PHASE GRACE CONDITIONS May 26 01:07:51.819: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:06:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:06:53 +0000 UTC }] May 26 01:07:51.819: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC }] May 26 01:07:51.819: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC }] May 26 01:07:51.819: INFO: May 26 01:07:51.819: INFO: StatefulSet ss has not reached scale 0, at 3 May 26 01:07:52.823: INFO: POD NODE PHASE GRACE CONDITIONS May 26 01:07:52.823: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:06:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:06:53 +0000 UTC }] May 26 01:07:52.823: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC }] May 26 01:07:52.823: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 
01:07:13 +0000 UTC }] May 26 01:07:52.823: INFO: May 26 01:07:52.823: INFO: StatefulSet ss has not reached scale 0, at 3 May 26 01:07:53.829: INFO: POD NODE PHASE GRACE CONDITIONS May 26 01:07:53.829: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:06:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:06:53 +0000 UTC }] May 26 01:07:53.829: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC }] May 26 01:07:53.829: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC }] May 26 01:07:53.829: INFO: May 26 01:07:53.829: INFO: StatefulSet ss has not reached scale 0, at 3 May 26 01:07:54.834: INFO: POD NODE PHASE GRACE CONDITIONS May 26 01:07:54.834: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:06:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:06:53 +0000 UTC }] May 26 01:07:54.834: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC }] May 26 01:07:54.834: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 01:07:13 +0000 UTC }] May 26 01:07:54.834: INFO: May 26 01:07:54.834: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will 
run in namespace statefulset-3103 May 26 01:07:55.838: INFO: Scaling statefulset ss to 0 May 26 01:07:55.847: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 26 01:07:55.849: INFO: Deleting all statefulset in ns statefulset-3103 May 26 01:07:55.851: INFO: Scaling statefulset ss to 0 May 26 01:07:55.859: INFO: Waiting for statefulset status.replicas updated to 0 May 26 01:07:55.861: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 01:07:55.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3103" for this suite. • [SLOW TEST:62.557 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":288,"completed":286,"skipped":4727,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 01:07:55.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-b2241c2d-0a7a-4ded-bd78-0f22c236e090 in namespace container-probe-9445 May 26 01:08:00.028: INFO: Started pod busybox-b2241c2d-0a7a-4ded-bd78-0f22c236e090 in namespace container-probe-9445 STEP: checking the pod's current state and verifying that restartCount is present May 26 01:08:00.031: INFO: Initial restart count of pod busybox-b2241c2d-0a7a-4ded-bd78-0f22c236e090 is 0 May 26 01:08:48.257: INFO: Restart count of pod container-probe-9445/busybox-b2241c2d-0a7a-4ded-bd78-0f22c236e090 is now 1 (48.226050885s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 01:08:48.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9445" for this suite.
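
The restart counted above (0 to 1 after roughly 48s) is produced by an exec liveness probe: the container creates /tmp/health, later removes it, and once "cat /tmp/health" starts failing the kubelet restarts the container. A minimal pod sketch with assumed timings (the busybox command and probe thresholds here are illustrative, not the test's exact values), using the 1.18-era client-go API in which the probe's action lives in the embedded Handler field (renamed ProbeHandler in later releases):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness", Namespace: "default"},
		Spec: corev1.PodSpec{
			// RestartPolicy defaults to Always, so a failed liveness
			// probe restarts the container and bumps restartCount.
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox",
				// Create the probed file, then delete it so the probe
				// starts failing partway through the container's life.
				Command: []string{"/bin/sh", "-c",
					"touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(pod.Namespace).Create(
		context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

The same file-moving idea drives the StatefulSet burst-scaling spec earlier: kubectl exec moves index.html out of the webserver's docroot so the pod's readiness probe fails, demonstrating that burst scaling proceeds even while pods are unready.
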
• [SLOW TEST:52.381 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":287,"skipped":4793,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 01:08:48.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 26 01:08:49.871: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 26 01:08:51.883: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726052129, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726052129, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726052130, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726052129, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 01:08:54.921: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: 
Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 01:09:07.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2993" for this suite. STEP: Destroying namespace "webhook-2993-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.954 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":288,"completed":288,"skipped":4806,"failed":0} S
May 26 01:09:07.285: INFO: Running AfterSuite actions on all nodes
May 26 01:09:07.304: INFO: Running AfterSuite actions on node 1
May 26 01:09:07.304: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":288,"completed":288,"skipped":4807,"failed":0}
Ran 288 of 5095 Specs in 5439.472 seconds
SUCCESS! -- 288 Passed | 0 Failed | 0 Pending | 4807 Skipped
PASS
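
The "should honor timeout" spec above turns on two fields of the webhook registration: TimeoutSeconds, how long the apiserver waits for the webhook (defaulting to 10s in admissionregistration/v1), and FailurePolicy, where Ignore lets a request through when the call times out while Fail rejects it. A minimal sketch of registering a deliberately slow webhook with a 1s timeout and FailurePolicy Ignore; the service name and namespace mirror the log, while the path, port and CA bundle handling are assumptions for illustration, not the test's actual wiring:

package main

import (
	"context"

	admissionv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	timeout := int32(1)          // shorter than the webhook's 5s sleep
	ignore := admissionv1.Ignore // with Fail, the same timeout would reject the request
	sideEffects := admissionv1.SideEffectClassNone
	path := "/slow"              // hypothetical path served by the webhook pod
	port := int32(8443)          // hypothetical service port
	var caBundle []byte          // must hold the CA that signed the webhook's serving cert

	webhookCfg := &admissionv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-slow-webhook"},
		Webhooks: []admissionv1.ValidatingWebhook{{
			Name: "slow-webhook.example.com",
			ClientConfig: admissionv1.WebhookClientConfig{
				Service: &admissionv1.ServiceReference{
					Namespace: "webhook-2993",
					Name:      "e2e-test-webhook",
					Path:      &path,
					Port:      &port,
				},
				CABundle: caBundle,
			},
			Rules: []admissionv1.RuleWithOperations{{
				Operations: []admissionv1.OperationType{admissionv1.Create},
				Rule: admissionv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"configmaps"},
				},
			}},
			FailurePolicy:           &ignore,
			SideEffects:             &sideEffects,
			TimeoutSeconds:          &timeout, // if nil, v1 defaults this to 10s
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
	if _, err := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().Create(
		context.TODO(), webhookCfg, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
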