I0529 23:38:27.689483       7 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0529 23:38:27.689667       7 e2e.go:129] Starting e2e run "3941a5c5-b09d-49c3-a9d9-6b626e530a9f" on Ginkgo node 1
{"msg":"Test Suite starting","total":288,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1590795506 - Will randomize all specs
Will run 288 of 5095 specs

May 29 23:38:27.740: INFO: >>> kubeConfig: /root/.kube/config
May 29 23:38:27.744: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 29 23:38:27.774: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 29 23:38:27.808: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 29 23:38:27.808: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 29 23:38:27.808: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 29 23:38:27.820: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 29 23:38:27.820: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 29 23:38:27.820: INFO: e2e test version: v1.19.0-alpha.3.35+3416442e4b7eeb
May 29 23:38:27.821: INFO: kube-apiserver version: v1.18.2
May 29 23:38:27.821: INFO: >>> kubeConfig: /root/.kube/config
May 29 23:38:27.826: INFO: Cluster IP family: ipv4
SSSSSSSSS
------------------------------
[k8s.io] Probing container
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 23:38:27.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
May 29 23:38:27.903: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod test-webserver-2d9a1076-b99f-4c2f-930c-8d31486e7e63 in namespace container-probe-8798
May 29 23:38:31.991: INFO: Started pod test-webserver-2d9a1076-b99f-4c2f-930c-8d31486e7e63 in namespace container-probe-8798
STEP: checking the pod's current state and verifying that restartCount is present
May 29 23:38:31.994: INFO: Initial restart count of pod test-webserver-2d9a1076-b99f-4c2f-930c-8d31486e7e63 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 23:42:32.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8798" for this suite.
• [SLOW TEST:244.837 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":1,"skipped":9,"failed":0}
SSSSSSSSS
------------------------------
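For orientation, the pod this spec creates is a small web server probed over HTTP at /healthz. A minimal sketch of such a spec using the k8s.io/api types (the image name, port, and thresholds below are illustrative assumptions, not the framework's actual values):

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// The kubelet probes /healthz over HTTP and restarts the container only
	// if the probe fails failureThreshold times in a row.
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "test-webserver",
				Image: "example.com/test-webserver:latest", // hypothetical image
				LivenessProbe: &v1.Probe{
					Handler: v1.Handler{
						HTTPGet: &v1.HTTPGetAction{
							Path: "/healthz",
							Port: intstr.FromInt(80),
						},
					},
					InitialDelaySeconds: 15,
					TimeoutSeconds:      1,
					FailureThreshold:    3,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

Because /healthz keeps answering, the restartCount observed above stays at 0 for the whole four-minute observation window, which is exactly what the spec asserts.
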
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 23:42:32.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 29 23:42:33.771: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 29 23:42:35.888: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726392553, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726392553, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726392553, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726392553, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 29 23:42:38.963: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one, which should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one, which should be rejected by the webhook
STEP: create a namespace that bypasses the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 23:42:49.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7046" for this suite.
STEP: Destroying namespace "webhook-7046-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:16.594 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":288,"completed":2,"skipped":18,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
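The registration STEP above corresponds to a ValidatingWebhookConfiguration pointed at the e2e-test-webhook service. A minimal sketch with the admissionregistration/v1 types (webhook name, path, and the exact rule set are illustrative assumptions):

package main

import (
	"encoding/json"
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	path := "/always-deny" // hypothetical serving path on the webhook service
	failurePolicy := admissionregistrationv1.Fail
	sideEffects := admissionregistrationv1.SideEffectClassNone
	cfg := &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-pods-and-configmaps"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "deny.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-7046",
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				// CABundle would carry the PEM bundle for the server cert set up above.
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{
					admissionregistrationv1.Create,
					admissionregistrationv1.Update,
				},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"pods", "configmaps"},
				},
			}},
			FailurePolicy:           &failurePolicy,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}

The "namespace that bypasses the webhook" STEPs would be expressed with a NamespaceSelector on the same webhook, so objects in the whitelisted namespace are never sent for review.
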
[k8s.io] Variable Expansion
  should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 23:42:49.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod var-expansion-660b07fb-8f5e-455b-bc32-df8b0d126435
STEP: updating the pod
May 29 23:42:57.932: INFO: Successfully updated pod "var-expansion-660b07fb-8f5e-455b-bc32-df8b0d126435"
STEP: waiting for pod and container restart
STEP: Failing liveness probe
May 29 23:42:57.994: INFO: ExecWithOptions {Command:[/bin/sh -c rm /volume_mount/foo/test.log] Namespace:var-expansion-7527 PodName:var-expansion-660b07fb-8f5e-455b-bc32-df8b0d126435 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 29 23:42:57.994: INFO: >>> kubeConfig: /root/.kube/config
[log.go:172 SPDY exec stream setup/teardown frames elided]
May 29 23:42:58.111: INFO: Pod exec output: /
STEP: Waiting for container to restart
May 29 23:42:58.115: INFO: Container dapi-container, restarts: 0
May 29 23:43:08.120: INFO: Container dapi-container, restarts: 0
May 29 23:43:18.144: INFO: Container dapi-container, restarts: 0
May 29 23:43:28.120: INFO: Container dapi-container, restarts: 0
May 29 23:43:38.120: INFO: Container dapi-container, restarts: 1
May 29 23:43:38.120: INFO: Container has restart count: 1
STEP: Rewriting the file
May 29 23:43:38.120: INFO: ExecWithOptions {Command:[/bin/sh -c echo test-after > /volume_mount/foo/test.log] Namespace:var-expansion-7527 PodName:var-expansion-660b07fb-8f5e-455b-bc32-df8b0d126435 ContainerName:side-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 29 23:43:38.120: INFO: >>> kubeConfig: /root/.kube/config
[log.go:172 SPDY exec stream setup/teardown frames elided]
May 29 23:43:38.247: INFO: Exec stderr: ""
May 29 23:43:38.247: INFO: Pod exec output:
STEP: Waiting for container to stop restarting
May 29 23:44:06.256: INFO: Container has restart count: 2
May 29 23:45:08.256: INFO: Container restart has stabilized
STEP: test for subpath mounted with old value
May 29 23:45:08.260: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /volume_mount/foo/test.log] Namespace:var-expansion-7527 PodName:var-expansion-660b07fb-8f5e-455b-bc32-df8b0d126435 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 29 23:45:08.260: INFO: >>> kubeConfig: /root/.kube/config
[log.go:172 SPDY exec stream setup/teardown frames elided]
May 29 23:45:08.379: INFO: ExecWithOptions {Command:[/bin/sh -c test ! -f /volume_mount/newsubpath/test.log] Namespace:var-expansion-7527 PodName:var-expansion-660b07fb-8f5e-455b-bc32-df8b0d126435 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 29 23:45:08.379: INFO: >>> kubeConfig: /root/.kube/config
[log.go:172 SPDY exec stream setup/teardown frames elided]
May 29 23:45:08.470: INFO: Deleting pod "var-expansion-660b07fb-8f5e-455b-bc32-df8b0d126435" in namespace "var-expansion-7527"
May 29 23:45:08.476: INFO: Wait up to 5m0s for pod "var-expansion-660b07fb-8f5e-455b-bc32-df8b0d126435" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 23:45:42.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7527" for this suite.
• [SLOW TEST:173.289 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]","total":288,"completed":3,"skipped":83,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
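The mount under test is a subPathExpr resolved from an environment variable. A rough sketch of the shape of such a pod, not the exact manifest the framework builds (container names, image, and variable names are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 3600"},
				Env:     []v1.EnvVar{{Name: "SUBPATH", Value: "foo"}},
				VolumeMounts: []v1.VolumeMount{
					// Full view of the volume, as used by the exec checks above.
					{Name: "workdir", MountPath: "/volume_mount"},
					// Subpath view; $(SUBPATH) is expanded once, at container start.
					{Name: "workdir", MountPath: "/subpath_mount", SubPathExpr: "$(SUBPATH)"},
				},
			}},
			Volumes: []v1.Volume{{
				Name:         "workdir",
				VolumeSource: v1.VolumeSource{EmptyDir: &v1.EmptyDirVolumeSource{}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

The point the spec verifies is that the expansion is not redone on restart: after the env var is updated and the container restarts, /volume_mount/foo/test.log is still the live path and /volume_mount/newsubpath/test.log must not exist, which is what the two `test -f` / `test ! -f` execs above assert.
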
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 23:45:42.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-0fed336f-bb4b-41d8-a069-8879f23be72f
STEP: Creating a pod to test consume configMaps
May 29 23:45:42.668: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-496d3eff-0811-4313-a9c5-f60be378a258" in namespace "projected-221" to be "Succeeded or Failed"
May 29 23:45:42.692: INFO: Pod "pod-projected-configmaps-496d3eff-0811-4313-a9c5-f60be378a258": Phase="Pending", Reason="", readiness=false. Elapsed: 24.202407ms
May 29 23:45:44.696: INFO: Pod "pod-projected-configmaps-496d3eff-0811-4313-a9c5-f60be378a258": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028483858s
May 29 23:45:46.701: INFO: Pod "pod-projected-configmaps-496d3eff-0811-4313-a9c5-f60be378a258": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033669837s
STEP: Saw pod success
May 29 23:45:46.701: INFO: Pod "pod-projected-configmaps-496d3eff-0811-4313-a9c5-f60be378a258" satisfied condition "Succeeded or Failed"
May 29 23:45:46.705: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-496d3eff-0811-4313-a9c5-f60be378a258 container projected-configmap-volume-test:
STEP: delete the pod
May 29 23:45:46.753: INFO: Waiting for pod pod-projected-configmaps-496d3eff-0811-4313-a9c5-f60be378a258 to disappear
May 29 23:45:46.766: INFO: Pod pod-projected-configmaps-496d3eff-0811-4313-a9c5-f60be378a258 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 23:45:46.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-221" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":4,"skipped":123,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
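The "mappings and Item mode" wording refers to a projected configMap volume whose items remap keys to new paths with an explicit per-file mode. A minimal sketch of the volume definition (key names, paths, and modes here are illustrative assumptions):

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	itemMode := int32(0400) // per-item mode overrides the volume's default mode
	vol := v1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				Sources: []v1.VolumeProjection{{
					ConfigMap: &v1.ConfigMapProjection{
						LocalObjectReference: v1.LocalObjectReference{
							Name: "projected-configmap-test-volume-map",
						},
						Items: []v1.KeyToPath{{
							Key:  "data-1",         // hypothetical configMap key
							Path: "path/to/data-2", // remapped filename inside the volume
							Mode: &itemMode,
						}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}

The test container then cats the remapped file and checks both its content and its mode, which is why the pod only needs to run to completion ("Succeeded or Failed").
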
[sig-storage] EmptyDir volumes
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 23:45:46.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
May 29 23:45:52.915: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-6122 PodName:pod-sharedvolume-0cbc106f-81a0-417c-92e4-2f59c6bfa378 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 29 23:45:52.915: INFO: >>> kubeConfig: /root/.kube/config
[log.go:172 SPDY exec stream setup/teardown frames elided]
May 29 23:45:53.050: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 23:45:53.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6122" for this suite.
• [SLOW TEST:6.230 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":288,"completed":5,"skipped":143,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
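An emptyDir volume is created per pod and mounted into each container that asks for it, so one container's writes are visible to the others. A minimal sketch of the two-container shape this spec exercises (container names, image, and commands are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mounts := []v1.VolumeMount{{Name: "shared-data", MountPath: "/usr/share/volumeshare"}}
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-sharedvolume"},
		Spec: v1.PodSpec{
			Volumes: []v1.Volume{{
				Name:         "shared-data",
				VolumeSource: v1.VolumeSource{EmptyDir: &v1.EmptyDirVolumeSource{}},
			}},
			Containers: []v1.Container{
				{
					// Writer: drops a file into the shared volume, then stays up.
					Name:         "writer",
					Image:        "busybox",
					Command:      []string{"sh", "-c", "echo hello > /usr/share/volumeshare/shareddata.txt && sleep 3600"},
					VolumeMounts: mounts,
				},
				{
					// Reader: the test execs `cat` here to read the writer's file.
					Name:         "reader",
					Image:        "busybox",
					Command:      []string{"sh", "-c", "sleep 3600"},
					VolumeMounts: mounts,
				},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

The exec above (`cat /usr/share/volumeshare/shareddata.txt` in the main container) succeeding with empty stderr is the whole assertion: the file written by one container is readable from the other.
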
[k8s.io] Kubelet when scheduling a busybox command in a pod
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 23:45:53.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 23:45:57.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5330" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":288,"completed":6,"skipped":172,"failed":0}
SSS
------------------------------
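This spec schedules a one-shot busybox command and verifies its stdout lands in the container log. A minimal sketch of that kind of pod (name, image tag, and message are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// RestartPolicy Never lets the pod run to completion after the echo;
	// the kubelet captures stdout, retrievable with `kubectl logs busybox-logs`.
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-logs"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo 'hello from busybox'"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
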
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":4,"skipped":123,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 23:45:46.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container May 29 23:45:52.915: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-6122 PodName:pod-sharedvolume-0cbc106f-81a0-417c-92e4-2f59c6bfa378 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 29 23:45:52.915: INFO: >>> kubeConfig: /root/.kube/config I0529 23:45:52.955479 7 log.go:172] (0xc002f6edc0) (0xc002134b40) Create stream I0529 23:45:52.955523 7 log.go:172] (0xc002f6edc0) (0xc002134b40) Stream added, broadcasting: 1 I0529 23:45:52.958053 7 log.go:172] (0xc002f6edc0) Reply frame received for 1 I0529 23:45:52.958113 7 log.go:172] (0xc002f6edc0) (0xc0018388c0) Create stream I0529 23:45:52.958140 7 log.go:172] (0xc002f6edc0) (0xc0018388c0) Stream added, broadcasting: 3 I0529 23:45:52.959305 7 log.go:172] (0xc002f6edc0) Reply frame received for 3 I0529 23:45:52.959367 7 log.go:172] (0xc002f6edc0) (0xc001838960) Create stream I0529 23:45:52.959384 7 log.go:172] (0xc002f6edc0) (0xc001838960) Stream added, broadcasting: 5 I0529 23:45:52.960373 7 log.go:172] (0xc002f6edc0) Reply frame received for 5 I0529 23:45:53.048468 7 log.go:172] (0xc002f6edc0) Data frame received for 5 I0529 23:45:53.048505 7 log.go:172] (0xc001838960) (5) Data frame handling I0529 23:45:53.048525 7 log.go:172] (0xc002f6edc0) Data frame received for 3 I0529 23:45:53.048534 7 log.go:172] (0xc0018388c0) (3) Data frame handling I0529 23:45:53.048543 7 log.go:172] (0xc0018388c0) (3) Data frame sent I0529 23:45:53.048552 7 log.go:172] (0xc002f6edc0) Data frame received for 3 I0529 23:45:53.048564 7 log.go:172] (0xc0018388c0) (3) Data frame handling I0529 23:45:53.050061 7 log.go:172] (0xc002f6edc0) Data frame received for 1 I0529 23:45:53.050080 7 log.go:172] (0xc002134b40) (1) Data frame handling I0529 23:45:53.050096 7 log.go:172] (0xc002134b40) (1) Data frame sent I0529 23:45:53.050109 7 log.go:172] (0xc002f6edc0) (0xc002134b40) Stream removed, broadcasting: 1 I0529 23:45:53.050177 7 log.go:172] (0xc002f6edc0) Go away received I0529 23:45:53.050247 7 log.go:172] (0xc002f6edc0) (0xc002134b40) Stream removed, broadcasting: 1 I0529 23:45:53.050273 7 log.go:172] (0xc002f6edc0) (0xc0018388c0) Stream removed, broadcasting: 3 I0529 23:45:53.050294 7 log.go:172] (0xc002f6edc0) (0xc001838960) Stream removed, broadcasting: 5 May 29 23:45:53.050: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 23:46:01.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 29 23:46:02.110: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 29 23:46:04.164: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726392762, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726392762, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726392762, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726392762, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 29 23:46:07.290: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 23:46:07.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5558" for this suite.
STEP: Destroying namespace "webhook-5558-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.946 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":288,"completed":8,"skipped":189,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 23:46:07.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-6407
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 29 23:46:07.500: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 29 23:46:07.558: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 29 23:46:09.568: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 29 23:46:11.562: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 29 23:46:13.562: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 29 23:46:15.562: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 29 23:46:17.561: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 29 23:46:19.562: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 29 23:46:21.562: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 29 23:46:23.563: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 29 23:46:25.562: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 29 23:46:25.568: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 29 23:46:29.610: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.34:8080/dial?request=hostname&protocol=udp&host=10.244.1.38&port=8081&tries=1'] Namespace:pod-network-test-6407 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 29 23:46:29.610: INFO: >>> kubeConfig: /root/.kube/config
[log.go:172 SPDY exec stream setup/teardown frames elided]
May 29 23:46:29.838: INFO: Waiting for responses: map[]
May 29 23:46:29.876: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.34:8080/dial?request=hostname&protocol=udp&host=10.244.2.33&port=8081&tries=1'] Namespace:pod-network-test-6407 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 29 23:46:29.876: INFO: >>> kubeConfig: /root/.kube/config
[log.go:172 SPDY exec stream setup/teardown frames elided]
May 29 23:46:29.973: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 23:46:29.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6407" for this suite.
• [SLOW TEST:22.556 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":288,"completed":9,"skipped":197,"failed":0}
SSSSSSSSS
------------------------------
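The curl commands above drive a /dial endpoint on the test pod's webserver, which relays a UDP probe to each netserver pod and returns a JSON map of the hostnames that answered; an empty "Waiting for responses: map[]" means every expected hostname has already been collected. A small sketch of how such a dial URL is assembled (the IPs are the pod IPs from this run; the endpoint shape follows the log above):

package main

import (
	"fmt"
	"net/url"
)

func main() {
	q := url.Values{}
	q.Set("request", "hostname") // ask the target to report its hostname
	q.Set("protocol", "udp")     // relay the probe over UDP
	q.Set("host", "10.244.1.38") // target netserver pod IP
	q.Set("port", "8081")
	q.Set("tries", "1")
	// 10.244.2.34:8080 is the test-container-pod's webserver from the run above.
	fmt.Printf("http://10.244.2.34:8080/dial?%s\n", q.Encode())
}
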
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 23:46:29.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 29 23:46:30.930: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 29 23:46:33.017: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726392790, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726392790, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726392791, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726392790, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 29 23:46:36.081: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 29 23:46:36.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4099-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 23:46:37.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4999" for this suite.
STEP: Destroying namespace "webhook-4999-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.893 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":288,"completed":10,"skipped":206,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
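"With pruning" refers to the apiextensions.k8s.io/v1 behavior where fields not declared in the CRD's structural schema are stripped, so whatever the mutating webhook patches into the custom resource must be declared in the schema to survive. The registration itself mirrors the validating configuration earlier, just with the mutating types and a rule scoped to the CRD's group; a rough sketch (the group and resource names follow the log, everything else is an illustrative assumption):

package main

import (
	"encoding/json"
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	path := "/mutating-custom-resource" // hypothetical serving path
	sideEffects := admissionregistrationv1.SideEffectClassNone
	cfg := &admissionregistrationv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "mutate-custom-resource"},
		Webhooks: []admissionregistrationv1.MutatingWebhook{{
			Name: "mutate-crd.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-4999",
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{"webhook.example.com"},
					APIVersions: []string{"v1"},
					Resources:   []string{"e2e-test-webhook-4099-crds"},
				},
			}},
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}
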
[k8s.io] Variable Expansion
  should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 23:46:37.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 29 23:48:37.985: INFO: Deleting pod "var-expansion-72137ad1-2620-4811-b381-6bf0baea2fff" in namespace "var-expansion-1507"
May 29 23:48:37.990: INFO: Wait up to 5m0s for pod "var-expansion-72137ad1-2620-4811-b381-6bf0baea2fff" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 23:48:40.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1507" for this suite.
• [SLOW TEST:122.151 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":288,"completed":11,"skipped":226,"failed":0}
SSS
------------------------------
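This spec is the negative counterpart of the earlier subpath test: a subPathExpr that expands to an absolute path must be rejected, since subpaths have to stay relative to the volume root. A rough sketch of the invalid mount shape being fed to the kubelet (names and the particular absolute value are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	mount := v1.VolumeMount{
		Name:      "workdir",
		MountPath: "/volume_mount",
		// An absolute result like "/tmp" is invalid: subPath/subPathExpr must
		// resolve to a relative path, so the container is never started and the
		// spec only has to observe the failure and delete the pod, as above.
		SubPathExpr: "/tmp",
	}
	out, _ := json.MarshalIndent(mount, "", "  ")
	fmt.Println(string(out))
}
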
[sig-storage] Secrets
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 23:48:40.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-7f817177-3d59-48d3-b84e-bbc69618f28d
STEP: Creating a pod to test consume secrets
May 29 23:48:40.148: INFO: Waiting up to 5m0s for pod "pod-secrets-2d77e759-b4b6-446f-8732-0529633f0b12" in namespace "secrets-892" to be "Succeeded or Failed"
May 29 23:48:40.152: INFO: Pod "pod-secrets-2d77e759-b4b6-446f-8732-0529633f0b12": Phase="Pending", Reason="", readiness=false. Elapsed: 3.304032ms
May 29 23:48:42.156: INFO: Pod "pod-secrets-2d77e759-b4b6-446f-8732-0529633f0b12": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007385418s
May 29 23:48:44.160: INFO: Pod "pod-secrets-2d77e759-b4b6-446f-8732-0529633f0b12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01132561s
STEP: Saw pod success
May 29 23:48:44.160: INFO: Pod "pod-secrets-2d77e759-b4b6-446f-8732-0529633f0b12" satisfied condition "Succeeded or Failed"
May 29 23:48:44.162: INFO: Trying to get logs from node latest-worker pod pod-secrets-2d77e759-b4b6-446f-8732-0529633f0b12 container secret-volume-test:
STEP: delete the pod
May 29 23:48:44.203: INFO: Waiting for pod pod-secrets-2d77e759-b4b6-446f-8732-0529633f0b12 to disappear
May 29 23:48:44.248: INFO: Pod pod-secrets-2d77e759-b4b6-446f-8732-0529633f0b12 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 23:48:44.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-892" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":12,"skipped":229,"failed":0}
SSSS
------------------------------
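The secret analogue of the projected-configMap case earlier: items remap secret keys to new filenames with an explicit per-file mode. A minimal sketch of the volume source (key, path, and mode are illustrative assumptions):

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	itemMode := int32(0400) // per-item mode for the remapped file
	vol := v1.Volume{
		Name: "secret-volume",
		VolumeSource: v1.VolumeSource{
			Secret: &v1.SecretVolumeSource{
				SecretName: "secret-test-map",
				Items: []v1.KeyToPath{{
					Key:  "data-1",          // hypothetical secret key
					Path: "new-path-data-1", // remapped filename inside the volume
					Mode: &itemMode,
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
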
[sig-storage] EmptyDir volumes
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 23:48:44.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 29 23:48:44.397: INFO: Waiting up to 5m0s for pod "pod-d9310c02-36c3-4fd6-8f96-8a9452680a46" in namespace "emptydir-4214" to be "Succeeded or Failed"
May 29 23:48:44.405: INFO: Pod "pod-d9310c02-36c3-4fd6-8f96-8a9452680a46": Phase="Pending", Reason="", readiness=false. Elapsed: 8.118046ms
May 29 23:48:46.410: INFO: Pod "pod-d9310c02-36c3-4fd6-8f96-8a9452680a46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012489529s
May 29 23:48:48.414: INFO: Pod "pod-d9310c02-36c3-4fd6-8f96-8a9452680a46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017023696s
STEP: Saw pod success
May 29 23:48:48.414: INFO: Pod "pod-d9310c02-36c3-4fd6-8f96-8a9452680a46" satisfied condition "Succeeded or Failed"
May 29 23:48:48.417: INFO: Trying to get logs from node latest-worker pod pod-d9310c02-36c3-4fd6-8f96-8a9452680a46 container test-container:
STEP: delete the pod
May 29 23:48:48.452: INFO: Waiting for pod pod-d9310c02-36c3-4fd6-8f96-8a9452680a46 to disappear
May 29 23:48:48.471: INFO: Pod pod-d9310c02-36c3-4fd6-8f96-8a9452680a46 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 23:48:48.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4214" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":13,"skipped":233,"failed":0}
SSS
------------------------------
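"(non-root,0666,tmpfs)" decodes as: run the container as a non-root user, write a file with mode 0666, into an emptyDir backed by memory (tmpfs). A minimal sketch of that combination (the uid, image, and command are illustrative assumptions):

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	runAsUser := int64(1001) // any non-root uid
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// umask 0 so the created file gets mode 0666.
				Command:         []string{"sh", "-c", "umask 0; echo data > /test-volume/f; ls -l /test-volume/f"},
				SecurityContext: &v1.SecurityContext{RunAsUser: &runAsUser},
				VolumeMounts:    []v1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []v1.Volume{{
				Name: "test-volume",
				VolumeSource: v1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs.
					EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumMemory},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
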
[sig-storage] EmptyDir wrapper volumes
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 23:48:48.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 23:48:52.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-6065" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":288,"completed":14,"skipped":236,"failed":0}
SS
------------------------------
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":12,"skipped":229,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 23:48:44.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs May 29 23:48:44.397: INFO: Waiting up to 5m0s for pod "pod-d9310c02-36c3-4fd6-8f96-8a9452680a46" in namespace "emptydir-4214" to be "Succeeded or Failed" May 29 23:48:44.405: INFO: Pod "pod-d9310c02-36c3-4fd6-8f96-8a9452680a46": Phase="Pending", Reason="", readiness=false. Elapsed: 8.118046ms May 29 23:48:46.410: INFO: Pod "pod-d9310c02-36c3-4fd6-8f96-8a9452680a46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012489529s May 29 23:48:48.414: INFO: Pod "pod-d9310c02-36c3-4fd6-8f96-8a9452680a46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017023696s STEP: Saw pod success May 29 23:48:48.414: INFO: Pod "pod-d9310c02-36c3-4fd6-8f96-8a9452680a46" satisfied condition "Succeeded or Failed" May 29 23:48:48.417: INFO: Trying to get logs from node latest-worker pod pod-d9310c02-36c3-4fd6-8f96-8a9452680a46 container test-container: STEP: delete the pod May 29 23:48:48.452: INFO: Waiting for pod pod-d9310c02-36c3-4fd6-8f96-8a9452680a46 to disappear May 29 23:48:48.471: INFO: Pod pod-d9310c02-36c3-4fd6-8f96-8a9452680a46 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 23:48:48.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4214" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":13,"skipped":233,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 23:48:48.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 23:48:52.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-6065" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":288,"completed":14,"skipped":236,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 23:48:52.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 29 23:48:52.837: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b5f03781-2235-4caa-8b85-47668cc8ed3d" in namespace "projected-7865" to be "Succeeded or Failed" May 29 23:48:52.872: INFO: Pod "downwardapi-volume-b5f03781-2235-4caa-8b85-47668cc8ed3d": Phase="Pending", Reason="", readiness=false. Elapsed: 35.316179ms May 29 23:48:54.876: INFO: Pod "downwardapi-volume-b5f03781-2235-4caa-8b85-47668cc8ed3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039631689s May 29 23:48:56.880: INFO: Pod "downwardapi-volume-b5f03781-2235-4caa-8b85-47668cc8ed3d": Phase="Succeeded", Reason="", readiness=false. 
[sig-api-machinery] Watchers
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 23:49:01.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Elapsed: 4.012883928s STEP: Saw pod success May 29 23:49:01.121: INFO: Pod "pod-projected-secrets-b991ddd6-53d8-4326-84a5-f11a0d3b8c7b" satisfied condition "Succeeded or Failed" May 29 23:49:01.124: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-b991ddd6-53d8-4326-84a5-f11a0d3b8c7b container projected-secret-volume-test: STEP: delete the pod May 29 23:49:01.311: INFO: Waiting for pod pod-projected-secrets-b991ddd6-53d8-4326-84a5-f11a0d3b8c7b to disappear May 29 23:49:01.390: INFO: Pod pod-projected-secrets-b991ddd6-53d8-4326-84a5-f11a0d3b8c7b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 23:49:01.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1623" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":16,"skipped":242,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 23:49:01.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 29 23:49:01.562: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-503 /api/v1/namespaces/watch-503/configmaps/e2e-watch-test-label-changed 4ab5e518-7094-4242-9e76-f88f9f19ecc1 8728245 0 2020-05-29 23:49:01 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-29 23:49:01 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 29 23:49:01.584: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-503 /api/v1/namespaces/watch-503/configmaps/e2e-watch-test-label-changed 4ab5e518-7094-4242-9e76-f88f9f19ecc1 8728246 0 2020-05-29 23:49:01 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-29 23:49:01 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 29 23:49:01.584: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-503 /api/v1/namespaces/watch-503/configmaps/e2e-watch-test-label-changed 4ab5e518-7094-4242-9e76-f88f9f19ecc1 8728247 0 2020-05-29 23:49:01 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] 
[] [] [{e2e.test Update v1 2020-05-29 23:49:01 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 29 23:49:11.616: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-503 /api/v1/namespaces/watch-503/configmaps/e2e-watch-test-label-changed 4ab5e518-7094-4242-9e76-f88f9f19ecc1 8728296 0 2020-05-29 23:49:01 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-29 23:49:11 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 29 23:49:11.616: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-503 /api/v1/namespaces/watch-503/configmaps/e2e-watch-test-label-changed 4ab5e518-7094-4242-9e76-f88f9f19ecc1 8728297 0 2020-05-29 23:49:01 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-29 23:49:11 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} May 29 23:49:11.616: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-503 /api/v1/namespaces/watch-503/configmaps/e2e-watch-test-label-changed 4ab5e518-7094-4242-9e76-f88f9f19ecc1 8728298 0 2020-05-29 23:49:01 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-29 23:49:11 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 23:49:11.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-503" for this suite. 
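The object under watch in this test is a plain ConfigMap whose label is toggled off and back onto the selector; dropping the label produces the DELETED event above, and restoring it produces the later ADDED event. A minimal sketch of the watched object, with name, namespace, label, and data key taken from the log entries above (everything else is illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-label-changed
  namespace: watch-503
  labels:
    # The watch is scoped to this selector; removing or changing the label
    # makes the object "disappear" from the watch, surfacing as DELETED.
    watch-this-configmap: label-changed-and-restored
data:
  mutation: "1"   # bumped on each modification step in the log

An equivalent watch can be reproduced with kubectl get configmaps -l watch-this-configmap=label-changed-and-restored -n watch-503 --watch.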
• [SLOW TEST:10.216 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":288,"completed":17,"skipped":247,"failed":0} [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 23:49:11.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller May 29 23:49:11.736: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6335' May 29 23:49:15.233: INFO: stderr: "" May 29 23:49:15.233: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 29 23:49:15.233: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6335' May 29 23:49:15.380: INFO: stderr: "" May 29 23:49:15.381: INFO: stdout: "update-demo-nautilus-4l6bm update-demo-nautilus-s679b " May 29 23:49:15.381: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4l6bm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6335' May 29 23:49:15.494: INFO: stderr: "" May 29 23:49:15.494: INFO: stdout: "" May 29 23:49:15.494: INFO: update-demo-nautilus-4l6bm is created but not running May 29 23:49:20.494: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6335' May 29 23:49:20.658: INFO: stderr: "" May 29 23:49:20.658: INFO: stdout: "update-demo-nautilus-4l6bm update-demo-nautilus-s679b " May 29 23:49:20.658: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4l6bm -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6335' May 29 23:49:20.767: INFO: stderr: "" May 29 23:49:20.767: INFO: stdout: "true" May 29 23:49:20.767: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4l6bm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6335' May 29 23:49:20.866: INFO: stderr: "" May 29 23:49:20.866: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 29 23:49:20.866: INFO: validating pod update-demo-nautilus-4l6bm May 29 23:49:20.883: INFO: got data: { "image": "nautilus.jpg" } May 29 23:49:20.883: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 29 23:49:20.883: INFO: update-demo-nautilus-4l6bm is verified up and running May 29 23:49:20.883: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s679b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6335' May 29 23:49:20.983: INFO: stderr: "" May 29 23:49:20.983: INFO: stdout: "true" May 29 23:49:20.983: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s679b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6335' May 29 23:49:21.101: INFO: stderr: "" May 29 23:49:21.101: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 29 23:49:21.101: INFO: validating pod update-demo-nautilus-s679b May 29 23:49:21.116: INFO: got data: { "image": "nautilus.jpg" } May 29 23:49:21.116: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 29 23:49:21.116: INFO: update-demo-nautilus-s679b is verified up and running STEP: using delete to clean up resources May 29 23:49:21.116: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6335' May 29 23:49:21.226: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 29 23:49:21.226: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 29 23:49:21.226: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6335' May 29 23:49:21.327: INFO: stderr: "No resources found in kubectl-6335 namespace.\n" May 29 23:49:21.327: INFO: stdout: "" May 29 23:49:21.327: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6335 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 29 23:49:21.438: INFO: stderr: "" May 29 23:49:21.438: INFO: stdout: "update-demo-nautilus-4l6bm\nupdate-demo-nautilus-s679b\n" May 29 23:49:21.939: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6335' May 29 23:49:22.298: INFO: stderr: "No resources found in kubectl-6335 namespace.\n" May 29 23:49:22.298: INFO: stdout: "" May 29 23:49:22.298: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6335 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 29 23:49:22.433: INFO: stderr: "" May 29 23:49:22.433: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 23:49:22.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6335" for this suite. 
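The manifest piped into kubectl create -f - at the start of this test is the classic update-demo replication controller. A sketch consistent with the log (image, container name, and the name=update-demo selector all appear above; the replica count of 2 is inferred from the two nautilus pods being validated):

apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2              # inferred: two update-demo-nautilus-* pods are checked
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo  # the go-templates above key on this container name
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0

The polling loop above then reads .status.containerStatuses per pod until state.running is present; teardown uses delete --grace-period=0 --force, hence the "does not wait for confirmation" warning.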
• [SLOW TEST:10.785 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":288,"completed":18,"skipped":247,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 23:49:22.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args May 29 23:49:22.566: INFO: Waiting up to 5m0s for pod "var-expansion-068a9e08-4efc-43d1-8e67-70045bc36a2d" in namespace "var-expansion-1689" to be "Succeeded or Failed" May 29 23:49:22.570: INFO: Pod "var-expansion-068a9e08-4efc-43d1-8e67-70045bc36a2d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.533389ms May 29 23:49:24.674: INFO: Pod "var-expansion-068a9e08-4efc-43d1-8e67-70045bc36a2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108012072s May 29 23:49:26.679: INFO: Pod "var-expansion-068a9e08-4efc-43d1-8e67-70045bc36a2d": Phase="Running", Reason="", readiness=true. Elapsed: 4.112909135s May 29 23:49:28.684: INFO: Pod "var-expansion-068a9e08-4efc-43d1-8e67-70045bc36a2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.117743426s STEP: Saw pod success May 29 23:49:28.684: INFO: Pod "var-expansion-068a9e08-4efc-43d1-8e67-70045bc36a2d" satisfied condition "Succeeded or Failed" May 29 23:49:28.688: INFO: Trying to get logs from node latest-worker pod var-expansion-068a9e08-4efc-43d1-8e67-70045bc36a2d container dapi-container: STEP: delete the pod May 29 23:49:28.722: INFO: Waiting for pod var-expansion-068a9e08-4efc-43d1-8e67-70045bc36a2d to disappear May 29 23:49:28.729: INFO: Pod var-expansion-068a9e08-4efc-43d1-8e67-70045bc36a2d no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 23:49:28.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1689" for this suite. 
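What this test exercises is the $(VAR) substitution the kubelet performs on a container's command and args using that container's own environment. A minimal sketch of such a pod (the container name dapi-container comes from the log; the image, variable, and value are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29      # assumed image
    command: ["/bin/echo"]
    # $(MESSAGE) is resolved from env before the container starts;
    # a reference to an undefined variable is passed through literally.
    args: ["$(MESSAGE)"]
    env:
    - name: MESSAGE
      value: "substituted into args"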
• [SLOW TEST:6.296 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":288,"completed":19,"skipped":289,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 23:49:28.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 29 23:49:29.572: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 29 23:49:31.583: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726392969, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726392969, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726392969, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726392969, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 29 23:49:34.621: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy 
mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 23:49:34.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7540" for this suite. STEP: Destroying namespace "webhook-7540-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.419 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":288,"completed":20,"skipped":291,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 23:49:35.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
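A "simple DaemonSet" here means one pod template with a matching selector and no extra tolerations, so pods land on every worker but not on the tainted control-plane node, which is exactly what the skipped-node lines below report. A sketch (labels and image are assumptions; the name and namespace come from the log):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-2618
spec:
  selector:
    matchLabels:
      app: daemon-set        # illustrative label
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      # No toleration for node-role.kubernetes.io/master:NoSchedule is set,
      # so the control-plane node is skipped during the rollout check below.
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.2   # assumed image, not the suite's test image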
May 29 23:49:35.674: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 23:49:35.723: INFO: Number of nodes with available pods: 0 May 29 23:49:35.723: INFO: Node latest-worker is running more than one daemon pod May 29 23:49:36.728: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 23:49:36.732: INFO: Number of nodes with available pods: 0 May 29 23:49:36.732: INFO: Node latest-worker is running more than one daemon pod May 29 23:49:37.813: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 23:49:37.817: INFO: Number of nodes with available pods: 0 May 29 23:49:37.817: INFO: Node latest-worker is running more than one daemon pod May 29 23:49:38.909: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 23:49:39.005: INFO: Number of nodes with available pods: 0 May 29 23:49:39.005: INFO: Node latest-worker is running more than one daemon pod May 29 23:49:39.728: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 23:49:39.732: INFO: Number of nodes with available pods: 1 May 29 23:49:39.732: INFO: Node latest-worker is running more than one daemon pod May 29 23:49:40.730: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 23:49:40.734: INFO: Number of nodes with available pods: 2 May 29 23:49:40.734: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
May 29 23:49:40.819: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 23:49:40.851: INFO: Number of nodes with available pods: 1 May 29 23:49:40.851: INFO: Node latest-worker2 is running more than one daemon pod May 29 23:49:41.856: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 23:49:41.860: INFO: Number of nodes with available pods: 1 May 29 23:49:41.860: INFO: Node latest-worker2 is running more than one daemon pod May 29 23:49:42.856: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 23:49:42.860: INFO: Number of nodes with available pods: 1 May 29 23:49:42.860: INFO: Node latest-worker2 is running more than one daemon pod May 29 23:49:43.856: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 23:49:43.859: INFO: Number of nodes with available pods: 1 May 29 23:49:43.859: INFO: Node latest-worker2 is running more than one daemon pod May 29 23:49:44.856: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 23:49:44.860: INFO: Number of nodes with available pods: 2 May 29 23:49:44.860: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2618, will wait for the garbage collector to delete the pods May 29 23:49:44.925: INFO: Deleting DaemonSet.extensions daemon-set took: 7.412349ms May 29 23:49:45.326: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.349778ms May 29 23:49:55.330: INFO: Number of nodes with available pods: 0 May 29 23:49:55.330: INFO: Number of running nodes: 0, number of available pods: 0 May 29 23:49:55.344: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2618/daemonsets","resourceVersion":"8728633"},"items":null} May 29 23:49:55.348: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2618/pods","resourceVersion":"8728633"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 23:49:55.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2618" for this suite. 
• [SLOW TEST:20.210 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":288,"completed":21,"skipped":299,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 23:49:55.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 29 23:49:59.992: INFO: Successfully updated pod "pod-update-ada62bee-73fa-4f8f-a55e-1c2649927cdb" STEP: verifying the updated pod is in kubernetes May 29 23:50:00.041: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 23:50:00.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7143" for this suite. 
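The "updating the pod" step changes mutable pod metadata in place; on a running pod only a few fields (notably labels and annotations, plus container image) accept changes, while the rest of spec is immutable. A hypothetical strategic-merge patch body of the kind such a step could apply (the label key/value are invented for illustration, not the suite's actual patch):

# patch.yaml -- strategic-merge patch against the live pod
metadata:
  labels:
    # Hypothetical label change; most spec fields on a live pod
    # are immutable and such a patch would be rejected.
    time: updated

Applied with, for example, kubectl patch pod pod-update-ada62bee-73fa-4f8f-a55e-1c2649927cdb -n pods-7143 -p '{"metadata":{"labels":{"time":"updated"}}}'.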
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":288,"completed":22,"skipped":321,"failed":0} SSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 23:50:00.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-d96384d6-6831-4786-b0a0-d3447cb95fb6 in namespace container-probe-3723 May 29 23:50:04.278: INFO: Started pod busybox-d96384d6-6831-4786-b0a0-d3447cb95fb6 in namespace container-probe-3723 STEP: checking the pod's current state and verifying that restartCount is present May 29 23:50:04.282: INFO: Initial restart count of pod busybox-d96384d6-6831-4786-b0a0-d3447cb95fb6 is 0 May 29 23:50:54.488: INFO: Restart count of pod container-probe-3723/busybox-d96384d6-6831-4786-b0a0-d3447cb95fb6 is now 1 (50.205618484s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 23:50:54.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3723" for this suite. 
• [SLOW TEST:54.428 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":23,"skipped":324,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 23:50:54.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 29 23:50:54.647: INFO: Waiting up to 5m0s for pod "pod-c4033f56-1411-4c0c-a40c-1905eb727cc6" in namespace "emptydir-9220" to be "Succeeded or Failed" May 29 23:50:54.687: INFO: Pod "pod-c4033f56-1411-4c0c-a40c-1905eb727cc6": Phase="Pending", Reason="", readiness=false. Elapsed: 39.740448ms May 29 23:50:56.691: INFO: Pod "pod-c4033f56-1411-4c0c-a40c-1905eb727cc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044245647s May 29 23:50:58.696: INFO: Pod "pod-c4033f56-1411-4c0c-a40c-1905eb727cc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048988857s STEP: Saw pod success May 29 23:50:58.696: INFO: Pod "pod-c4033f56-1411-4c0c-a40c-1905eb727cc6" satisfied condition "Succeeded or Failed" May 29 23:50:58.699: INFO: Trying to get logs from node latest-worker pod pod-c4033f56-1411-4c0c-a40c-1905eb727cc6 container test-container: STEP: delete the pod May 29 23:50:58.768: INFO: Waiting for pod pod-c4033f56-1411-4c0c-a40c-1905eb727cc6 to disappear May 29 23:50:58.800: INFO: Pod pod-c4033f56-1411-4c0c-a40c-1905eb727cc6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 23:50:58.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9220" for this suite. 
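In this family of tests the tuple (root,0777,tmpfs) means: run as root, expect mode 0777 on the mounted path, and back the emptyDir with memory. A sketch of such a pod (the container name test-container matches the log; the mount path and check command are illustrative stand-ins for the suite's own test image):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    # Report the mode and backing filesystem; a tmpfs-backed emptyDir
    # shows up as type "tmpfs" in /proc/mounts.
    command: ["sh", "-c", "stat -c '%a' /test-volume && grep /test-volume /proc/mounts"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs instead of node disk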
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":24,"skipped":346,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 23:50:58.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC May 29 23:50:58.863: INFO: namespace kubectl-2360 May 29 23:50:58.864: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2360' May 29 23:50:59.125: INFO: stderr: "" May 29 23:50:59.125: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 29 23:51:00.130: INFO: Selector matched 1 pods for map[app:agnhost] May 29 23:51:00.130: INFO: Found 0 / 1 May 29 23:51:01.131: INFO: Selector matched 1 pods for map[app:agnhost] May 29 23:51:01.131: INFO: Found 0 / 1 May 29 23:51:02.131: INFO: Selector matched 1 pods for map[app:agnhost] May 29 23:51:02.131: INFO: Found 1 / 1 May 29 23:51:02.131: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 29 23:51:02.134: INFO: Selector matched 1 pods for map[app:agnhost] May 29 23:51:02.134: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 29 23:51:02.134: INFO: wait on agnhost-master startup in kubectl-2360 May 29 23:51:02.135: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs agnhost-master-vh2dx agnhost-master --namespace=kubectl-2360' May 29 23:51:02.271: INFO: stderr: "" May 29 23:51:02.271: INFO: stdout: "Paused\n" STEP: exposing RC May 29 23:51:02.271: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-2360' May 29 23:51:02.474: INFO: stderr: "" May 29 23:51:02.474: INFO: stdout: "service/rm2 exposed\n" May 29 23:51:02.506: INFO: Service rm2 in namespace kubectl-2360 found. STEP: exposing service May 29 23:51:04.514: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-2360' May 29 23:51:04.669: INFO: stderr: "" May 29 23:51:04.669: INFO: stdout: "service/rm3 exposed\n" May 29 23:51:04.679: INFO: Service rm3 in namespace kubectl-2360 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 23:51:06.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2360" for this suite. 
• [SLOW TEST:7.882 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1224 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":288,"completed":25,"skipped":375,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 23:51:06.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 29 23:51:06.752: INFO: PodSpec: initContainers in spec.initContainers May 29 23:52:00.351: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-f001b86e-cdb1-46bc-adb3-cb52699e55f9", GenerateName:"", Namespace:"init-container-3261", SelfLink:"/api/v1/namespaces/init-container-3261/pods/pod-init-f001b86e-cdb1-46bc-adb3-cb52699e55f9", UID:"18c0e6c5-e4eb-40d8-8c2f-2b8b60dddcbb", ResourceVersion:"8729183", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63726393066, loc:(*time.Location)(0x7c342a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"752080879"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002341760), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0023417c0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0023417e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002341800)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-svqwk", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001d2ee80), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), 
Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-svqwk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-svqwk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-svqwk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00082a428), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000a14850), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00082a820)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00082a870)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00082a878), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00082a87c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726393066, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726393066, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726393066, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726393066, loc:(*time.Location)(0x7c342a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.12", PodIP:"10.244.2.41", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.41"}}, StartTime:(*v1.Time)(0xc002341820), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000a14930)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000a14a10)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", 
ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://6717ad9f2b41bc2b3e3b56f7ce8322a03a291c5461bd8a64a980c37a71a8733d", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0023418a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002341860), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc00082a92f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 23:52:00.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3261" for this suite. • [SLOW TEST:53.733 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":288,"completed":26,"skipped":403,"failed":0} SS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 23:52:00.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-eccfad1b-b786-438d-bfe0-009adca6de65 STEP: Creating a pod to test consume secrets May 29 23:52:00.559: INFO: Waiting up to 5m0s for pod "pod-secrets-782e68b3-bc40-4415-a0f7-0d3bccc3c5b4" in namespace "secrets-2803" to be "Succeeded or Failed" May 29 23:52:00.612: INFO: Pod "pod-secrets-782e68b3-bc40-4415-a0f7-0d3bccc3c5b4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 52.955201ms May 29 23:52:02.667: INFO: Pod "pod-secrets-782e68b3-bc40-4415-a0f7-0d3bccc3c5b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108503598s May 29 23:52:04.671: INFO: Pod "pod-secrets-782e68b3-bc40-4415-a0f7-0d3bccc3c5b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.112773848s STEP: Saw pod success May 29 23:52:04.671: INFO: Pod "pod-secrets-782e68b3-bc40-4415-a0f7-0d3bccc3c5b4" satisfied condition "Succeeded or Failed" May 29 23:52:04.674: INFO: Trying to get logs from node latest-worker pod pod-secrets-782e68b3-bc40-4415-a0f7-0d3bccc3c5b4 container secret-volume-test: STEP: delete the pod May 29 23:52:04.765: INFO: Waiting for pod pod-secrets-782e68b3-bc40-4415-a0f7-0d3bccc3c5b4 to disappear May 29 23:52:04.768: INFO: Pod pod-secrets-782e68b3-bc40-4415-a0f7-0d3bccc3c5b4 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 23:52:04.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2803" for this suite. STEP: Destroying namespace "secret-namespace-6555" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":288,"completed":27,"skipped":405,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 23:52:04.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-01042d01-e89c-4a33-ab66-d74a84a751ee STEP: Creating a pod to test consume secrets May 29 23:52:04.964: INFO: Waiting up to 5m0s for pod "pod-secrets-c838f076-5918-4696-8a8e-292695f3e3ad" in namespace "secrets-2332" to be "Succeeded or Failed" May 29 23:52:04.981: INFO: Pod "pod-secrets-c838f076-5918-4696-8a8e-292695f3e3ad": Phase="Pending", Reason="", readiness=false. Elapsed: 17.081802ms May 29 23:52:07.023: INFO: Pod "pod-secrets-c838f076-5918-4696-8a8e-292695f3e3ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059337509s May 29 23:52:09.027: INFO: Pod "pod-secrets-c838f076-5918-4696-8a8e-292695f3e3ad": Phase="Running", Reason="", readiness=true. Elapsed: 4.063776255s May 29 23:52:11.033: INFO: Pod "pod-secrets-c838f076-5918-4696-8a8e-292695f3e3ad": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.068805927s STEP: Saw pod success May 29 23:52:11.033: INFO: Pod "pod-secrets-c838f076-5918-4696-8a8e-292695f3e3ad" satisfied condition "Succeeded or Failed" May 29 23:52:11.036: INFO: Trying to get logs from node latest-worker pod pod-secrets-c838f076-5918-4696-8a8e-292695f3e3ad container secret-volume-test: STEP: delete the pod May 29 23:52:11.074: INFO: Waiting for pod pod-secrets-c838f076-5918-4696-8a8e-292695f3e3ad to disappear May 29 23:52:11.096: INFO: Pod pod-secrets-c838f076-5918-4696-8a8e-292695f3e3ad no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 23:52:11.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2332" for this suite. • [SLOW TEST:6.348 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":28,"skipped":420,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 23:52:11.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-0631f564-fb04-47fe-9bd6-e0c5e8033711 STEP: Creating a pod to test consume configMaps May 29 23:52:11.272: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-218fe181-c35d-41df-8658-ebb53a62efeb" in namespace "projected-3444" to be "Succeeded or Failed" May 29 23:52:11.275: INFO: Pod "pod-projected-configmaps-218fe181-c35d-41df-8658-ebb53a62efeb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.897522ms May 29 23:52:13.408: INFO: Pod "pod-projected-configmaps-218fe181-c35d-41df-8658-ebb53a62efeb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135081587s May 29 23:52:15.412: INFO: Pod "pod-projected-configmaps-218fe181-c35d-41df-8658-ebb53a62efeb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.139384841s STEP: Saw pod success May 29 23:52:15.412: INFO: Pod "pod-projected-configmaps-218fe181-c35d-41df-8658-ebb53a62efeb" satisfied condition "Succeeded or Failed" May 29 23:52:15.414: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-218fe181-c35d-41df-8658-ebb53a62efeb container projected-configmap-volume-test: STEP: delete the pod May 29 23:52:15.676: INFO: Waiting for pod pod-projected-configmaps-218fe181-c35d-41df-8658-ebb53a62efeb to disappear May 29 23:52:15.680: INFO: Pod pod-projected-configmaps-218fe181-c35d-41df-8658-ebb53a62efeb no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 23:52:15.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3444" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":29,"skipped":421,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 23:52:15.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 29 23:52:16.307: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 29 23:52:18.318: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726393136, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726393136, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726393136, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726393136, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 29 23:52:21.371: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 29 23:52:21.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3117-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 23:52:22.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4535" for this suite. STEP: Destroying namespace "webhook-4535-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.993 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":288,"completed":30,"skipped":431,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 23:52:22.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all May 29 23:52:22.818: INFO: Waiting up to 5m0s for pod "client-containers-d39b0299-2988-4f11-86cf-86cc049f4e38" in namespace "containers-7899" to be "Succeeded or Failed" May 29 23:52:22.828: INFO: Pod "client-containers-d39b0299-2988-4f11-86cf-86cc049f4e38": Phase="Pending", Reason="", readiness=false. Elapsed: 10.236743ms May 29 23:52:24.833: INFO: Pod "client-containers-d39b0299-2988-4f11-86cf-86cc049f4e38": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015164678s May 29 23:52:26.837: INFO: Pod "client-containers-d39b0299-2988-4f11-86cf-86cc049f4e38": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019410487s STEP: Saw pod success May 29 23:52:26.837: INFO: Pod "client-containers-d39b0299-2988-4f11-86cf-86cc049f4e38" satisfied condition "Succeeded or Failed" May 29 23:52:26.840: INFO: Trying to get logs from node latest-worker2 pod client-containers-d39b0299-2988-4f11-86cf-86cc049f4e38 container test-container: STEP: delete the pod May 29 23:52:26.887: INFO: Waiting for pod client-containers-d39b0299-2988-4f11-86cf-86cc049f4e38 to disappear May 29 23:52:26.894: INFO: Pod client-containers-d39b0299-2988-4f11-86cf-86cc049f4e38 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 23:52:26.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7899" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":288,"completed":31,"skipped":457,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 23:52:26.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 29 23:52:27.000: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-ca12b765-4482-4a0e-bfbc-0eb32be13d25" in namespace "security-context-test-8155" to be "Succeeded or Failed" May 29 23:52:27.053: INFO: Pod "alpine-nnp-false-ca12b765-4482-4a0e-bfbc-0eb32be13d25": Phase="Pending", Reason="", readiness=false. Elapsed: 52.634611ms May 29 23:52:29.056: INFO: Pod "alpine-nnp-false-ca12b765-4482-4a0e-bfbc-0eb32be13d25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056489947s May 29 23:52:31.060: INFO: Pod "alpine-nnp-false-ca12b765-4482-4a0e-bfbc-0eb32be13d25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060395356s May 29 23:52:31.060: INFO: Pod "alpine-nnp-false-ca12b765-4482-4a0e-bfbc-0eb32be13d25" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 23:52:31.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8155" for this suite. 
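------------------------------
Background for the security-context spec above: the property under test is the container-level SecurityContext.AllowPrivilegeEscalation field, which maps to the kernel's no_new_privs flag. A minimal client-go sketch of the kind of pod the suite builds follows; the pod name, the namespace "default", the alpine image tag, and the UID are illustrative assumptions, not values taken from this run.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func boolPtr(b bool) *bool    { return &b }
func int64Ptr(i int64) *int64 { return &i }

func main() {
	// Assumes a reachable cluster via the default kubeconfig location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "alpine-nnp-false-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "demo",
				Image:   "alpine:3.11",
				Command: []string{"sh", "-c", "id -u"},
				SecurityContext: &corev1.SecurityContext{
					RunAsUser:                int64Ptr(1000), // non-root UID
					AllowPrivilegeEscalation: boolPtr(false), // forbid gaining privileges via setuid/file caps
				},
			}},
		},
	}
	created, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod", created.Name)
}

As in the spec, a caller would then poll the pod until Phase="Succeeded" and inspect the container output to confirm the process could not escalate.
------------------------------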
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":32,"skipped":482,"failed":0} S ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 23:52:31.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 29 23:52:31.174: INFO: Create a RollingUpdate DaemonSet May 29 23:52:31.178: INFO: Check that daemon pods launch on every node of the cluster May 29 23:52:31.205: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 23:52:31.220: INFO: Number of nodes with available pods: 0 May 29 23:52:31.220: INFO: Node latest-worker is running more than one daemon pod May 29 23:52:32.224: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 23:52:32.227: INFO: Number of nodes with available pods: 0 May 29 23:52:32.227: INFO: Node latest-worker is running more than one daemon pod May 29 23:52:33.226: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 23:52:33.229: INFO: Number of nodes with available pods: 0 May 29 23:52:33.229: INFO: Node latest-worker is running more than one daemon pod May 29 23:52:34.373: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 23:52:34.411: INFO: Number of nodes with available pods: 0 May 29 23:52:34.411: INFO: Node latest-worker is running more than one daemon pod May 29 23:52:35.226: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 23:52:35.230: INFO: Number of nodes with available pods: 0 May 29 23:52:35.230: INFO: Node latest-worker is running more than one daemon pod May 29 23:52:36.225: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 23:52:36.293: INFO: Number of nodes with available pods: 2 May 29 23:52:36.293: INFO: Number of running nodes: 2, number of available pods: 2 May 29 23:52:36.293: INFO: Update the DaemonSet to trigger a rollout May 29 23:52:36.320: INFO: Updating DaemonSet daemon-set May 29 23:52:45.362: INFO: Roll back the DaemonSet before rollout is complete May 29 
23:52:45.369: INFO: Updating DaemonSet daemon-set May 29 23:52:45.369: INFO: Make sure DaemonSet rollback is complete May 29 23:52:45.378: INFO: Wrong image for pod: daemon-set-6htfl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 29 23:52:45.378: INFO: Pod daemon-set-6htfl is not available May 29 23:52:45.432: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 23:52:46.436: INFO: Wrong image for pod: daemon-set-6htfl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 29 23:52:46.436: INFO: Pod daemon-set-6htfl is not available May 29 23:52:46.440: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 23:52:47.503: INFO: Wrong image for pod: daemon-set-6htfl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 29 23:52:47.503: INFO: Pod daemon-set-6htfl is not available May 29 23:52:47.506: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 23:52:48.435: INFO: Pod daemon-set-5dmkp is not available May 29 23:52:48.444: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9504, will wait for the garbage collector to delete the pods May 29 23:52:48.511: INFO: Deleting DaemonSet.extensions daemon-set took: 7.514292ms May 29 23:52:48.911: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.205606ms May 29 23:52:52.415: INFO: Number of nodes with available pods: 0 May 29 23:52:52.415: INFO: Number of running nodes: 0, number of available pods: 0 May 29 23:52:52.418: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9504/daemonsets","resourceVersion":"8729637"},"items":null} May 29 23:52:52.421: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9504/pods","resourceVersion":"8729637"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 23:52:52.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9504" for this suite. 
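------------------------------
A note on the DaemonSet rollback exercised above: no dedicated rollback verb is involved in this flow. The suite breaks the rollout by switching the pod template to the unpullable image foo:non-existent and then "rolls back" by restoring the previous template, asserting that pods still matching the old template are not restarted. A minimal client-go sketch of that sequence, assuming a reachable cluster, the namespace "default", and illustrative names:

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	dsClient := client.AppsV1().DaemonSets("default")

	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate is what makes the image changes below roll out pod by pod.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{Type: appsv1.RollingUpdateDaemonSetStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "app",
					Image: "docker.io/library/httpd:2.4.38-alpine",
				}}},
			},
		},
	}
	if _, err := dsClient.Create(ctx, ds, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Trigger a rollout that can never become available.
	cur, err := dsClient.Get(ctx, "daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	cur.Spec.Template.Spec.Containers[0].Image = "foo:non-existent"
	if _, err := dsClient.Update(ctx, cur, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Roll back by restoring the old template; healthy pods already running the
	// old image match the restored template and are left alone, which is the
	// "without unnecessary restarts" property the spec asserts.
	cur, err = dsClient.Get(ctx, "daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	cur.Spec.Template.Spec.Containers[0].Image = "docker.io/library/httpd:2.4.38-alpine"
	if _, err := dsClient.Update(ctx, cur, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------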
• [SLOW TEST:21.364 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":288,"completed":33,"skipped":483,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 23:52:52.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-472 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 29 23:52:52.584: INFO: Found 0 stateful pods, waiting for 3 May 29 23:53:02.610: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 29 23:53:02.610: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 29 23:53:02.610: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 29 23:53:12.590: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 29 23:53:12.590: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 29 23:53:12.590: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 29 23:53:12.644: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-472 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 29 23:53:12.990: INFO: stderr: "I0529 23:53:12.809106 388 log.go:172] (0xc000a9f340) (0xc000aa05a0) Create stream\nI0529 23:53:12.809510 388 log.go:172] (0xc000a9f340) (0xc000aa05a0) Stream added, broadcasting: 1\nI0529 23:53:12.845817 388 log.go:172] (0xc000a9f340) Reply frame received for 1\nI0529 23:53:12.845894 388 log.go:172] (0xc000a9f340) (0xc0005fcf00) Create stream\nI0529 23:53:12.845928 388 log.go:172] (0xc000a9f340) (0xc0005fcf00) Stream added, broadcasting: 3\nI0529 23:53:12.847118 388 log.go:172] (0xc000a9f340) Reply frame received for 3\nI0529 23:53:12.847183 388 log.go:172] (0xc000a9f340) (0xc000388dc0) Create stream\nI0529 23:53:12.847199 388 log.go:172] (0xc000a9f340) (0xc000388dc0) Stream added, broadcasting: 5\nI0529 23:53:12.848301 388 log.go:172] (0xc000a9f340) Reply frame received for 5\nI0529 
23:53:12.932762 388 log.go:172] (0xc000a9f340) Data frame received for 5\nI0529 23:53:12.932800 388 log.go:172] (0xc000388dc0) (5) Data frame handling\nI0529 23:53:12.932821 388 log.go:172] (0xc000388dc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0529 23:53:12.984170 388 log.go:172] (0xc000a9f340) Data frame received for 3\nI0529 23:53:12.984212 388 log.go:172] (0xc0005fcf00) (3) Data frame handling\nI0529 23:53:12.984242 388 log.go:172] (0xc0005fcf00) (3) Data frame sent\nI0529 23:53:12.984260 388 log.go:172] (0xc000a9f340) Data frame received for 3\nI0529 23:53:12.984276 388 log.go:172] (0xc0005fcf00) (3) Data frame handling\nI0529 23:53:12.984639 388 log.go:172] (0xc000a9f340) Data frame received for 5\nI0529 23:53:12.984680 388 log.go:172] (0xc000388dc0) (5) Data frame handling\nI0529 23:53:12.986118 388 log.go:172] (0xc000a9f340) Data frame received for 1\nI0529 23:53:12.986140 388 log.go:172] (0xc000aa05a0) (1) Data frame handling\nI0529 23:53:12.986154 388 log.go:172] (0xc000aa05a0) (1) Data frame sent\nI0529 23:53:12.986169 388 log.go:172] (0xc000a9f340) (0xc000aa05a0) Stream removed, broadcasting: 1\nI0529 23:53:12.986310 388 log.go:172] (0xc000a9f340) Go away received\nI0529 23:53:12.986464 388 log.go:172] (0xc000a9f340) (0xc000aa05a0) Stream removed, broadcasting: 1\nI0529 23:53:12.986483 388 log.go:172] (0xc000a9f340) (0xc0005fcf00) Stream removed, broadcasting: 3\nI0529 23:53:12.986493 388 log.go:172] (0xc000a9f340) (0xc000388dc0) Stream removed, broadcasting: 5\n" May 29 23:53:12.990: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 29 23:53:12.990: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 29 23:53:23.024: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 29 23:53:33.120: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-472 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 29 23:53:33.377: INFO: stderr: "I0529 23:53:33.262002 410 log.go:172] (0xc000a8aa50) (0xc000611cc0) Create stream\nI0529 23:53:33.262055 410 log.go:172] (0xc000a8aa50) (0xc000611cc0) Stream added, broadcasting: 1\nI0529 23:53:33.266786 410 log.go:172] (0xc000a8aa50) Reply frame received for 1\nI0529 23:53:33.266818 410 log.go:172] (0xc000a8aa50) (0xc00065fc20) Create stream\nI0529 23:53:33.266827 410 log.go:172] (0xc000a8aa50) (0xc00065fc20) Stream added, broadcasting: 3\nI0529 23:53:33.267815 410 log.go:172] (0xc000a8aa50) Reply frame received for 3\nI0529 23:53:33.267870 410 log.go:172] (0xc000a8aa50) (0xc00055c5a0) Create stream\nI0529 23:53:33.267889 410 log.go:172] (0xc000a8aa50) (0xc00055c5a0) Stream added, broadcasting: 5\nI0529 23:53:33.268839 410 log.go:172] (0xc000a8aa50) Reply frame received for 5\nI0529 23:53:33.366312 410 log.go:172] (0xc000a8aa50) Data frame received for 5\nI0529 23:53:33.366332 410 log.go:172] (0xc00055c5a0) (5) Data frame handling\nI0529 23:53:33.366343 410 log.go:172] (0xc00055c5a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0529 23:53:33.373034 410 log.go:172] (0xc000a8aa50) Data frame received for 3\nI0529 23:53:33.373051 410 log.go:172] (0xc00065fc20) (3) Data 
frame handling\nI0529 23:53:33.373068 410 log.go:172] (0xc00065fc20) (3) Data frame sent\nI0529 23:53:33.373557 410 log.go:172] (0xc000a8aa50) Data frame received for 5\nI0529 23:53:33.373581 410 log.go:172] (0xc00055c5a0) (5) Data frame handling\nI0529 23:53:33.373649 410 log.go:172] (0xc000a8aa50) Data frame received for 3\nI0529 23:53:33.373661 410 log.go:172] (0xc00065fc20) (3) Data frame handling\nI0529 23:53:33.374765 410 log.go:172] (0xc000a8aa50) Data frame received for 1\nI0529 23:53:33.374786 410 log.go:172] (0xc000611cc0) (1) Data frame handling\nI0529 23:53:33.374799 410 log.go:172] (0xc000611cc0) (1) Data frame sent\nI0529 23:53:33.374809 410 log.go:172] (0xc000a8aa50) (0xc000611cc0) Stream removed, broadcasting: 1\nI0529 23:53:33.374859 410 log.go:172] (0xc000a8aa50) Go away received\nI0529 23:53:33.375007 410 log.go:172] (0xc000a8aa50) (0xc000611cc0) Stream removed, broadcasting: 1\nI0529 23:53:33.375018 410 log.go:172] (0xc000a8aa50) (0xc00065fc20) Stream removed, broadcasting: 3\nI0529 23:53:33.375023 410 log.go:172] (0xc000a8aa50) (0xc00055c5a0) Stream removed, broadcasting: 5\n" May 29 23:53:33.377: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 29 23:53:33.377: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 29 23:53:43.399: INFO: Waiting for StatefulSet statefulset-472/ss2 to complete update May 29 23:53:43.399: INFO: Waiting for Pod statefulset-472/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 29 23:53:43.399: INFO: Waiting for Pod statefulset-472/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 29 23:53:43.399: INFO: Waiting for Pod statefulset-472/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 29 23:53:53.408: INFO: Waiting for StatefulSet statefulset-472/ss2 to complete update May 29 23:53:53.408: INFO: Waiting for Pod statefulset-472/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision May 29 23:54:03.409: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-472 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 29 23:54:03.700: INFO: stderr: "I0529 23:54:03.561829 430 log.go:172] (0xc00003a840) (0xc000637d60) Create stream\nI0529 23:54:03.561877 430 log.go:172] (0xc00003a840) (0xc000637d60) Stream added, broadcasting: 1\nI0529 23:54:03.564442 430 log.go:172] (0xc00003a840) Reply frame received for 1\nI0529 23:54:03.564490 430 log.go:172] (0xc00003a840) (0xc0004e66e0) Create stream\nI0529 23:54:03.564506 430 log.go:172] (0xc00003a840) (0xc0004e66e0) Stream added, broadcasting: 3\nI0529 23:54:03.565825 430 log.go:172] (0xc00003a840) Reply frame received for 3\nI0529 23:54:03.565869 430 log.go:172] (0xc00003a840) (0xc0004e6be0) Create stream\nI0529 23:54:03.565883 430 log.go:172] (0xc00003a840) (0xc0004e6be0) Stream added, broadcasting: 5\nI0529 23:54:03.566968 430 log.go:172] (0xc00003a840) Reply frame received for 5\nI0529 23:54:03.661782 430 log.go:172] (0xc00003a840) Data frame received for 5\nI0529 23:54:03.661815 430 log.go:172] (0xc0004e6be0) (5) Data frame handling\nI0529 23:54:03.661837 430 log.go:172] (0xc0004e6be0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0529 23:54:03.692065 430 log.go:172] (0xc00003a840) Data frame received for 3\nI0529 
23:54:03.692083 430 log.go:172] (0xc0004e66e0) (3) Data frame handling\nI0529 23:54:03.692098 430 log.go:172] (0xc0004e66e0) (3) Data frame sent\nI0529 23:54:03.692400 430 log.go:172] (0xc00003a840) Data frame received for 5\nI0529 23:54:03.692496 430 log.go:172] (0xc0004e6be0) (5) Data frame handling\nI0529 23:54:03.692683 430 log.go:172] (0xc00003a840) Data frame received for 3\nI0529 23:54:03.692703 430 log.go:172] (0xc0004e66e0) (3) Data frame handling\nI0529 23:54:03.694620 430 log.go:172] (0xc00003a840) Data frame received for 1\nI0529 23:54:03.694635 430 log.go:172] (0xc000637d60) (1) Data frame handling\nI0529 23:54:03.694658 430 log.go:172] (0xc000637d60) (1) Data frame sent\nI0529 23:54:03.694668 430 log.go:172] (0xc00003a840) (0xc000637d60) Stream removed, broadcasting: 1\nI0529 23:54:03.694798 430 log.go:172] (0xc00003a840) Go away received\nI0529 23:54:03.694940 430 log.go:172] (0xc00003a840) (0xc000637d60) Stream removed, broadcasting: 1\nI0529 23:54:03.695014 430 log.go:172] (0xc00003a840) (0xc0004e66e0) Stream removed, broadcasting: 3\nI0529 23:54:03.695021 430 log.go:172] (0xc00003a840) (0xc0004e6be0) Stream removed, broadcasting: 5\n" May 29 23:54:03.700: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 29 23:54:03.700: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 29 23:54:13.733: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 29 23:54:23.799: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-472 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 29 23:54:24.051: INFO: stderr: "I0529 23:54:23.948756 453 log.go:172] (0xc0009cd3f0) (0xc000b686e0) Create stream\nI0529 23:54:23.948823 453 log.go:172] (0xc0009cd3f0) (0xc000b686e0) Stream added, broadcasting: 1\nI0529 23:54:23.953013 453 log.go:172] (0xc0009cd3f0) Reply frame received for 1\nI0529 23:54:23.953050 453 log.go:172] (0xc0009cd3f0) (0xc00055c640) Create stream\nI0529 23:54:23.953059 453 log.go:172] (0xc0009cd3f0) (0xc00055c640) Stream added, broadcasting: 3\nI0529 23:54:23.954228 453 log.go:172] (0xc0009cd3f0) Reply frame received for 3\nI0529 23:54:23.954274 453 log.go:172] (0xc0009cd3f0) (0xc00055cb40) Create stream\nI0529 23:54:23.954293 453 log.go:172] (0xc0009cd3f0) (0xc00055cb40) Stream added, broadcasting: 5\nI0529 23:54:23.955087 453 log.go:172] (0xc0009cd3f0) Reply frame received for 5\nI0529 23:54:24.043281 453 log.go:172] (0xc0009cd3f0) Data frame received for 3\nI0529 23:54:24.043339 453 log.go:172] (0xc00055c640) (3) Data frame handling\nI0529 23:54:24.043373 453 log.go:172] (0xc00055c640) (3) Data frame sent\nI0529 23:54:24.043390 453 log.go:172] (0xc0009cd3f0) Data frame received for 3\nI0529 23:54:24.043405 453 log.go:172] (0xc00055c640) (3) Data frame handling\nI0529 23:54:24.043438 453 log.go:172] (0xc0009cd3f0) Data frame received for 5\nI0529 23:54:24.043453 453 log.go:172] (0xc00055cb40) (5) Data frame handling\nI0529 23:54:24.043477 453 log.go:172] (0xc00055cb40) (5) Data frame sent\nI0529 23:54:24.043500 453 log.go:172] (0xc0009cd3f0) Data frame received for 5\nI0529 23:54:24.043515 453 log.go:172] (0xc00055cb40) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0529 23:54:24.045345 453 log.go:172] (0xc0009cd3f0) Data frame received for 1\nI0529 23:54:24.045363 453 log.go:172] 
(0xc000b686e0) (1) Data frame handling\nI0529 23:54:24.045377 453 log.go:172] (0xc000b686e0) (1) Data frame sent\nI0529 23:54:24.045628 453 log.go:172] (0xc0009cd3f0) (0xc000b686e0) Stream removed, broadcasting: 1\nI0529 23:54:24.045746 453 log.go:172] (0xc0009cd3f0) Go away received\nI0529 23:54:24.045948 453 log.go:172] (0xc0009cd3f0) (0xc000b686e0) Stream removed, broadcasting: 1\nI0529 23:54:24.045974 453 log.go:172] (0xc0009cd3f0) (0xc00055c640) Stream removed, broadcasting: 3\nI0529 23:54:24.045990 453 log.go:172] (0xc0009cd3f0) (0xc00055cb40) Stream removed, broadcasting: 5\n" May 29 23:54:24.051: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 29 23:54:24.051: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 29 23:54:44.093: INFO: Waiting for StatefulSet statefulset-472/ss2 to complete update May 29 23:54:44.093: INFO: Waiting for Pod statefulset-472/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 29 23:54:54.101: INFO: Deleting all statefulset in ns statefulset-472 May 29 23:54:54.104: INFO: Scaling statefulset ss2 to 0 May 29 23:55:24.144: INFO: Waiting for statefulset status.replicas updated to 0 May 29 23:55:24.147: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 23:55:24.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-472" for this suite. • [SLOW TEST:151.764 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":288,"completed":34,"skipped":485,"failed":0} SSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 23:55:24.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8871.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8871.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: 
for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8871.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8871.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 29 23:55:32.346: INFO: DNS probes using dns-test-f8561e2b-2845-457f-924c-669e17aa8e58 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8871.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8871.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8871.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8871.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 29 23:55:40.503: INFO: File wheezy_udp@dns-test-service-3.dns-8871.svc.cluster.local from pod dns-8871/dns-test-6237d7a4-61fd-4d60-b41a-87bd6d8724eb contains 'foo.example.com. ' instead of 'bar.example.com.' May 29 23:55:40.506: INFO: File jessie_udp@dns-test-service-3.dns-8871.svc.cluster.local from pod dns-8871/dns-test-6237d7a4-61fd-4d60-b41a-87bd6d8724eb contains 'foo.example.com. ' instead of 'bar.example.com.' May 29 23:55:40.506: INFO: Lookups using dns-8871/dns-test-6237d7a4-61fd-4d60-b41a-87bd6d8724eb failed for: [wheezy_udp@dns-test-service-3.dns-8871.svc.cluster.local jessie_udp@dns-test-service-3.dns-8871.svc.cluster.local] May 29 23:55:45.512: INFO: File wheezy_udp@dns-test-service-3.dns-8871.svc.cluster.local from pod dns-8871/dns-test-6237d7a4-61fd-4d60-b41a-87bd6d8724eb contains 'foo.example.com. ' instead of 'bar.example.com.' May 29 23:55:45.516: INFO: File jessie_udp@dns-test-service-3.dns-8871.svc.cluster.local from pod dns-8871/dns-test-6237d7a4-61fd-4d60-b41a-87bd6d8724eb contains 'foo.example.com. ' instead of 'bar.example.com.' May 29 23:55:45.516: INFO: Lookups using dns-8871/dns-test-6237d7a4-61fd-4d60-b41a-87bd6d8724eb failed for: [wheezy_udp@dns-test-service-3.dns-8871.svc.cluster.local jessie_udp@dns-test-service-3.dns-8871.svc.cluster.local] May 29 23:55:50.511: INFO: File wheezy_udp@dns-test-service-3.dns-8871.svc.cluster.local from pod dns-8871/dns-test-6237d7a4-61fd-4d60-b41a-87bd6d8724eb contains 'foo.example.com. ' instead of 'bar.example.com.' May 29 23:55:50.515: INFO: File jessie_udp@dns-test-service-3.dns-8871.svc.cluster.local from pod dns-8871/dns-test-6237d7a4-61fd-4d60-b41a-87bd6d8724eb contains 'foo.example.com. ' instead of 'bar.example.com.' May 29 23:55:50.515: INFO: Lookups using dns-8871/dns-test-6237d7a4-61fd-4d60-b41a-87bd6d8724eb failed for: [wheezy_udp@dns-test-service-3.dns-8871.svc.cluster.local jessie_udp@dns-test-service-3.dns-8871.svc.cluster.local] May 29 23:55:55.511: INFO: File wheezy_udp@dns-test-service-3.dns-8871.svc.cluster.local from pod dns-8871/dns-test-6237d7a4-61fd-4d60-b41a-87bd6d8724eb contains 'foo.example.com. ' instead of 'bar.example.com.' May 29 23:55:55.515: INFO: File jessie_udp@dns-test-service-3.dns-8871.svc.cluster.local from pod dns-8871/dns-test-6237d7a4-61fd-4d60-b41a-87bd6d8724eb contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 29 23:55:55.515: INFO: Lookups using dns-8871/dns-test-6237d7a4-61fd-4d60-b41a-87bd6d8724eb failed for: [wheezy_udp@dns-test-service-3.dns-8871.svc.cluster.local jessie_udp@dns-test-service-3.dns-8871.svc.cluster.local] May 29 23:56:00.515: INFO: DNS probes using dns-test-6237d7a4-61fd-4d60-b41a-87bd6d8724eb succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8871.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8871.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8871.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8871.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 29 23:56:07.303: INFO: DNS probes using dns-test-5a470c45-2eaf-4b81-b559-569329b6ab1d succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 23:56:07.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8871" for this suite. • [SLOW TEST:43.245 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":288,"completed":35,"skipped":491,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 23:56:07.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1523 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 29 23:56:07.537: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6156' May 29 23:56:07.741: INFO: stderr: "" May 29 23:56:07.741: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1528 May 29 23:56:07.905: 
INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6156' May 29 23:56:11.903: INFO: stderr: "" May 29 23:56:11.903: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 23:56:11.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6156" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":288,"completed":36,"skipped":525,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 23:56:11.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium May 29 23:56:12.039: INFO: Waiting up to 5m0s for pod "pod-af67b118-6430-43e5-859c-67968a3834ae" in namespace "emptydir-4350" to be "Succeeded or Failed" May 29 23:56:12.043: INFO: Pod "pod-af67b118-6430-43e5-859c-67968a3834ae": Phase="Pending", Reason="", readiness=false. Elapsed: 3.885193ms May 29 23:56:14.133: INFO: Pod "pod-af67b118-6430-43e5-859c-67968a3834ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093717444s May 29 23:56:16.223: INFO: Pod "pod-af67b118-6430-43e5-859c-67968a3834ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.184190858s STEP: Saw pod success May 29 23:56:16.223: INFO: Pod "pod-af67b118-6430-43e5-859c-67968a3834ae" satisfied condition "Succeeded or Failed" May 29 23:56:16.226: INFO: Trying to get logs from node latest-worker2 pod pod-af67b118-6430-43e5-859c-67968a3834ae container test-container: STEP: delete the pod May 29 23:56:16.274: INFO: Waiting for pod pod-af67b118-6430-43e5-859c-67968a3834ae to disappear May 29 23:56:16.280: INFO: Pod pod-af67b118-6430-43e5-859c-67968a3834ae no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 23:56:16.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4350" for this suite. 
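------------------------------
Background for the emptydir spec above: "(non-root,0666,default)" means a non-root container writing a file with mode 0666 on an emptyDir volume backed by the default medium (node disk, as opposed to Medium: "Memory"). A sketch of an equivalent pod; the suite uses its own mount-test image, so the busybox image, the umask trick, and all names here are assumptions:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0666-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox:1.31",
				// Create a file with mode 0666 as UID 1000, then print the mode.
				Command:         []string{"sh", "-c", "umask 0 && echo data > /mnt/volume/f && stat -c %a /mnt/volume/f"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: int64Ptr(1000)},
				VolumeMounts:    []corev1.VolumeMount{{Name: "vol", MountPath: "/mnt/volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "vol",
				// An empty EmptyDirVolumeSource selects the default medium (node disk).
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------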
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":37,"skipped":544,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 23:56:16.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 29 23:56:16.772: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created May 29 23:56:18.800: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726393376, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726393376, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726393376, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726393376, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 29 23:56:21.842: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook May 29 23:56:21.866: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 23:56:21.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3558" for this suite. STEP: Destroying namespace "webhook-3558-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.774 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":288,"completed":38,"skipped":578,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 23:56:22.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted May 29 23:56:35.268: INFO: 5 pods remaining May 29 23:56:35.268: INFO: 5 pods has nil DeletionTimestamp May 29 23:56:35.268: INFO: STEP: Gathering metrics W0529 23:56:40.058821 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 29 23:56:40.058: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 23:56:40.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-334" for this suite. 
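------------------------------
The garbage-collector property verified above — a dependent that still has a live owner survives the deletion of its other owner — holds for any object carrying metadata.ownerReferences, not just replication controllers and pods. A deliberately simplified sketch using ConfigMaps as both owners and dependent (all names illustrative; the suite itself uses two RCs and their pods):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	cms := client.CoreV1().ConfigMaps("default")

	toDelete, err := cms.Create(ctx, &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Name: "owner-to-be-deleted"}}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	toStay, err := cms.Create(ctx, &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Name: "owner-to-stay"}}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// The dependent lists both objects as owners, mirroring the pods in the
	// spec that were given the second RC as an additional owner.
	dep := &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{
		Name: "dependent",
		OwnerReferences: []metav1.OwnerReference{
			{APIVersion: "v1", Kind: "ConfigMap", Name: toDelete.Name, UID: toDelete.UID},
			{APIVersion: "v1", Kind: "ConfigMap", Name: toStay.Name, UID: toStay.UID},
		},
	}}
	if _, err := cms.Create(ctx, dep, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Delete one owner with foreground propagation; because the dependent
	// still has a live owner, the garbage collector must not delete it.
	fg := metav1.DeletePropagationForeground
	if err := cms.Delete(ctx, toDelete.Name, metav1.DeleteOptions{PropagationPolicy: &fg}); err != nil {
		panic(err)
	}
}
------------------------------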
• [SLOW TEST:18.003 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":288,"completed":39,"skipped":601,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 23:56:40.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-77e609b1-dd24-4110-a44e-dc0c4240943d STEP: Creating a pod to test consume configMaps May 29 23:56:40.167: INFO: Waiting up to 5m0s for pod "pod-configmaps-ebe1b2b8-841e-49a0-8102-d8f755dc903a" in namespace "configmap-410" to be "Succeeded or Failed" May 29 23:56:40.193: INFO: Pod "pod-configmaps-ebe1b2b8-841e-49a0-8102-d8f755dc903a": Phase="Pending", Reason="", readiness=false. Elapsed: 26.331952ms May 29 23:56:42.198: INFO: Pod "pod-configmaps-ebe1b2b8-841e-49a0-8102-d8f755dc903a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030789228s May 29 23:56:44.203: INFO: Pod "pod-configmaps-ebe1b2b8-841e-49a0-8102-d8f755dc903a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035740851s STEP: Saw pod success May 29 23:56:44.203: INFO: Pod "pod-configmaps-ebe1b2b8-841e-49a0-8102-d8f755dc903a" satisfied condition "Succeeded or Failed" May 29 23:56:44.206: INFO: Trying to get logs from node latest-worker pod pod-configmaps-ebe1b2b8-841e-49a0-8102-d8f755dc903a container configmap-volume-test: STEP: delete the pod May 29 23:56:44.254: INFO: Waiting for pod pod-configmaps-ebe1b2b8-841e-49a0-8102-d8f755dc903a to disappear May 29 23:56:44.261: INFO: Pod pod-configmaps-ebe1b2b8-841e-49a0-8102-d8f755dc903a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 23:56:44.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-410" for this suite. 
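------------------------------
The ConfigMap spec above is about mounting the same ConfigMap into one pod through two distinct volumes. A minimal sketch; the busybox image, mount paths, and names are illustrative assumptions:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "shared-cm"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := client.CoreV1().ConfigMaps("default").Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Two volumes, both projecting the same ConfigMap.
	cmVolume := func(name string) corev1.Volume {
		return corev1.Volume{Name: name, VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "shared-cm"},
			},
		}}
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-two-volumes"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes:       []corev1.Volume{cmVolume("cm-vol-1"), cmVolume("cm-vol-2")},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox:1.31",
				Command: []string{"sh", "-c", "cat /etc/cm-1/data-1 /etc/cm-2/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "cm-vol-1", MountPath: "/etc/cm-1"},
					{Name: "cm-vol-2", MountPath: "/etc/cm-2"},
				},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------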
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":40,"skipped":608,"failed":0} SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 23:56:44.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-8172 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 29 23:56:44.386: INFO: Found 0 stateful pods, waiting for 3 May 29 23:56:54.390: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 29 23:56:54.390: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 29 23:56:54.390: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 29 23:57:04.400: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 29 23:57:04.400: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 29 23:57:04.400: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 29 23:57:04.430: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 29 23:57:14.474: INFO: Updating stateful set ss2 May 29 23:57:14.502: INFO: Waiting for Pod statefulset-8172/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 29 23:57:24.783: INFO: Found 2 stateful pods, waiting for 3 May 29 23:57:34.789: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 29 23:57:34.789: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 29 23:57:34.789: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 29 23:57:34.813: INFO: Updating stateful set ss2 May 29 23:57:34.900: INFO: Waiting for Pod statefulset-8172/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 29 23:57:44.927: INFO: Updating stateful set ss2 May 29 23:57:44.978: INFO: Waiting for StatefulSet statefulset-8172/ss2 to complete update May 29 23:57:44.978: 
INFO: Waiting for Pod statefulset-8172/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 29 23:57:54.987: INFO: Deleting all statefulset in ns statefulset-8172 May 29 23:57:54.990: INFO: Scaling statefulset ss2 to 0 May 29 23:58:25.030: INFO: Waiting for statefulset status.replicas updated to 0 May 29 23:58:25.035: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 23:58:25.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8172" for this suite. • [SLOW TEST:100.791 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":288,"completed":41,"skipped":612,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 23:58:25.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 29 23:58:31.700: INFO: Successfully updated pod "annotationupdate8ae6faaa-e99c-41f6-a9f3-8291175469c5" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 23:58:33.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7165" for this suite. 
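The StatefulSet spec above drives its canary via the RollingUpdate partition: after the template change, only pods whose ordinal is greater than or equal to spec.updateStrategy.rollingUpdate.partition are moved to the updated controller revision, and lowering the partition step by step produces the phased waits on ss2-2, then ss2-1, then ss2-0 seen in the log. A minimal Go sketch of that selection rule, with placeholder revision names; this is an illustration of the documented behavior, not the StatefulSet controller's code:

package main

import "fmt"

// revisionFor applies the documented partition rule: pods with
// ordinal >= partition receive the updated revision, the rest keep
// the current one.
func revisionFor(ordinal, partition int, currentRev, updateRev string) string {
	if ordinal >= partition {
		return updateRev
	}
	return currentRev
}

func main() {
	// ss2 has 3 replicas; a partition of 2 updates only the canary ss2-2,
	// and lowering it to 1 and then 0 phases in ss2-1 and ss2-0.
	for _, partition := range []int{2, 1, 0} {
		for ordinal := 0; ordinal < 3; ordinal++ {
			fmt.Printf("partition=%d ss2-%d -> %s\n",
				partition, ordinal, revisionFor(ordinal, partition, "old-rev", "new-rev"))
		}
	}
}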
• [SLOW TEST:8.679 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":42,"skipped":621,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 23:58:33.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
May 29 23:58:34.231: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
May 29 23:58:36.241: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726393514, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726393514, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726393514, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726393514, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 29 23:58:39.291: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 29 23:58:39.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 23:58:40.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-5090" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
• [SLOW TEST:7.082 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":288,"completed":43,"skipped":635,"failed":0}
SSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 23:58:40.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 29 23:58:40.880: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eb909b34-9672-4707-9df3-283ff7403a31" in namespace "downward-api-2016" to be "Succeeded or Failed"
May 29 23:58:40.937: INFO: Pod "downwardapi-volume-eb909b34-9672-4707-9df3-283ff7403a31": Phase="Pending", Reason="", readiness=false. Elapsed: 57.091513ms
May 29 23:58:42.955: INFO: Pod "downwardapi-volume-eb909b34-9672-4707-9df3-283ff7403a31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075621282s
May 29 23:58:44.960: INFO: Pod "downwardapi-volume-eb909b34-9672-4707-9df3-283ff7403a31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.080143932s
STEP: Saw pod success
May 29 23:58:44.960: INFO: Pod "downwardapi-volume-eb909b34-9672-4707-9df3-283ff7403a31" satisfied condition "Succeeded or Failed"
May 29 23:58:44.963: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-eb909b34-9672-4707-9df3-283ff7403a31 container client-container:
STEP: delete the pod
May 29 23:58:45.058: INFO: Waiting for pod downwardapi-volume-eb909b34-9672-4707-9df3-283ff7403a31 to disappear
May 29 23:58:45.097: INFO: Pod downwardapi-volume-eb909b34-9672-4707-9df3-283ff7403a31 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 23:58:45.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2016" for this suite.
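The CustomResourceConversionWebhook spec above deploys a webhook that the API server calls with a ConversionReview carrying objects at one version plus the desiredAPIVersion to convert them to; the webhook must echo back the request UID and the converted objects. A stdlib-only Go sketch of the conversion step, using hand-rolled stand-ins for the wire types (the real types live in the apiextensions.k8s.io/v1 API); field moves between v1 and v2 are elided here, so it only rewrites apiVersion:

package main

import (
	"encoding/json"
	"fmt"
)

// Minimal, hand-rolled stand-ins for the ConversionReview wire format
// (assumption: trimmed to the fields this sketch needs).
type conversionReview struct {
	Request  *conversionRequest  `json:"request,omitempty"`
	Response *conversionResponse `json:"response,omitempty"`
}

type conversionRequest struct {
	UID               string            `json:"uid"`
	DesiredAPIVersion string            `json:"desiredAPIVersion"`
	Objects           []json.RawMessage `json:"objects"`
}

type conversionResponse struct {
	UID              string            `json:"uid"`
	ConvertedObjects []json.RawMessage `json:"convertedObjects"`
	Result           map[string]string `json:"result"`
}

// convert rewrites each object's apiVersion to the desired version; a real
// webhook would also move or rename whatever fields differ between v1 and v2.
func convert(req *conversionRequest) *conversionResponse {
	out := make([]json.RawMessage, 0, len(req.Objects))
	for _, raw := range req.Objects {
		var obj map[string]interface{}
		if err := json.Unmarshal(raw, &obj); err != nil {
			return &conversionResponse{UID: req.UID, Result: map[string]string{"status": "Failure", "message": err.Error()}}
		}
		obj["apiVersion"] = req.DesiredAPIVersion
		b, _ := json.Marshal(obj)
		out = append(out, b)
	}
	return &conversionResponse{UID: req.UID, ConvertedObjects: out, Result: map[string]string{"status": "Success"}}
}

func main() {
	// Hypothetical group/versions, standing in for the test CRD's v1 and v2.
	in := conversionRequest{
		UID:               "example-uid",
		DesiredAPIVersion: "stable.example.com/v2",
		Objects:           []json.RawMessage{json.RawMessage(`{"apiVersion":"stable.example.com/v1","kind":"E2eTest"}`)},
	}
	b, _ := json.Marshal(conversionReview{Response: convert(&in)})
	fmt.Println(string(b))
}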
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":44,"skipped":641,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 23:58:45.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 23:58:45.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-310" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":288,"completed":45,"skipped":679,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 23:58:45.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 29 23:58:45.365: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 23:58:49.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8445" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":288,"completed":46,"skipped":703,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 23:58:49.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 23:59:49.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8531" for this suite. • [SLOW TEST:60.166 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":288,"completed":47,"skipped":713,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 23:59:49.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-8863 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-8863 STEP: Deleting pre-stop pod May 30 00:00:02.886: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:00:02.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-8863" for this suite. • [SLOW TEST:13.274 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":288,"completed":48,"skipped":735,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:00:02.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:00:09.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9477" for this suite. 
• [SLOW TEST:6.639 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:137
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":49,"skipped":769,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 30 00:00:09.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name secret-emptykey-test-8bc68f88-da07-4026-a211-dee9d93cac26
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 30 00:00:09.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1292" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":288,"completed":50,"skipped":795,"failed":0}
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 30 00:00:09.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-d26f8f5c-e524-4336-86b6-9e3bc80ba15e
STEP: Creating a pod to test consume secrets
May 30 00:00:10.001: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f6296563-a2f9-4038-8f0c-39d61886be9e" in namespace "projected-8154" to be "Succeeded or Failed"
May 30 00:00:10.004: INFO: Pod "pod-projected-secrets-f6296563-a2f9-4038-8f0c-39d61886be9e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.551478ms
May 30 00:00:12.010: INFO: Pod "pod-projected-secrets-f6296563-a2f9-4038-8f0c-39d61886be9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009277594s
May 30 00:00:14.014: INFO: Pod "pod-projected-secrets-f6296563-a2f9-4038-8f0c-39d61886be9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01367853s
STEP: Saw pod success
May 30 00:00:14.015: INFO: Pod "pod-projected-secrets-f6296563-a2f9-4038-8f0c-39d61886be9e" satisfied condition "Succeeded or Failed"
May 30 00:00:14.018: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-f6296563-a2f9-4038-8f0c-39d61886be9e container projected-secret-volume-test:
STEP: delete the pod
May 30 00:00:14.053: INFO: Waiting for pod pod-projected-secrets-f6296563-a2f9-4038-8f0c-39d61886be9e to disappear
May 30 00:00:14.086: INFO: Pod pod-projected-secrets-f6296563-a2f9-4038-8f0c-39d61886be9e no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 30 00:00:14.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8154" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":51,"skipped":795,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 30 00:00:14.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 30 00:00:14.674: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 30 00:00:16.695: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726393614, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726393614, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726393614, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726393614, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 30 00:00:19.770: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
May 30 00:00:23.877: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config attach --namespace=webhook-7642 to-be-attached-pod -i -c=container1'
May 30 00:00:26.869: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 30 00:00:26.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7642" for this suite.
STEP: Destroying namespace "webhook-7642-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:12.864 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":288,"completed":52,"skipped":796,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 30 00:00:26.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: fetching services
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 30 00:00:27.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7269" for this suite.
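The AdmissionWebhook deny-attach spec above registers a validating webhook scoped to the pods/attach subresource, which is why the kubectl attach above exits with rc: 1. A stdlib-only Go sketch of a handler that denies every request routed to it, using hand-rolled stand-ins for the AdmissionReview wire format (the real types are in the admission.k8s.io/v1 API); the path, port, and missing TLS setup are placeholders:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Minimal stand-ins for the AdmissionReview wire format (assumption:
// trimmed to the fields this sketch needs).
type admissionReview struct {
	APIVersion string             `json:"apiVersion"`
	Kind       string             `json:"kind"`
	Request    *admissionRequest  `json:"request,omitempty"`
	Response   *admissionResponse `json:"response,omitempty"`
}

type admissionRequest struct {
	UID         string `json:"uid"`
	SubResource string `json:"subResource"`
}

type admissionResponse struct {
	UID     string            `json:"uid"`
	Allowed bool              `json:"allowed"`
	Status  map[string]string `json:"status,omitempty"`
}

// denyAttach rejects every request it receives; the webhook configuration
// (not shown) is what scopes it to the pods/attach subresource.
func denyAttach(w http.ResponseWriter, r *http.Request) {
	var review admissionReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil || review.Request == nil {
		http.Error(w, "malformed AdmissionReview", http.StatusBadRequest)
		return
	}
	review.Response = &admissionResponse{
		UID:     review.Request.UID, // the response must echo the request UID
		Allowed: false,
		Status:  map[string]string{"message": "attaching to this pod is not allowed"},
	}
	review.Request = nil
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(review)
}

func main() {
	http.HandleFunc("/pods/attach", denyAttach) // hypothetical path
	fmt.Println("listening on :8443 (sketch only)")
	http.ListenAndServe(":8443", nil) // a real admission webhook must serve TLS
}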
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":288,"completed":53,"skipped":812,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 30 00:00:27.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-b49e665f-01d9-4c46-8b24-94f7a9c7dc41
STEP: Creating a pod to test consume secrets
May 30 00:00:27.153: INFO: Waiting up to 5m0s for pod "pod-secrets-6dcf39e3-bc70-4bd9-a2c3-eb75e73c583e" in namespace "secrets-9420" to be "Succeeded or Failed"
May 30 00:00:27.167: INFO: Pod "pod-secrets-6dcf39e3-bc70-4bd9-a2c3-eb75e73c583e": Phase="Pending", Reason="", readiness=false. Elapsed: 13.78305ms
May 30 00:00:29.171: INFO: Pod "pod-secrets-6dcf39e3-bc70-4bd9-a2c3-eb75e73c583e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018024807s
May 30 00:00:31.179: INFO: Pod "pod-secrets-6dcf39e3-bc70-4bd9-a2c3-eb75e73c583e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026152987s
STEP: Saw pod success
May 30 00:00:31.180: INFO: Pod "pod-secrets-6dcf39e3-bc70-4bd9-a2c3-eb75e73c583e" satisfied condition "Succeeded or Failed"
May 30 00:00:31.183: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-6dcf39e3-bc70-4bd9-a2c3-eb75e73c583e container secret-volume-test:
STEP: delete the pod
May 30 00:00:31.268: INFO: Waiting for pod pod-secrets-6dcf39e3-bc70-4bd9-a2c3-eb75e73c583e to disappear
May 30 00:00:31.335: INFO: Pod pod-secrets-6dcf39e3-bc70-4bd9-a2c3-eb75e73c583e no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 30 00:00:31.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9420" for this suite.
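The Deployment spec that follows verifies proportional scaling: when a deployment is scaled mid-rollout, the replica delta is split across its ReplicaSets in proportion to their current sizes, so the rollout ratio is preserved. In the log below, the old ReplicaSet sits at 8 and the new one at 5 when the deployment is scaled from 10 to 30; with maxSurge 3 the ReplicaSet total may grow from the current 13 to 33, a delta of 20, and the test then verifies .spec.replicas of 20 and 13 respectively. A simplified Go model of that arithmetic (floor the proportional shares, then hand out the rounding leftover); the real controller applies additional bounds, so this is an illustration, not its code:

package main

import "fmt"

// proportionalScale splits a scale-up delta across ReplicaSets by their
// share of current replicas; flooring leftovers are handed out one at a
// time, starting here from the first (newest) entry.
func proportionalScale(current []int, delta int) []int {
	total := 0
	for _, c := range current {
		total += c
	}
	out := make([]int, len(current))
	given := 0
	for i, c := range current {
		share := delta * c / total // floor of the proportional share
		out[i] = c + share
		given += share
	}
	for i := 0; given < delta; i = (i + 1) % len(out) {
		out[i]++ // distribute the rounding leftover
		given++
	}
	return out
}

func main() {
	// From the log: new RS at 5, old RS at 8; the mid-rollout total of 13
	// may grow by 20 (deployment scaled to 30, plus maxSurge 3).
	fmt.Println(proportionalScale([]int{5, 8}, 20)) // => [13 20], matching the verified .spec.replicas below
}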
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":54,"skipped":819,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:00:31.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 30 00:00:31.458: INFO: Creating deployment "webserver-deployment" May 30 00:00:31.473: INFO: Waiting for observed generation 1 May 30 00:00:33.557: INFO: Waiting for all required pods to come up May 30 00:00:33.601: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running May 30 00:00:43.794: INFO: Waiting for deployment "webserver-deployment" to complete May 30 00:00:43.800: INFO: Updating deployment "webserver-deployment" with a non-existent image May 30 00:00:43.808: INFO: Updating deployment webserver-deployment May 30 00:00:43.808: INFO: Waiting for observed generation 2 May 30 00:00:45.920: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 30 00:00:45.923: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 30 00:00:45.941: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 30 00:00:45.968: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 30 00:00:45.968: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 30 00:00:45.970: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 30 00:00:45.973: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas May 30 00:00:45.973: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 May 30 00:00:45.980: INFO: Updating deployment webserver-deployment May 30 00:00:45.980: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas May 30 00:00:46.634: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 30 00:00:47.178: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 30 00:00:49.619: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-6792 /apis/apps/v1/namespaces/deployment-6792/deployments/webserver-deployment e5c44789-6e0d-4531-92c8-7737f85494f9 8732870 3 2020-05-30 00:00:31 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-30 00:00:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-30 00:00:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0033e0608 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-30 00:00:46 +0000 UTC,LastTransitionTime:2020-05-30 00:00:46 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-05-30 00:00:47 +0000 UTC,LastTransitionTime:2020-05-30 00:00:31 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 30 00:00:50.208: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4 deployment-6792 /apis/apps/v1/namespaces/deployment-6792/replicasets/webserver-deployment-6676bcd6d4 32ed29f4-6fd4-4136-bf33-54c454f55a01 8732856 3 2020-05-30 00:00:43 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] 
[{apps/v1 Deployment webserver-deployment e5c44789-6e0d-4531-92c8-7737f85494f9 0xc003280117 0xc003280118}] [] [{kube-controller-manager Update apps/v1 2020-05-30 00:00:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e5c44789-6e0d-4531-92c8-7737f85494f9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003280198 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 30 00:00:50.208: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 30 00:00:50.208: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797 deployment-6792 /apis/apps/v1/namespaces/deployment-6792/replicasets/webserver-deployment-84855cf797 8a4576f5-e94c-46f6-b812-5dc627030b6b 8732864 3 2020-05-30 00:00:31 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment e5c44789-6e0d-4531-92c8-7737f85494f9 0xc0032801f7 0xc0032801f8}] [] [{kube-controller-manager Update apps/v1 2020-05-30 00:00:48 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e5c44789-6e0d-4531-92c8-7737f85494f9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003280268 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 30 00:00:50.394: INFO: Pod "webserver-deployment-6676bcd6d4-2mg7t" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-2mg7t webserver-deployment-6676bcd6d4- deployment-6792 /api/v1/namespaces/deployment-6792/pods/webserver-deployment-6676bcd6d4-2mg7t b8a286f7-906c-4de8-b6f8-c29abfce9849 8732828 0 2020-05-30 00:00:46 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 32ed29f4-6fd4-4136-bf33-54c454f55a01 0xc001d519e7 0xc001d519e8}] [] [{kube-controller-manager Update v1 2020-05-30 00:00:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32ed29f4-6fd4-4136-bf33-54c454f55a01\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-29wpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-29wpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-29wpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:47 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 00:00:50.395: INFO: Pod "webserver-deployment-6676bcd6d4-524fs" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-524fs webserver-deployment-6676bcd6d4- deployment-6792 /api/v1/namespaces/deployment-6792/pods/webserver-deployment-6676bcd6d4-524fs fae2f599-ee0b-4944-9510-010ae3ba0df7 8732827 0 2020-05-30 00:00:46 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 32ed29f4-6fd4-4136-bf33-54c454f55a01 0xc001d51b27 0xc001d51b28}] [] [{kube-controller-manager Update v1 2020-05-30 00:00:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32ed29f4-6fd4-4136-bf33-54c454f55a01\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-29wpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-29wpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-29wpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainer
s:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 00:00:50.395: INFO: Pod "webserver-deployment-6676bcd6d4-5kjz7" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-5kjz7 webserver-deployment-6676bcd6d4- deployment-6792 /api/v1/namespaces/deployment-6792/pods/webserver-deployment-6676bcd6d4-5kjz7 9de16bf2-19ba-4577-b354-6b8adb4b8aae 8732885 0 2020-05-30 00:00:46 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 32ed29f4-6fd4-4136-bf33-54c454f55a01 0xc001d51c67 0xc001d51c68}] [] [{kube-controller-manager Update v1 2020-05-30 00:00:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32ed29f4-6fd4-4136-bf33-54c454f55a01\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-30 00:00:49 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-29wpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-29wpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-29wpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:47 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-30 00:00:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 00:00:50.395: INFO: Pod "webserver-deployment-6676bcd6d4-5mdp2" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-5mdp2 webserver-deployment-6676bcd6d4- deployment-6792 /api/v1/namespaces/deployment-6792/pods/webserver-deployment-6676bcd6d4-5mdp2 74125a0d-4f57-42fa-80a1-0bdeb6c86ab9 8732838 0 2020-05-30 00:00:46 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 32ed29f4-6fd4-4136-bf33-54c454f55a01 0xc001d51e37 0xc001d51e38}] [] [{kube-controller-manager Update v1 2020-05-30 00:00:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32ed29f4-6fd4-4136-bf33-54c454f55a01\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-29wpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-29wpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-29wpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 00:00:50.395: INFO: Pod "webserver-deployment-6676bcd6d4-8tdcg" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-8tdcg webserver-deployment-6676bcd6d4- deployment-6792 /api/v1/namespaces/deployment-6792/pods/webserver-deployment-6676bcd6d4-8tdcg 74ae569a-28bb-408b-a097-0b0658aefbac 8732766 0 2020-05-30 00:00:43 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 32ed29f4-6fd4-4136-bf33-54c454f55a01 0xc001d51f77 0xc001d51f78}] [] [{kube-controller-manager Update v1 2020-05-30 00:00:43 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32ed29f4-6fd4-4136-bf33-54c454f55a01\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-30 00:00:44 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-29wpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-29wpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-29wpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:44 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-30 00:00:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 00:00:50.396: INFO: Pod "webserver-deployment-6676bcd6d4-bbmlb" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-bbmlb webserver-deployment-6676bcd6d4- deployment-6792 /api/v1/namespaces/deployment-6792/pods/webserver-deployment-6676bcd6d4-bbmlb c4ac547c-fa4d-4986-a3e9-774004739489 8732752 0 2020-05-30 00:00:43 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 32ed29f4-6fd4-4136-bf33-54c454f55a01 0xc003116207 0xc003116208}] [] [{kube-controller-manager Update v1 2020-05-30 00:00:43 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32ed29f4-6fd4-4136-bf33-54c454f55a01\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-30 00:00:44 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-29wpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-29wpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-29wpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:44 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-30 00:00:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 00:00:50.396: INFO: Pod "webserver-deployment-6676bcd6d4-d5flr" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-d5flr webserver-deployment-6676bcd6d4- deployment-6792 /api/v1/namespaces/deployment-6792/pods/webserver-deployment-6676bcd6d4-d5flr 9667885d-290f-48ee-b3b9-a1a0931b0ff6 8732890 0 2020-05-30 00:00:46 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 32ed29f4-6fd4-4136-bf33-54c454f55a01 0xc003116527 0xc003116528}] [] [{kube-controller-manager Update v1 2020-05-30 00:00:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32ed29f4-6fd4-4136-bf33-54c454f55a01\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-30 00:00:49 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-29wpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-29wpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-29wpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:47 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-30 00:00:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 00:00:50.396: INFO: Pod "webserver-deployment-6676bcd6d4-lhzp7" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-lhzp7 webserver-deployment-6676bcd6d4- deployment-6792 /api/v1/namespaces/deployment-6792/pods/webserver-deployment-6676bcd6d4-lhzp7 336010be-5338-4942-a2a2-70fbb19e4388 8732765 0 2020-05-30 00:00:44 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 32ed29f4-6fd4-4136-bf33-54c454f55a01 0xc0031166e7 0xc0031166e8}] [] [{kube-controller-manager Update v1 2020-05-30 00:00:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32ed29f4-6fd4-4136-bf33-54c454f55a01\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-30 00:00:44 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-29wpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-29wpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-29wpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:44 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-30 00:00:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 00:00:50.397: INFO: Pod "webserver-deployment-6676bcd6d4-mh875" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-mh875 webserver-deployment-6676bcd6d4- deployment-6792 /api/v1/namespaces/deployment-6792/pods/webserver-deployment-6676bcd6d4-mh875 5333c8d0-fd2f-488c-9ea1-1746dfab52ef 8732862 0 2020-05-30 00:00:46 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 32ed29f4-6fd4-4136-bf33-54c454f55a01 0xc003116897 0xc003116898}] [] [{kube-controller-manager Update v1 2020-05-30 00:00:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32ed29f4-6fd4-4136-bf33-54c454f55a01\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-30 00:00:47 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-29wpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-29wpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-29wpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:47 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-30 00:00:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 00:00:50.397: INFO: Pod "webserver-deployment-6676bcd6d4-skzns" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-skzns webserver-deployment-6676bcd6d4- deployment-6792 /api/v1/namespaces/deployment-6792/pods/webserver-deployment-6676bcd6d4-skzns 970781d6-0bdf-4d8f-8837-71b8f7938b1a 8732855 0 2020-05-30 00:00:43 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 32ed29f4-6fd4-4136-bf33-54c454f55a01 0xc003116a47 0xc003116a48}] [] [{kube-controller-manager Update v1 2020-05-30 00:00:43 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32ed29f4-6fd4-4136-bf33-54c454f55a01\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-30 00:00:47 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.71\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-29wpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-29wpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-29wpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-30 00:00:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.71,StartTime:2020-05-30 00:00:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.71,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 00:00:50.397: INFO: Pod "webserver-deployment-6676bcd6d4-v8bx8" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-v8bx8 webserver-deployment-6676bcd6d4- deployment-6792 /api/v1/namespaces/deployment-6792/pods/webserver-deployment-6676bcd6d4-v8bx8 76579bbc-3a0f-4285-a9b7-68b2f3292d90 8732850 0 2020-05-30 00:00:47 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 32ed29f4-6fd4-4136-bf33-54c454f55a01 0xc003116cd7 0xc003116cd8}] [] [{kube-controller-manager Update v1 2020-05-30 00:00:47 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32ed29f4-6fd4-4136-bf33-54c454f55a01\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-29wpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-29wpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-29wpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:47 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 00:00:50.398: INFO: Pod "webserver-deployment-6676bcd6d4-whvz8" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-whvz8 webserver-deployment-6676bcd6d4- deployment-6792 /api/v1/namespaces/deployment-6792/pods/webserver-deployment-6676bcd6d4-whvz8 e800e5f7-a6b7-4000-8e02-a8e6cb786fce 8732897 0 2020-05-30 00:00:46 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 32ed29f4-6fd4-4136-bf33-54c454f55a01 0xc003116ea7 0xc003116ea8}] [] [{kube-controller-manager Update v1 2020-05-30 00:00:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32ed29f4-6fd4-4136-bf33-54c454f55a01\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-30 00:00:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-29wpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-29wpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-29wpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriod
Seconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-30 00:00:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 00:00:50.398: INFO: Pod "webserver-deployment-6676bcd6d4-zpqqm" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-zpqqm webserver-deployment-6676bcd6d4- deployment-6792 /api/v1/namespaces/deployment-6792/pods/webserver-deployment-6676bcd6d4-zpqqm 8301a180-3f41-468d-bd72-12d04eaa950d 8732768 0 2020-05-30 00:00:44 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 32ed29f4-6fd4-4136-bf33-54c454f55a01 0xc003117137 0xc003117138}] [] [{kube-controller-manager Update v1 2020-05-30 00:00:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32ed29f4-6fd4-4136-bf33-54c454f55a01\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-30 00:00:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-29wpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-29wpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-29wpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases
:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-30 00:00:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 00:00:50.398: INFO: Pod "webserver-deployment-84855cf797-2d6k8" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-2d6k8 webserver-deployment-84855cf797- deployment-6792 /api/v1/namespaces/deployment-6792/pods/webserver-deployment-84855cf797-2d6k8 a9236933-e8c9-4ce8-855f-72f1e9a495f1 8732681 0 2020-05-30 00:00:31 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 8a4576f5-e94c-46f6-b812-5dc627030b6b 0xc003117427 0xc003117428}] [] [{kube-controller-manager Update v1 2020-05-30 00:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a4576f5-e94c-46f6-b812-5dc627030b6b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-30 00:00:42 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.67\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-29wpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-29wpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-29wpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 
00:00:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.67,StartTime:2020-05-30 00:00:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-30 00:00:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ebafc3e6aafd8be1e6006e93585a8570d73e667beb3fb0ff47fd87a131196e88,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.67,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 00:00:50.398: INFO: Pod "webserver-deployment-84855cf797-4svjq" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-4svjq webserver-deployment-84855cf797- deployment-6792 /api/v1/namespaces/deployment-6792/pods/webserver-deployment-84855cf797-4svjq 77a787bd-cb74-4ffd-b6eb-64f93df25acb 8732865 0 2020-05-30 00:00:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 8a4576f5-e94c-46f6-b812-5dc627030b6b 0xc003117757 0xc003117758}] [] [{kube-controller-manager Update v1 2020-05-30 00:00:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a4576f5-e94c-46f6-b812-5dc627030b6b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-30 00:00:48 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-29wpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-29wpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-29wpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:47 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-30 00:00:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 00:00:50.398: INFO: Pod "webserver-deployment-84855cf797-5zpnx" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-5zpnx webserver-deployment-84855cf797- deployment-6792 /api/v1/namespaces/deployment-6792/pods/webserver-deployment-84855cf797-5zpnx 2f0886b9-e0a7-4d7b-92de-4b84c9eb300d 8732699 0 2020-05-30 00:00:31 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 8a4576f5-e94c-46f6-b812-5dc627030b6b 0xc0031179b7 0xc0031179b8}] [] [{kube-controller-manager Update v1 2020-05-30 00:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a4576f5-e94c-46f6-b812-5dc627030b6b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-30 00:00:43 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.87\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-29wpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-29wpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-29wpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 
00:00:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.87,StartTime:2020-05-30 00:00:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-30 00:00:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e8b2df966a8579467a5fd7c170f00c8b2ce4612037fc86fc422864e88c6e6c9a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.87,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 30 00:00:50.399: INFO: Pod "webserver-deployment-84855cf797-695mr" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-695mr webserver-deployment-84855cf797- deployment-6792 /api/v1/namespaces/deployment-6792/pods/webserver-deployment-84855cf797-695mr ddc15dc4-0095-4a06-af6a-6179c0c372f7 8732834 0 2020-05-30 00:00:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 8a4576f5-e94c-46f6-b812-5dc627030b6b 0xc003117ce7 0xc003117ce8}] [] [{kube-controller-manager Update v1 2020-05-30 00:00:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a4576f5-e94c-46f6-b812-5dc627030b6b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-29wpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-29wpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-29wpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 30 00:00:50.399: INFO: Pod "webserver-deployment-84855cf797-77tpz" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-77tpz webserver-deployment-84855cf797- deployment-6792 /api/v1/namespaces/deployment-6792/pods/webserver-deployment-84855cf797-77tpz e5239731-5ef4-4759-9746-40ff8d38d5a7 8732704 0 2020-05-30 00:00:31 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 8a4576f5-e94c-46f6-b812-5dc627030b6b 0xc003117e17 0xc003117e18}] [] [{kube-controller-manager Update v1 2020-05-30 00:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a4576f5-e94c-46f6-b812-5dc627030b6b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-30 00:00:43 +0000 UTC FieldsV1
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.69\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-29wpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-29wpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-29wpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 
00:00:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.69,StartTime:2020-05-30 00:00:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-30 00:00:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5314fc2e22ff0000eb016f03ef7498b0fd5e83820fa878f40caf355ab7168bd1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.69,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 30 00:00:50.429: INFO: Pod "webserver-deployment-84855cf797-7w44p" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-7w44p webserver-deployment-84855cf797- deployment-6792 /api/v1/namespaces/deployment-6792/pods/webserver-deployment-84855cf797-7w44p ce2101df-d716-4357-a441-86465c66a5a9 8732835 0 2020-05-30 00:00:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 8a4576f5-e94c-46f6-b812-5dc627030b6b 0xc003117fe7 0xc003117fe8}] [] [{kube-controller-manager Update v1 2020-05-30 00:00:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a4576f5-e94c-46f6-b812-5dc627030b6b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-29wpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-29wpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-29wpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 30 00:00:50.429: INFO: Pod "webserver-deployment-84855cf797-8f5rd" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-8f5rd webserver-deployment-84855cf797- deployment-6792 /api/v1/namespaces/deployment-6792/pods/webserver-deployment-84855cf797-8f5rd f23951e4-3a36-4acd-91c4-dac2266048f2 8732829 0 2020-05-30 00:00:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 8a4576f5-e94c-46f6-b812-5dc627030b6b 0xc002f6c117 0xc002f6c118}] [] [{kube-controller-manager Update v1 2020-05-30 00:00:46 +0000 UTC FieldsV1
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a4576f5-e94c-46f6-b812-5dc627030b6b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-29wpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-29wpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-29wpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:47 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 30 00:00:50.429: INFO: Pod "webserver-deployment-84855cf797-b98x9" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-b98x9 webserver-deployment-84855cf797- deployment-6792 /api/v1/namespaces/deployment-6792/pods/webserver-deployment-84855cf797-b98x9 3cf53cf4-6307-4b70-a53e-93a00f4268fe 8732888 0 2020-05-30 00:00:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 8a4576f5-e94c-46f6-b812-5dc627030b6b 0xc002f6c247 0xc002f6c248}] [] [{kube-controller-manager Update v1 2020-05-30 00:00:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a4576f5-e94c-46f6-b812-5dc627030b6b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-30 00:00:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-29wpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-29wpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-29wpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-30 00:00:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 30 00:00:50.429: INFO: Pod "webserver-deployment-84855cf797-bmntz" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-bmntz webserver-deployment-84855cf797- deployment-6792 /api/v1/namespaces/deployment-6792/pods/webserver-deployment-84855cf797-bmntz 39517794-fb0d-48f1-8d9d-1184c339093a 8732832 0 2020-05-30 00:00:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 8a4576f5-e94c-46f6-b812-5dc627030b6b 0xc002f6c3d7 0xc002f6c3d8}] [] [{kube-controller-manager Update v1 2020-05-30 00:00:46 +0000 UTC FieldsV1
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a4576f5-e94c-46f6-b812-5dc627030b6b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-29wpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-29wpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-29wpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:47 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 30 00:00:50.430: INFO: Pod "webserver-deployment-84855cf797-cgbp9" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-cgbp9 webserver-deployment-84855cf797- deployment-6792 /api/v1/namespaces/deployment-6792/pods/webserver-deployment-84855cf797-cgbp9 dc0f4373-71dd-46f8-823b-74c0e9df6a3b 8732831 0 2020-05-30 00:00:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 8a4576f5-e94c-46f6-b812-5dc627030b6b 0xc002f6c507 0xc002f6c508}] [] [{kube-controller-manager Update v1 2020-05-30 00:00:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a4576f5-e94c-46f6-b812-5dc627030b6b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-29wpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-29wpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-29wpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 30 00:00:50.430: INFO: Pod "webserver-deployment-84855cf797-fnqg4" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-fnqg4 webserver-deployment-84855cf797- deployment-6792 /api/v1/namespaces/deployment-6792/pods/webserver-deployment-84855cf797-fnqg4 b4a62e36-0ecb-4a78-8d59-6af69c98f9d5 8732848 0 2020-05-30 00:00:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 8a4576f5-e94c-46f6-b812-5dc627030b6b 0xc002f6c677 0xc002f6c678}] [] [{kube-controller-manager Update v1 2020-05-30 00:00:46 +0000 UTC FieldsV1
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-29wpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-29wpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-29wpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:46 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-30 00:00:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 00:00:50.430: INFO: Pod "webserver-deployment-84855cf797-gp9zk" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-gp9zk webserver-deployment-84855cf797- deployment-6792 /api/v1/namespaces/deployment-6792/pods/webserver-deployment-84855cf797-gp9zk c3b026cb-fb40-4bc8-94de-c29486a9e58c 8732891 0 2020-05-30 00:00:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 8a4576f5-e94c-46f6-b812-5dc627030b6b 0xc002f6ca47 0xc002f6ca48}] [] [{kube-controller-manager Update v1 2020-05-30 00:00:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a4576f5-e94c-46f6-b812-5dc627030b6b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-30 00:00:49 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-29wpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-29wpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-29wpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:47 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-30 00:00:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 00:00:50.430: INFO: Pod "webserver-deployment-84855cf797-jh8kj" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-jh8kj webserver-deployment-84855cf797- deployment-6792 /api/v1/namespaces/deployment-6792/pods/webserver-deployment-84855cf797-jh8kj 83003762-c74c-4d06-bd4d-a34bf051c0cd 8732710 0 2020-05-30 00:00:31 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 8a4576f5-e94c-46f6-b812-5dc627030b6b 0xc002f6d2b7 0xc002f6d2b8}] [] [{kube-controller-manager Update v1 2020-05-30 00:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a4576f5-e94c-46f6-b812-5dc627030b6b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-30 00:00:43 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.70\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-29wpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-29wpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-29wpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 
00:00:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.70,StartTime:2020-05-30 00:00:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-30 00:00:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://60e85716a536c3e213e046ee2d1639c323c13de60ed868a10697a467699287a5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.70,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 00:00:50.430: INFO: Pod "webserver-deployment-84855cf797-jhd5n" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-jhd5n webserver-deployment-84855cf797- deployment-6792 /api/v1/namespaces/deployment-6792/pods/webserver-deployment-84855cf797-jhd5n 9252e1a7-8e2e-49e4-84d1-0176657c8d58 8732667 0 2020-05-30 00:00:31 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 8a4576f5-e94c-46f6-b812-5dc627030b6b 0xc002f6d697 0xc002f6d698}] [] [{kube-controller-manager Update v1 2020-05-30 00:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a4576f5-e94c-46f6-b812-5dc627030b6b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-30 00:00:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.82\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-29wpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-29wpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-29wpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 
00:00:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.82,StartTime:2020-05-30 00:00:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-30 00:00:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8e4e2d8badd74595cd28f1761576c3640492364cbdde17e89b9bd679b8ed65a4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.82,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 00:00:50.431: INFO: Pod "webserver-deployment-84855cf797-jxlp9" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-jxlp9 webserver-deployment-84855cf797- deployment-6792 /api/v1/namespaces/deployment-6792/pods/webserver-deployment-84855cf797-jxlp9 85f246be-e713-400e-baa9-9f03875cd10b 8732663 0 2020-05-30 00:00:31 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 8a4576f5-e94c-46f6-b812-5dc627030b6b 0xc002f6db17 0xc002f6db18}] [] [{kube-controller-manager Update v1 2020-05-30 00:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a4576f5-e94c-46f6-b812-5dc627030b6b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-30 00:00:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.84\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-29wpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-29wpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-29wpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 
00:00:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.84,StartTime:2020-05-30 00:00:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-30 00:00:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://fd53d966c3f80f9f6fdaefb01fc18b31510fec4e0dbfff4b83561d45ebb4cdbf,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.84,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 00:00:50.431: INFO: Pod "webserver-deployment-84855cf797-l9bfk" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-l9bfk webserver-deployment-84855cf797- deployment-6792 /api/v1/namespaces/deployment-6792/pods/webserver-deployment-84855cf797-l9bfk b3b9264e-49ec-44c2-baf5-308595d7f588 8732876 0 2020-05-30 00:00:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 8a4576f5-e94c-46f6-b812-5dc627030b6b 0xc002e96947 0xc002e96948}] [] [{kube-controller-manager Update v1 2020-05-30 00:00:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a4576f5-e94c-46f6-b812-5dc627030b6b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-30 00:00:49 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-29wpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-29wpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-29wpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:47 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-30 00:00:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 00:00:50.431: INFO: Pod "webserver-deployment-84855cf797-lxzl2" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-lxzl2 webserver-deployment-84855cf797- deployment-6792 /api/v1/namespaces/deployment-6792/pods/webserver-deployment-84855cf797-lxzl2 f90a6b91-f179-4b00-8723-85c3a842631a 8732656 0 2020-05-30 00:00:31 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 8a4576f5-e94c-46f6-b812-5dc627030b6b 0xc002e96f47 0xc002e96f48}] [] [{kube-controller-manager Update v1 2020-05-30 00:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a4576f5-e94c-46f6-b812-5dc627030b6b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-30 00:00:40 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.83\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-29wpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-29wpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-29wpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 
00:00:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.83,StartTime:2020-05-30 00:00:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-30 00:00:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3853522d8559632861c5f71ee0a339d3a3b928683ca9c2a77d7a37e07d9a8f35,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.83,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 00:00:50.431: INFO: Pod "webserver-deployment-84855cf797-s6kmr" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-s6kmr webserver-deployment-84855cf797- deployment-6792 /api/v1/namespaces/deployment-6792/pods/webserver-deployment-84855cf797-s6kmr b29727ce-065e-4c75-a6c6-8ae2fc6423a9 8732880 0 2020-05-30 00:00:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 8a4576f5-e94c-46f6-b812-5dc627030b6b 0xc002e972a7 0xc002e972a8}] [] [{kube-controller-manager Update v1 2020-05-30 00:00:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a4576f5-e94c-46f6-b812-5dc627030b6b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-30 00:00:49 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-29wpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-29wpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-29wpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:47 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-30 00:00:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 00:00:50.431: INFO: Pod "webserver-deployment-84855cf797-sgr4c" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-sgr4c webserver-deployment-84855cf797- deployment-6792 /api/v1/namespaces/deployment-6792/pods/webserver-deployment-84855cf797-sgr4c f615e71d-45ac-4177-beb2-057a7777c3b1 8732896 0 2020-05-30 00:00:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 8a4576f5-e94c-46f6-b812-5dc627030b6b 0xc002e97607 0xc002e97608}] [] [{kube-controller-manager Update v1 2020-05-30 00:00:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a4576f5-e94c-46f6-b812-5dc627030b6b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-30 00:00:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-29wpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-29wpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-29wpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:48 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-30 00:00:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 00:00:50.431: INFO: Pod "webserver-deployment-84855cf797-zmpk9" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-zmpk9 webserver-deployment-84855cf797- deployment-6792 /api/v1/namespaces/deployment-6792/pods/webserver-deployment-84855cf797-zmpk9 cad01611-2ee2-498d-88e9-4b78de12ccf6 8732676 0 2020-05-30 00:00:31 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 8a4576f5-e94c-46f6-b812-5dc627030b6b 0xc002e97797 0xc002e97798}] [] [{kube-controller-manager Update v1 2020-05-30 00:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a4576f5-e94c-46f6-b812-5dc627030b6b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-30 00:00:42 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.68\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-29wpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-29wpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-29wpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 
00:00:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:00:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.68,StartTime:2020-05-30 00:00:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-30 00:00:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e07e4a68eabe5e685ecdf37c2543a7d7b88f5a82624de5590c8271769b7077f7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.68,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:00:50.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6792" for this suite. • [SLOW TEST:20.821 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":288,"completed":55,"skipped":836,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:00:52.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info May 30 00:00:53.619: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config cluster-info' May 30 00:00:53.861: INFO: stderr: "" May 30 00:00:53.861: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:00:53.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-75" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":288,"completed":56,"skipped":851,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:00:54.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 30 00:00:56.087: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:01:07.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7250" for this suite. • [SLOW TEST:13.403 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":288,"completed":57,"skipped":859,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:01:07.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 30 00:01:08.644: INFO: The status of Pod test-webserver-ab23543c-bd51-4872-ac32-adf10d691f13 is Pending, waiting for it to be Running (with Ready = true) May 30 00:01:10.992: INFO: The status of Pod 
test-webserver-ab23543c-bd51-4872-ac32-adf10d691f13 is Pending, waiting for it to be Running (with Ready = true) May 30 00:01:13.308: INFO: The status of Pod test-webserver-ab23543c-bd51-4872-ac32-adf10d691f13 is Pending, waiting for it to be Running (with Ready = true) May 30 00:01:14.782: INFO: The status of Pod test-webserver-ab23543c-bd51-4872-ac32-adf10d691f13 is Pending, waiting for it to be Running (with Ready = true) May 30 00:01:16.687: INFO: The status of Pod test-webserver-ab23543c-bd51-4872-ac32-adf10d691f13 is Running (Ready = false) May 30 00:01:18.863: INFO: The status of Pod test-webserver-ab23543c-bd51-4872-ac32-adf10d691f13 is Running (Ready = false) May 30 00:01:20.682: INFO: The status of Pod test-webserver-ab23543c-bd51-4872-ac32-adf10d691f13 is Running (Ready = false) May 30 00:01:22.699: INFO: The status of Pod test-webserver-ab23543c-bd51-4872-ac32-adf10d691f13 is Running (Ready = false) May 30 00:01:24.651: INFO: The status of Pod test-webserver-ab23543c-bd51-4872-ac32-adf10d691f13 is Running (Ready = false) May 30 00:01:26.651: INFO: The status of Pod test-webserver-ab23543c-bd51-4872-ac32-adf10d691f13 is Running (Ready = false) May 30 00:01:28.649: INFO: The status of Pod test-webserver-ab23543c-bd51-4872-ac32-adf10d691f13 is Running (Ready = false) May 30 00:01:30.649: INFO: The status of Pod test-webserver-ab23543c-bd51-4872-ac32-adf10d691f13 is Running (Ready = false) May 30 00:01:32.649: INFO: The status of Pod test-webserver-ab23543c-bd51-4872-ac32-adf10d691f13 is Running (Ready = false) May 30 00:01:34.649: INFO: The status of Pod test-webserver-ab23543c-bd51-4872-ac32-adf10d691f13 is Running (Ready = false) May 30 00:01:36.648: INFO: The status of Pod test-webserver-ab23543c-bd51-4872-ac32-adf10d691f13 is Running (Ready = false) May 30 00:01:38.648: INFO: The status of Pod test-webserver-ab23543c-bd51-4872-ac32-adf10d691f13 is Running (Ready = true) May 30 00:01:38.651: INFO: Container started at 2020-05-30 00:01:15 +0000 UTC, pod became ready at 2020-05-30 00:01:37 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:01:38.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9212" for this suite. 
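The readiness-probe records above show the gap this test asserts on: the container started at 00:01:15, but the pod only reported Ready = true at 00:01:37, after the probe's initial delay elapsed and the probe began succeeding; until then the kubelet reports Running (Ready = false). Below is a minimal sketch of the kind of pod spec that produces that sequence, written against the k8s.io/api vintage of this run (the embedded Probe field is still named Handler here; later releases renamed it ProbeHandler). The pod name, port, and delay value are illustrative assumptions, not taken from the test source.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// A web-server pod whose readiness probe only starts after an initial
	// delay. Until the first successful probe the pod is Running with
	// Ready = false, which is exactly the run of status lines logged above.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver"}, // illustrative name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "docker.io/library/httpd:2.4.38-alpine",
				ReadinessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // renamed ProbeHandler in newer k8s.io/api
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/",
							Port: intstr.FromInt(80),
						},
					},
					InitialDelaySeconds: 20, // assumed value; keeps Ready = false for ~20s
					PeriodSeconds:       2,
				},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	fmt.Printf("readiness probe: %+v\n", pod.Spec.Containers[0].ReadinessProbe)
}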
• [SLOW TEST:31.017 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":288,"completed":58,"skipped":885,"failed":0} SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:01:38.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-1765 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-1765 STEP: Creating statefulset with conflicting port in namespace statefulset-1765 STEP: Waiting until pod test-pod will start running in namespace statefulset-1765 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1765 May 30 00:01:45.168: INFO: Observed stateful pod in namespace: statefulset-1765, name: ss-0, uid: d32c3d09-6f54-404c-85fe-ea2efae886bf, status phase: Pending. Waiting for statefulset controller to delete. May 30 00:01:45.694: INFO: Observed stateful pod in namespace: statefulset-1765, name: ss-0, uid: d32c3d09-6f54-404c-85fe-ea2efae886bf, status phase: Failed. Waiting for statefulset controller to delete. May 30 00:01:45.733: INFO: Observed stateful pod in namespace: statefulset-1765, name: ss-0, uid: d32c3d09-6f54-404c-85fe-ea2efae886bf, status phase: Failed. Waiting for statefulset controller to delete. 
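The ss-0 records above come from watching a single pod and reacting to each observed state: Pending, then Failed (twice, as the status is re-observed), then a delete event once the statefulset controller removes the pod that lost the port conflict. A minimal client-go sketch of that observation loop follows, assuming the kubeconfig path and namespace that appear in this run; error handling is trimmed, and the field selector is one illustrative way to follow a single pod by name.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from a kubeconfig, as the e2e framework does.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Watch only the stateful pod ss-0 and print every event observed,
	// mirroring the "Observed stateful pod ... status phase: ..." records.
	w, err := cs.CoreV1().Pods("statefulset-1765").Watch(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=ss-0",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*corev1.Pod)
		if !ok {
			continue // e.g. a bookmark or error object on the watch channel
		}
		fmt.Printf("%s: pod %s uid=%s phase=%s\n", ev.Type, pod.Name, pod.UID, pod.Status.Phase)
		if ev.Type == "DELETED" {
			fmt.Println("observed delete event; the controller should now recreate ss-0")
		}
	}
}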
May 30 00:01:45.807: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1765 STEP: Removing pod with conflicting port in namespace statefulset-1765 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-1765 and is in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 30 00:01:49.893: INFO: Deleting all statefulset in ns statefulset-1765 May 30 00:01:49.895: INFO: Scaling statefulset ss to 0 May 30 00:02:09.980: INFO: Waiting for statefulset status.replicas updated to 0 May 30 00:02:09.984: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:02:10.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1765" for this suite. • [SLOW TEST:31.350 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":288,"completed":59,"skipped":892,"failed":0} SSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:02:10.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 30 00:02:10.124: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2875 /api/v1/namespaces/watch-2875/configmaps/e2e-watch-test-watch-closed 792df248-b1fe-497d-8f95-e50930683d80 8733649 0 2020-05-30 00:02:10 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-30 00:02:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 30 00:02:10.124: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2875 /api/v1/namespaces/watch-2875/configmaps/e2e-watch-test-watch-closed 792df248-b1fe-497d-8f95-e50930683d80 8733650 0 2020-05-30 00:02:10 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-30 00:02:10
+0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 30 00:02:10.136: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2875 /api/v1/namespaces/watch-2875/configmaps/e2e-watch-test-watch-closed 792df248-b1fe-497d-8f95-e50930683d80 8733651 0 2020-05-30 00:02:10 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-30 00:02:10 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 30 00:02:10.136: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2875 /api/v1/namespaces/watch-2875/configmaps/e2e-watch-test-watch-closed 792df248-b1fe-497d-8f95-e50930683d80 8733652 0 2020-05-30 00:02:10 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-30 00:02:10 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:02:10.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2875" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":288,"completed":60,"skipped":895,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:02:10.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 30 00:02:10.253: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-ec5cb8cd-e984-43b9-9d50-5daabc262a79" in namespace "security-context-test-1828" to be "Succeeded or Failed" May 30 00:02:10.324: INFO: Pod "busybox-privileged-false-ec5cb8cd-e984-43b9-9d50-5daabc262a79": Phase="Pending", Reason="", readiness=false. 
Elapsed: 70.175326ms May 30 00:02:12.328: INFO: Pod "busybox-privileged-false-ec5cb8cd-e984-43b9-9d50-5daabc262a79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074593477s May 30 00:02:14.352: INFO: Pod "busybox-privileged-false-ec5cb8cd-e984-43b9-9d50-5daabc262a79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.098368841s May 30 00:02:14.352: INFO: Pod "busybox-privileged-false-ec5cb8cd-e984-43b9-9d50-5daabc262a79" satisfied condition "Succeeded or Failed" May 30 00:02:14.367: INFO: Got logs for pod "busybox-privileged-false-ec5cb8cd-e984-43b9-9d50-5daabc262a79": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:02:14.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1828" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":61,"skipped":945,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:02:14.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
May 30 00:02:14.572: INFO: Created pod &Pod{ObjectMeta:{dns-5281 dns-5281 /api/v1/namespaces/dns-5281/pods/dns-5281 f792ce46-daf6-4b65-bd0a-f42b3c2bea5b 8733685 0 2020-05-30 00:02:14 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-05-30 00:02:14 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42drn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42drn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42drn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]C
ontainerStatus{},},} May 30 00:02:14.590: INFO: The status of Pod dns-5281 is Pending, waiting for it to be Running (with Ready = true) May 30 00:02:16.678: INFO: The status of Pod dns-5281 is Pending, waiting for it to be Running (with Ready = true) May 30 00:02:18.593: INFO: The status of Pod dns-5281 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... May 30 00:02:18.593: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-5281 PodName:dns-5281 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 00:02:18.594: INFO: >>> kubeConfig: /root/.kube/config I0530 00:02:18.632837 7 log.go:172] (0xc002f6e6e0) (0xc002da3900) Create stream I0530 00:02:18.632882 7 log.go:172] (0xc002f6e6e0) (0xc002da3900) Stream added, broadcasting: 1 I0530 00:02:18.636141 7 log.go:172] (0xc002f6e6e0) Reply frame received for 1 I0530 00:02:18.636186 7 log.go:172] (0xc002f6e6e0) (0xc001d5fc20) Create stream I0530 00:02:18.636204 7 log.go:172] (0xc002f6e6e0) (0xc001d5fc20) Stream added, broadcasting: 3 I0530 00:02:18.639862 7 log.go:172] (0xc002f6e6e0) Reply frame received for 3 I0530 00:02:18.639901 7 log.go:172] (0xc002f6e6e0) (0xc0011f1540) Create stream I0530 00:02:18.639914 7 log.go:172] (0xc002f6e6e0) (0xc0011f1540) Stream added, broadcasting: 5 I0530 00:02:18.640689 7 log.go:172] (0xc002f6e6e0) Reply frame received for 5 I0530 00:02:18.750591 7 log.go:172] (0xc002f6e6e0) Data frame received for 3 I0530 00:02:18.750619 7 log.go:172] (0xc001d5fc20) (3) Data frame handling I0530 00:02:18.750796 7 log.go:172] (0xc001d5fc20) (3) Data frame sent I0530 00:02:18.751784 7 log.go:172] (0xc002f6e6e0) Data frame received for 5 I0530 00:02:18.751854 7 log.go:172] (0xc0011f1540) (5) Data frame handling I0530 00:02:18.752037 7 log.go:172] (0xc002f6e6e0) Data frame received for 3 I0530 00:02:18.752062 7 log.go:172] (0xc001d5fc20) (3) Data frame handling I0530 00:02:18.754071 7 log.go:172] (0xc002f6e6e0) Data frame received for 1 I0530 00:02:18.754101 7 log.go:172] (0xc002da3900) (1) Data frame handling I0530 00:02:18.754123 7 log.go:172] (0xc002da3900) (1) Data frame sent I0530 00:02:18.754141 7 log.go:172] (0xc002f6e6e0) (0xc002da3900) Stream removed, broadcasting: 1 I0530 00:02:18.754159 7 log.go:172] (0xc002f6e6e0) Go away received I0530 00:02:18.754314 7 log.go:172] (0xc002f6e6e0) (0xc002da3900) Stream removed, broadcasting: 1 I0530 00:02:18.754345 7 log.go:172] (0xc002f6e6e0) (0xc001d5fc20) Stream removed, broadcasting: 3 I0530 00:02:18.754361 7 log.go:172] (0xc002f6e6e0) (0xc0011f1540) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
May 30 00:02:18.754: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-5281 PodName:dns-5281 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 00:02:18.754: INFO: >>> kubeConfig: /root/.kube/config I0530 00:02:18.784577 7 log.go:172] (0xc002f6ed10) (0xc002da3ae0) Create stream I0530 00:02:18.784608 7 log.go:172] (0xc002f6ed10) (0xc002da3ae0) Stream added, broadcasting: 1 I0530 00:02:18.786916 7 log.go:172] (0xc002f6ed10) Reply frame received for 1 I0530 00:02:18.786956 7 log.go:172] (0xc002f6ed10) (0xc002da3c20) Create stream I0530 00:02:18.786969 7 log.go:172] (0xc002f6ed10) (0xc002da3c20) Stream added, broadcasting: 3 I0530 00:02:18.787851 7 log.go:172] (0xc002f6ed10) Reply frame received for 3 I0530 00:02:18.787879 7 log.go:172] (0xc002f6ed10) (0xc0011f15e0) Create stream I0530 00:02:18.787889 7 log.go:172] (0xc002f6ed10) (0xc0011f15e0) Stream added, broadcasting: 5 I0530 00:02:18.788626 7 log.go:172] (0xc002f6ed10) Reply frame received for 5 I0530 00:02:18.860571 7 log.go:172] (0xc002f6ed10) Data frame received for 3 I0530 00:02:18.860611 7 log.go:172] (0xc002da3c20) (3) Data frame handling I0530 00:02:18.860709 7 log.go:172] (0xc002da3c20) (3) Data frame sent I0530 00:02:18.861985 7 log.go:172] (0xc002f6ed10) Data frame received for 3 I0530 00:02:18.862021 7 log.go:172] (0xc002da3c20) (3) Data frame handling I0530 00:02:18.862041 7 log.go:172] (0xc002f6ed10) Data frame received for 5 I0530 00:02:18.862052 7 log.go:172] (0xc0011f15e0) (5) Data frame handling I0530 00:02:18.864528 7 log.go:172] (0xc002f6ed10) Data frame received for 1 I0530 00:02:18.864567 7 log.go:172] (0xc002da3ae0) (1) Data frame handling I0530 00:02:18.864588 7 log.go:172] (0xc002da3ae0) (1) Data frame sent I0530 00:02:18.864609 7 log.go:172] (0xc002f6ed10) (0xc002da3ae0) Stream removed, broadcasting: 1 I0530 00:02:18.864743 7 log.go:172] (0xc002f6ed10) (0xc002da3ae0) Stream removed, broadcasting: 1 I0530 00:02:18.864765 7 log.go:172] (0xc002f6ed10) (0xc002da3c20) Stream removed, broadcasting: 3 I0530 00:02:18.864783 7 log.go:172] (0xc002f6ed10) (0xc0011f15e0) Stream removed, broadcasting: 5 May 30 00:02:18.864: INFO: Deleting pod dns-5281... I0530 00:02:18.865430 7 log.go:172] (0xc002f6ed10) Go away received [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:02:18.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5281" for this suite. 
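The pod exercised by this DNS test is fully dumped in the "Created pod" log line above; trimmed to the DNS-relevant fields, it is equivalent to the following manifest (reconstructed from that dump, all other fields elided):

apiVersion: v1
kind: Pod
metadata:
  name: dns-5281
  namespace: dns-5281
spec:
  containers:
  - name: agnhost
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
    args: ["pause"]
  dnsPolicy: None                 # ignore cluster DNS entirely; use dnsConfig as-is
  dnsConfig:
    nameservers:
    - 1.1.1.1                     # becomes the "nameserver" line in /etc/resolv.conf
    searches:
    - resolv.conf.local           # becomes the "search" line in /etc/resolv.conf
# The two ExecWithOptions calls above run /agnhost dns-suffix and
# /agnhost dns-server-list inside the pod to read the resolver
# configuration back and confirm both values were applied.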
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":288,"completed":62,"skipped":959,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:02:19.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-b46efb01-1495-4308-b7d9-98c89964a2ed STEP: Creating configMap with name cm-test-opt-upd-63567629-0739-4d2b-80ed-3d24b909fa1e STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-b46efb01-1495-4308-b7d9-98c89964a2ed STEP: Updating configmap cm-test-opt-upd-63567629-0739-4d2b-80ed-3d24b909fa1e STEP: Creating configMap with name cm-test-opt-create-e9b6150b-091a-429b-89b0-52a3c5b211f2 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:02:28.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1369" for this suite. • [SLOW TEST:9.240 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":63,"skipped":992,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:02:28.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 30 00:02:28.316: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config version' May 30 00:02:28.486: INFO: stderr: "" May 30 00:02:28.486: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.3.35+3416442e4b7eeb\", GitCommit:\"3416442e4b7eebfce360f5b7468c6818d3e882f8\", GitTreeState:\"clean\", 
BuildDate:\"2020-05-06T19:24:24Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-04-28T05:35:31Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:02:28.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6917" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":288,"completed":64,"skipped":1005,"failed":0} SSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:02:28.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 30 00:02:28.603: INFO: Waiting up to 5m0s for pod "downward-api-5f3633d5-b802-45a4-a59c-e6cb4cb1d9bf" in namespace "downward-api-4301" to be "Succeeded or Failed" May 30 00:02:28.612: INFO: Pod "downward-api-5f3633d5-b802-45a4-a59c-e6cb4cb1d9bf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.667734ms May 30 00:02:30.633: INFO: Pod "downward-api-5f3633d5-b802-45a4-a59c-e6cb4cb1d9bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029900498s May 30 00:02:32.638: INFO: Pod "downward-api-5f3633d5-b802-45a4-a59c-e6cb4cb1d9bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034506497s STEP: Saw pod success May 30 00:02:32.638: INFO: Pod "downward-api-5f3633d5-b802-45a4-a59c-e6cb4cb1d9bf" satisfied condition "Succeeded or Failed" May 30 00:02:32.641: INFO: Trying to get logs from node latest-worker pod downward-api-5f3633d5-b802-45a4-a59c-e6cb4cb1d9bf container dapi-container: STEP: delete the pod May 30 00:02:32.714: INFO: Waiting for pod downward-api-5f3633d5-b802-45a4-a59c-e6cb4cb1d9bf to disappear May 30 00:02:32.726: INFO: Pod downward-api-5f3633d5-b802-45a4-a59c-e6cb4cb1d9bf no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:02:32.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4301" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":288,"completed":65,"skipped":1011,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:02:32.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod May 30 00:04:33.595: INFO: Successfully updated pod "var-expansion-e2c2240d-3fc1-4d48-9183-d495b46273f7" STEP: waiting for pod running STEP: deleting the pod gracefully May 30 00:04:35.668: INFO: Deleting pod "var-expansion-e2c2240d-3fc1-4d48-9183-d495b46273f7" in namespace "var-expansion-9311" May 30 00:04:35.672: INFO: Wait up to 5m0s for pod "var-expansion-e2c2240d-3fc1-4d48-9183-d495b46273f7" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:05:15.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9311" for this suite. 
• [SLOW TEST:162.931 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":288,"completed":66,"skipped":1012,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:05:15.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 30 00:05:19.874: INFO: Waiting up to 5m0s for pod "client-envvars-0416f777-9548-48d9-adc7-b30a0945f130" in namespace "pods-9089" to be "Succeeded or Failed" May 30 00:05:19.941: INFO: Pod "client-envvars-0416f777-9548-48d9-adc7-b30a0945f130": Phase="Pending", Reason="", readiness=false. Elapsed: 66.683777ms May 30 00:05:21.945: INFO: Pod "client-envvars-0416f777-9548-48d9-adc7-b30a0945f130": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070990819s May 30 00:05:23.950: INFO: Pod "client-envvars-0416f777-9548-48d9-adc7-b30a0945f130": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07513141s STEP: Saw pod success May 30 00:05:23.950: INFO: Pod "client-envvars-0416f777-9548-48d9-adc7-b30a0945f130" satisfied condition "Succeeded or Failed" May 30 00:05:23.952: INFO: Trying to get logs from node latest-worker2 pod client-envvars-0416f777-9548-48d9-adc7-b30a0945f130 container env3cont: STEP: delete the pod May 30 00:05:24.019: INFO: Waiting for pod client-envvars-0416f777-9548-48d9-adc7-b30a0945f130 to disappear May 30 00:05:24.030: INFO: Pod client-envvars-0416f777-9548-48d9-adc7-b30a0945f130 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:05:24.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9089" for this suite. 
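What the next test relies on is the kubelet's automatic service environment injection: any service that exists in the namespace before a pod starts is advertised to that pod through env vars derived from the service name. A sketch with hypothetical names (the env3cont container in the log simply dumps its environment for the test to inspect):

apiVersion: v1
kind: Service
metadata:
  name: fooservice                # hypothetical service name
spec:
  selector:
    name: server
  ports:
  - port: 8765
    targetPort: 8080
# A pod created afterwards in the same namespace receives, among others:
#   FOOSERVICE_SERVICE_HOST=<cluster IP of fooservice>
#   FOOSERVICE_SERVICE_PORT=8765
# (service name uppercased, dashes mapped to underscores)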
• [SLOW TEST:8.344 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":288,"completed":67,"skipped":1038,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:05:24.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:05:24.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6793" for this suite. 
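The four discovery fetches in the next test walk the chain: /apis lists API groups, /apis/apiextensions.k8s.io lists that group's versions, and /apis/apiextensions.k8s.io/v1 lists its resources. Rendered as trimmed YAML, an approximate sketch of the JSON the apiserver returns (fields not relevant to the assertions are elided):

# GET /apis must contain a group entry like:
name: apiextensions.k8s.io
versions:
- groupVersion: apiextensions.k8s.io/v1
  version: v1
preferredVersion:
  groupVersion: apiextensions.k8s.io/v1
  version: v1
---
# GET /apis/apiextensions.k8s.io/v1 must list the CRD resource:
groupVersion: apiextensions.k8s.io/v1
resources:
- name: customresourcedefinitions
  kind: CustomResourceDefinition
  namespaced: false
  shortNames: ["crd", "crds"]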
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":288,"completed":68,"skipped":1043,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:05:24.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs May 30 00:05:24.364: INFO: Waiting up to 5m0s for pod "pod-7465016f-5de3-42e6-aa4a-5db508e0dd41" in namespace "emptydir-3543" to be "Succeeded or Failed" May 30 00:05:24.378: INFO: Pod "pod-7465016f-5de3-42e6-aa4a-5db508e0dd41": Phase="Pending", Reason="", readiness=false. Elapsed: 13.73987ms May 30 00:05:26.383: INFO: Pod "pod-7465016f-5de3-42e6-aa4a-5db508e0dd41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01940457s May 30 00:05:28.386: INFO: Pod "pod-7465016f-5de3-42e6-aa4a-5db508e0dd41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022632615s STEP: Saw pod success May 30 00:05:28.386: INFO: Pod "pod-7465016f-5de3-42e6-aa4a-5db508e0dd41" satisfied condition "Succeeded or Failed" May 30 00:05:28.389: INFO: Trying to get logs from node latest-worker2 pod pod-7465016f-5de3-42e6-aa4a-5db508e0dd41 container test-container: STEP: delete the pod May 30 00:05:28.423: INFO: Waiting for pod pod-7465016f-5de3-42e6-aa4a-5db508e0dd41 to disappear May 30 00:05:28.431: INFO: Pod pod-7465016f-5de3-42e6-aa4a-5db508e0dd41 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:05:28.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3543" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":69,"skipped":1049,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:05:28.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller May 30 00:05:28.519: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8322' May 30 00:05:28.886: INFO: stderr: "" May 30 00:05:28.886: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 30 00:05:28.886: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8322' May 30 00:05:29.060: INFO: stderr: "" May 30 00:05:29.060: INFO: stdout: "update-demo-nautilus-m9tg8 update-demo-nautilus-wjxkx " May 30 00:05:29.060: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m9tg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8322' May 30 00:05:29.167: INFO: stderr: "" May 30 00:05:29.167: INFO: stdout: "" May 30 00:05:29.167: INFO: update-demo-nautilus-m9tg8 is created but not running May 30 00:05:34.168: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8322' May 30 00:05:34.270: INFO: stderr: "" May 30 00:05:34.270: INFO: stdout: "update-demo-nautilus-m9tg8 update-demo-nautilus-wjxkx " May 30 00:05:34.270: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m9tg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8322' May 30 00:05:34.382: INFO: stderr: "" May 30 00:05:34.382: INFO: stdout: "true" May 30 00:05:34.382: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m9tg8 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8322' May 30 00:05:34.499: INFO: stderr: "" May 30 00:05:34.499: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 30 00:05:34.499: INFO: validating pod update-demo-nautilus-m9tg8 May 30 00:05:34.503: INFO: got data: { "image": "nautilus.jpg" } May 30 00:05:34.503: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 30 00:05:34.503: INFO: update-demo-nautilus-m9tg8 is verified up and running May 30 00:05:34.503: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wjxkx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8322' May 30 00:05:34.604: INFO: stderr: "" May 30 00:05:34.604: INFO: stdout: "true" May 30 00:05:34.604: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wjxkx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8322' May 30 00:05:34.734: INFO: stderr: "" May 30 00:05:34.734: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 30 00:05:34.734: INFO: validating pod update-demo-nautilus-wjxkx May 30 00:05:34.738: INFO: got data: { "image": "nautilus.jpg" } May 30 00:05:34.738: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 30 00:05:34.738: INFO: update-demo-nautilus-wjxkx is verified up and running STEP: scaling down the replication controller May 30 00:05:34.740: INFO: scanned /root for discovery docs: May 30 00:05:34.740: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8322' May 30 00:05:35.892: INFO: stderr: "" May 30 00:05:35.892: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 30 00:05:35.892: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8322' May 30 00:05:36.005: INFO: stderr: "" May 30 00:05:36.005: INFO: stdout: "update-demo-nautilus-m9tg8 update-demo-nautilus-wjxkx " STEP: Replicas for name=update-demo: expected=1 actual=2 May 30 00:05:41.005: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8322' May 30 00:05:41.116: INFO: stderr: "" May 30 00:05:41.116: INFO: stdout: "update-demo-nautilus-wjxkx " May 30 00:05:41.117: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wjxkx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8322' May 30 00:05:41.227: INFO: stderr: "" May 30 00:05:41.227: INFO: stdout: "true" May 30 00:05:41.227: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wjxkx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8322' May 30 00:05:41.321: INFO: stderr: "" May 30 00:05:41.321: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 30 00:05:41.321: INFO: validating pod update-demo-nautilus-wjxkx May 30 00:05:41.324: INFO: got data: { "image": "nautilus.jpg" } May 30 00:05:41.324: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 30 00:05:41.324: INFO: update-demo-nautilus-wjxkx is verified up and running STEP: scaling up the replication controller May 30 00:05:41.327: INFO: scanned /root for discovery docs: May 30 00:05:41.327: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-8322' May 30 00:05:42.510: INFO: stderr: "" May 30 00:05:42.510: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 30 00:05:42.510: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8322' May 30 00:05:42.630: INFO: stderr: "" May 30 00:05:42.630: INFO: stdout: "update-demo-nautilus-nxbnr update-demo-nautilus-wjxkx " May 30 00:05:42.630: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nxbnr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8322' May 30 00:05:42.717: INFO: stderr: "" May 30 00:05:42.717: INFO: stdout: "" May 30 00:05:42.717: INFO: update-demo-nautilus-nxbnr is created but not running May 30 00:05:47.717: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8322' May 30 00:05:47.832: INFO: stderr: "" May 30 00:05:47.832: INFO: stdout: "update-demo-nautilus-nxbnr update-demo-nautilus-wjxkx " May 30 00:05:47.832: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nxbnr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8322' May 30 00:05:47.930: INFO: stderr: "" May 30 00:05:47.930: INFO: stdout: "true" May 30 00:05:47.930: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nxbnr -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8322' May 30 00:05:48.030: INFO: stderr: "" May 30 00:05:48.030: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 30 00:05:48.030: INFO: validating pod update-demo-nautilus-nxbnr May 30 00:05:48.035: INFO: got data: { "image": "nautilus.jpg" } May 30 00:05:48.035: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 30 00:05:48.036: INFO: update-demo-nautilus-nxbnr is verified up and running May 30 00:05:48.036: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wjxkx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8322' May 30 00:05:48.145: INFO: stderr: "" May 30 00:05:48.145: INFO: stdout: "true" May 30 00:05:48.145: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wjxkx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8322' May 30 00:05:48.248: INFO: stderr: "" May 30 00:05:48.248: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 30 00:05:48.248: INFO: validating pod update-demo-nautilus-wjxkx May 30 00:05:48.251: INFO: got data: { "image": "nautilus.jpg" } May 30 00:05:48.251: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 30 00:05:48.251: INFO: update-demo-nautilus-wjxkx is verified up and running STEP: using delete to clean up resources May 30 00:05:48.251: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8322' May 30 00:05:48.359: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 30 00:05:48.359: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 30 00:05:48.359: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8322' May 30 00:05:48.465: INFO: stderr: "No resources found in kubectl-8322 namespace.\n" May 30 00:05:48.465: INFO: stdout: "" May 30 00:05:48.465: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8322 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 30 00:05:48.570: INFO: stderr: "" May 30 00:05:48.570: INFO: stdout: "update-demo-nautilus-nxbnr\nupdate-demo-nautilus-wjxkx\n" May 30 00:05:49.070: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8322' May 30 00:05:49.262: INFO: stderr: "No resources found in kubectl-8322 namespace.\n" May 30 00:05:49.262: INFO: stdout: "" May 30 00:05:49.262: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8322 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 30 00:05:49.378: INFO: stderr: "" May 30 00:05:49.378: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:05:49.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8322" for this suite. 
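Reconstructed from the log above, the replication controller being scaled is essentially this manifest (the container port and the served nautilus data file are not visible in the log and are omitted):

apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
# The scale steps above then boil down to:
#   kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m
#   kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m
# with the test polling `kubectl get pods -l name=update-demo` between
# steps until the pod count and container state match.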
• [SLOW TEST:20.946 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":288,"completed":70,"skipped":1053,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:05:49.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 30 00:05:53.977: INFO: Successfully updated pod "pod-update-activedeadlineseconds-06d0e0a3-071e-4570-aeef-9fc57f162389" May 30 00:05:53.977: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-06d0e0a3-071e-4570-aeef-9fc57f162389" in namespace "pods-4944" to be "terminated due to deadline exceeded" May 30 00:05:53.998: INFO: Pod "pod-update-activedeadlineseconds-06d0e0a3-071e-4570-aeef-9fc57f162389": Phase="Running", Reason="", readiness=true. Elapsed: 20.648532ms May 30 00:05:56.003: INFO: Pod "pod-update-activedeadlineseconds-06d0e0a3-071e-4570-aeef-9fc57f162389": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.025226594s May 30 00:05:56.003: INFO: Pod "pod-update-activedeadlineseconds-06d0e0a3-071e-4570-aeef-9fc57f162389" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:05:56.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4944" for this suite. • [SLOW TEST:6.627 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":288,"completed":71,"skipped":1064,"failed":0} [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:05:56.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:06:12.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3395" for this suite. • [SLOW TEST:16.296 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":288,"completed":72,"skipped":1064,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:06:12.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 30 00:06:12.866: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 30 00:06:14.899: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726393972, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726393972, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726393973, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726393972, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 30 00:06:17.978: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:06:30.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8070" for this suite. STEP: Destroying namespace "webhook-8070-markers" for this suite. 
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 30 00:06:30.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 30 00:06:41.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8392" for this suite.
• [SLOW TEST:11.234 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":288,"completed":74,"skipped":1089,"failed":0}
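
Here the quota tracks object counts rather than compute resources: creating the Service increments status.used.services, and deleting it releases the charge. A minimal sketch (name and limit are illustrative):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-object-counts     # hypothetical name
spec:
  hard:
    services: "10"              # creating a Service raises status.used.services by one

kubectl describe quota quota-object-counts then shows the Used column move as the Service comes and goes, which is what the "captures service creation" / "released usage" steps assert.
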
[Conformance]","total":288,"completed":74,"skipped":1089,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:06:41.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with configMap that has name projected-configmap-test-upd-795b353e-8638-4302-9c6f-b1790cc53369 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-795b353e-8638-4302-9c6f-b1790cc53369 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:06:47.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-725" for this suite. • [SLOW TEST:6.137 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":75,"skipped":1106,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:06:47.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service multi-endpoint-test in namespace services-6876 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6876 to expose endpoints map[] May 30 00:06:47.878: INFO: Get endpoints failed (18.623908ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 30 00:06:48.882: INFO: successfully validated that service multi-endpoint-test in namespace services-6876 exposes endpoints map[] (1.02320835s elapsed) STEP: Creating pod pod1 in namespace services-6876 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6876 to expose endpoints map[pod1:[100]] May 30 00:06:53.018: INFO: successfully validated that service 
SSSSSSSSS
------------------------------
[sig-network] Services
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 30 00:06:47.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service multi-endpoint-test in namespace services-6876
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6876 to expose endpoints map[]
May 30 00:06:47.878: INFO: Get endpoints failed (18.623908ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
May 30 00:06:48.882: INFO: successfully validated that service multi-endpoint-test in namespace services-6876 exposes endpoints map[] (1.02320835s elapsed)
STEP: Creating pod pod1 in namespace services-6876
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6876 to expose endpoints map[pod1:[100]]
May 30 00:06:53.018: INFO: successfully validated that service multi-endpoint-test in namespace services-6876 exposes endpoints map[pod1:[100]] (4.127696484s elapsed)
STEP: Creating pod pod2 in namespace services-6876
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6876 to expose endpoints map[pod1:[100] pod2:[101]]
May 30 00:06:57.550: INFO: successfully validated that service multi-endpoint-test in namespace services-6876 exposes endpoints map[pod1:[100] pod2:[101]] (4.527397094s elapsed)
STEP: Deleting pod pod1 in namespace services-6876
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6876 to expose endpoints map[pod2:[101]]
May 30 00:06:58.629: INFO: successfully validated that service multi-endpoint-test in namespace services-6876 exposes endpoints map[pod2:[101]] (1.074405898s elapsed)
STEP: Deleting pod pod2 in namespace services-6876
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6876 to expose endpoints map[]
May 30 00:06:58.691: INFO: successfully validated that service multi-endpoint-test in namespace services-6876 exposes endpoints map[] (57.533728ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 30 00:06:58.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6876" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
• [SLOW TEST:11.063 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":288,"completed":76,"skipped":1115,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Pods
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 30 00:06:58.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating pod
May 30 00:07:02.900: INFO: Pod pod-hostip-3b63f83e-ba6f-4a93-91ba-423e1fd7c714 has hostIP: 172.17.0.13
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 30 00:07:02.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6647" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":288,"completed":77,"skipped":1123,"failed":0}
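
status.hostIP is the address of the node the pod was scheduled onto (172.17.0.13 here, a kind node). Besides reading it from pod status, a container can have it injected through the downward API; a minimal sketch (names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: hostip-demo                # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo host IP is $HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # the same field the test reads from pod status

kubectl get pod hostip-demo -o jsonpath='{.status.hostIP}' reads the field directly.
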
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":288,"completed":77,"skipped":1123,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:07:02.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 30 00:07:11.165: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 30 00:07:11.180: INFO: Pod pod-with-prestop-exec-hook still exists May 30 00:07:13.180: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 30 00:07:13.191: INFO: Pod pod-with-prestop-exec-hook still exists May 30 00:07:15.180: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 30 00:07:15.184: INFO: Pod pod-with-prestop-exec-hook still exists May 30 00:07:17.180: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 30 00:07:17.186: INFO: Pod pod-with-prestop-exec-hook still exists May 30 00:07:19.180: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 30 00:07:19.186: INFO: Pod pod-with-prestop-exec-hook still exists May 30 00:07:21.180: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 30 00:07:21.186: INFO: Pod pod-with-prestop-exec-hook still exists May 30 00:07:23.180: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 30 00:07:23.185: INFO: Pod pod-with-prestop-exec-hook still exists May 30 00:07:25.180: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 30 00:07:25.186: INFO: Pod pod-with-prestop-exec-hook still exists May 30 00:07:27.180: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 30 00:07:27.186: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:07:27.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5079" for this suite. 
SSS
------------------------------
[sig-api-machinery] Secrets
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 30 00:07:27.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating secret secrets-8531/secret-test-28395ab7-a255-4f08-8b9a-1b5ad8a161f9
STEP: Creating a pod to test consume secrets
May 30 00:07:27.330: INFO: Waiting up to 5m0s for pod "pod-configmaps-6198f0c9-0dca-4a6a-aaf9-45e277878ca4" in namespace "secrets-8531" to be "Succeeded or Failed"
May 30 00:07:27.420: INFO: Pod "pod-configmaps-6198f0c9-0dca-4a6a-aaf9-45e277878ca4": Phase="Pending", Reason="", readiness=false. Elapsed: 89.777753ms
May 30 00:07:29.425: INFO: Pod "pod-configmaps-6198f0c9-0dca-4a6a-aaf9-45e277878ca4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094625222s
May 30 00:07:31.428: INFO: Pod "pod-configmaps-6198f0c9-0dca-4a6a-aaf9-45e277878ca4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.098210729s
STEP: Saw pod success
May 30 00:07:31.428: INFO: Pod "pod-configmaps-6198f0c9-0dca-4a6a-aaf9-45e277878ca4" satisfied condition "Succeeded or Failed"
May 30 00:07:31.431: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-6198f0c9-0dca-4a6a-aaf9-45e277878ca4 container env-test:
STEP: delete the pod
May 30 00:07:31.536: INFO: Waiting for pod pod-configmaps-6198f0c9-0dca-4a6a-aaf9-45e277878ca4 to disappear
May 30 00:07:31.539: INFO: Pod pod-configmaps-6198f0c9-0dca-4a6a-aaf9-45e277878ca4 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 30 00:07:31.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8531" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":79,"skipped":1134,"failed":0}
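
Consuming a secret "via the environment" means a secretKeyRef rather than a volume mount: the kubelet resolves the key once at container start and injects it as an environment variable, which is why the test pod simply runs to completion and its logs are inspected. A minimal sketch (names and values are illustrative):

apiVersion: v1
kind: Secret
metadata:
  name: secret-test               # hypothetical name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: env-test                  # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1

Because environment variables are resolved only at startup, later edits to the Secret do not reach a running container, unlike the volume-based projections exercised earlier.
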
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":79,"skipped":1134,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:07:31.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 30 00:07:38.250: INFO: Successfully updated pod "labelsupdatea386b168-e138-4c4e-9384-e9f888168459" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:07:40.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4714" for this suite. • [SLOW TEST:8.753 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":80,"skipped":1137,"failed":0} SS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:07:40.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-5273, will wait for the garbage collector to delete the pods May 30 00:07:46.444: INFO: Deleting Job.batch foo took: 6.097624ms May 30 00:07:46.744: INFO: Terminating Job.batch foo pods took: 300.234403ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:08:24.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5273" for this suite. 
SSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 30 00:08:24.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 30 00:08:25.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2739" for this suite.
STEP: Destroying namespace "nspatchtest-9b13defd-e46e-4d3d-8161-97260cd3ee7d-3500" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":288,"completed":82,"skipped":1146,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 30 00:08:25.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0530 00:09:05.989717 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 30 00:09:05.989: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 30 00:09:05.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9227" for this suite.
• [SLOW TEST:40.821 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":288,"completed":83,"skipped":1159,"failed":0}
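
Orphaning is requested in the delete options, not on the object itself: with propagationPolicy: Orphan the ReplicationController is removed but its pods keep running with their ownerReferences cleared, which is what the 30-second watch above confirms. A sketch of the options body sent with the DELETE request (the apiVersion shown is the commonly accepted form; this is an assumption, not copied from the suite):

apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan          # leave dependents (the rc's pods) in place

With kubectl the same request is spelled kubectl delete rc <name> --cascade=orphan (--cascade=false on kubectl of this vintage).
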
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 30 00:09:05.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a service externalname-service with the type=ExternalName in namespace services-8658
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-8658
I0530 00:09:06.200075 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-8658, replica count: 2
I0530 00:09:09.250549 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0530 00:09:12.250806 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 30 00:09:12.250: INFO: Creating new exec pod
May 30 00:09:19.272: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8658 execpodhcmt2 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 30 00:09:19.572: INFO: stderr: "I0530 00:09:19.432034 1114 log.go:172] (0xc00084c370) (0xc00013b860) Create stream\nI0530 00:09:19.432094 1114 log.go:172] (0xc00084c370) (0xc00013b860) Stream added, broadcasting: 1\nI0530 00:09:19.434917 1114 log.go:172] (0xc00084c370) Reply frame received for 1\nI0530 00:09:19.434961 1114 log.go:172] (0xc00084c370) (0xc0009e8000) Create stream\nI0530 00:09:19.434974 1114 log.go:172] (0xc00084c370) (0xc0009e8000) Stream added, broadcasting: 3\nI0530 00:09:19.436035 1114 log.go:172] (0xc00084c370) Reply frame received for 3\nI0530 00:09:19.436100 1114 log.go:172] (0xc00084c370) (0xc0006c0e60) Create stream\nI0530 00:09:19.436119 1114 log.go:172] (0xc00084c370) (0xc0006c0e60) Stream added, broadcasting: 5\nI0530 00:09:19.437272 1114 log.go:172] (0xc00084c370) Reply frame received for 5\nI0530 00:09:19.543984 1114 log.go:172] (0xc00084c370) Data frame received for 5\nI0530 00:09:19.544166 1114 log.go:172] (0xc0006c0e60) (5) Data frame handling\nI0530 00:09:19.544201 1114 log.go:172] (0xc0006c0e60) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0530 00:09:19.561987 1114 log.go:172] (0xc00084c370) Data frame received for 5\nI0530 00:09:19.562010 1114 log.go:172] (0xc0006c0e60) (5) Data frame handling\nI0530 00:09:19.562034 1114 log.go:172] (0xc0006c0e60) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0530 00:09:19.562106 1114 log.go:172] (0xc00084c370) Data frame received for 3\nI0530 00:09:19.562128 1114 log.go:172] (0xc0009e8000) (3) Data frame handling\nI0530 00:09:19.562345 1114 log.go:172] (0xc00084c370) Data frame received for 5\nI0530 00:09:19.562362 1114 log.go:172] (0xc0006c0e60) (5) Data frame handling\nI0530 00:09:19.564356 1114 log.go:172] (0xc00084c370) Data frame received for 1\nI0530 00:09:19.564393 1114 log.go:172] (0xc00013b860) (1) Data frame handling\nI0530 00:09:19.564415 1114 log.go:172] (0xc00013b860) (1) Data frame sent\nI0530 00:09:19.564434 1114 log.go:172] (0xc00084c370) (0xc00013b860) Stream removed, broadcasting: 1\nI0530 00:09:19.564460 1114 log.go:172] (0xc00084c370) Go away received\nI0530 00:09:19.564927 1114 log.go:172] (0xc00084c370) (0xc00013b860) Stream removed, broadcasting: 1\nI0530 00:09:19.564952 1114 log.go:172] (0xc00084c370) (0xc0009e8000) Stream removed, broadcasting: 3\nI0530 00:09:19.564966 1114 log.go:172] (0xc00084c370) (0xc0006c0e60) Stream removed, broadcasting: 5\n" May 30 00:09:19.572: INFO: stdout: "" May 30 00:09:19.573: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8658 execpodhcmt2 -- /bin/sh -x -c nc -zv -t -w 2 10.108.86.103 80' May 30 00:09:19.819: INFO: stderr: "I0530 00:09:19.736606 1136 log.go:172] (0xc000a74000) (0xc0006cc140) Create stream\nI0530 00:09:19.736676 1136 log.go:172] (0xc000a74000) (0xc0006cc140) Stream added, broadcasting: 1\nI0530 00:09:19.739780 1136 log.go:172] (0xc000a74000) Reply frame received for 1\nI0530 00:09:19.739816 1136 log.go:172] (0xc000a74000) (0xc0006cd540) Create stream\nI0530 00:09:19.739826 1136 log.go:172] (0xc000a74000) (0xc0006cd540) Stream added, broadcasting: 3\nI0530 00:09:19.740616 1136 log.go:172] (0xc000a74000) Reply frame received for 3\nI0530 00:09:19.740647 1136 log.go:172] (0xc000a74000) (0xc0006b6000) Create stream\nI0530 00:09:19.740661 
1136 log.go:172] (0xc000a74000) (0xc0006b6000) Stream added, broadcasting: 5\nI0530 00:09:19.741822 1136 log.go:172] (0xc000a74000) Reply frame received for 5\nI0530 00:09:19.811703 1136 log.go:172] (0xc000a74000) Data frame received for 3\nI0530 00:09:19.811741 1136 log.go:172] (0xc0006cd540) (3) Data frame handling\nI0530 00:09:19.811955 1136 log.go:172] (0xc000a74000) Data frame received for 5\nI0530 00:09:19.811973 1136 log.go:172] (0xc0006b6000) (5) Data frame handling\nI0530 00:09:19.811988 1136 log.go:172] (0xc0006b6000) (5) Data frame sent\nI0530 00:09:19.811996 1136 log.go:172] (0xc000a74000) Data frame received for 5\nI0530 00:09:19.812006 1136 log.go:172] (0xc0006b6000) (5) Data frame handling\n+ nc -zv -t -w 2 10.108.86.103 80\nConnection to 10.108.86.103 80 port [tcp/http] succeeded!\nI0530 00:09:19.813713 1136 log.go:172] (0xc000a74000) Data frame received for 1\nI0530 00:09:19.813745 1136 log.go:172] (0xc0006cc140) (1) Data frame handling\nI0530 00:09:19.813775 1136 log.go:172] (0xc0006cc140) (1) Data frame sent\nI0530 00:09:19.813828 1136 log.go:172] (0xc000a74000) (0xc0006cc140) Stream removed, broadcasting: 1\nI0530 00:09:19.813855 1136 log.go:172] (0xc000a74000) Go away received\nI0530 00:09:19.814138 1136 log.go:172] (0xc000a74000) (0xc0006cc140) Stream removed, broadcasting: 1\nI0530 00:09:19.814152 1136 log.go:172] (0xc000a74000) (0xc0006cd540) Stream removed, broadcasting: 3\nI0530 00:09:19.814159 1136 log.go:172] (0xc000a74000) (0xc0006b6000) Stream removed, broadcasting: 5\n" May 30 00:09:19.819: INFO: stdout: "" May 30 00:09:19.819: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:09:19.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8658" for this suite. 
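
The starting point of this conversion is a selectorless Service that only publishes a CNAME; a minimal sketch (the CNAME target is illustrative, not taken from this run):

apiVersion: v1
kind: Service
metadata:
  name: externalname-service
  namespace: services-8658
spec:
  type: ExternalName
  externalName: example.com        # hypothetical CNAME target

Patching spec.type to ClusterIP (plus a port and a selector matching the replication controller's pods) allocates the cluster IP, 10.108.86.103 in this run, that the nc probes above connect to.
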
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
• [SLOW TEST:13.880 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":288,"completed":84,"skipped":1177,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 30 00:09:19.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
May 30 00:09:24.551: INFO: Successfully updated pod "labelsupdated7ad9fd8-d3cd-4917-9d32-ea2a81f2478c"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 30 00:09:26.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3296" for this suite.
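
Label updates reach the container because the downwardAPI projection is re-synced by the kubelet, so the mounted labels file is rewritten after the pod's metadata is patched. A minimal sketch (names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: labels-demo                # hypothetical name
  labels:
    tier: demo
spec:
  containers:
  - name: client
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels   # rewritten by the kubelet when labels change

kubectl label pod labels-demo tier=updated --overwrite then eventually changes the file's contents, which is the update this test observes in the volume.
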
• [SLOW TEST:6.705 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":85,"skipped":1206,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:09:26.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-9581 STEP: creating service affinity-clusterip-transition in namespace services-9581 STEP: creating replication controller affinity-clusterip-transition in namespace services-9581 I0530 00:09:26.736737 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-9581, replica count: 3 I0530 00:09:29.787174 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0530 00:09:32.787449 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0530 00:09:35.787729 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 30 00:09:35.795: INFO: Creating new exec pod May 30 00:09:40.816: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9581 execpod-affinitytg5cb -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' May 30 00:09:41.092: INFO: stderr: "I0530 00:09:40.968590 1156 log.go:172] (0xc00003a420) (0xc000307d60) Create stream\nI0530 00:09:40.968651 1156 log.go:172] (0xc00003a420) (0xc000307d60) Stream added, broadcasting: 1\nI0530 00:09:40.971128 1156 log.go:172] (0xc00003a420) Reply frame received for 1\nI0530 00:09:40.971165 1156 log.go:172] (0xc00003a420) (0xc0004445a0) Create stream\nI0530 00:09:40.971177 1156 log.go:172] (0xc00003a420) (0xc0004445a0) Stream added, broadcasting: 3\nI0530 00:09:40.971910 1156 log.go:172] (0xc00003a420) Reply frame received for 3\nI0530 00:09:40.971949 1156 log.go:172] (0xc00003a420) (0xc00058adc0) Create stream\nI0530 00:09:40.971963 1156 log.go:172] (0xc00003a420) (0xc00058adc0) Stream added, broadcasting: 5\nI0530 00:09:40.972672 1156 
log.go:172] (0xc00003a420) Reply frame received for 5\nI0530 00:09:41.061344 1156 log.go:172] (0xc00003a420) Data frame received for 5\nI0530 00:09:41.061373 1156 log.go:172] (0xc00058adc0) (5) Data frame handling\nI0530 00:09:41.061396 1156 log.go:172] (0xc00058adc0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nI0530 00:09:41.084691 1156 log.go:172] (0xc00003a420) Data frame received for 5\nI0530 00:09:41.084713 1156 log.go:172] (0xc00058adc0) (5) Data frame handling\nI0530 00:09:41.084727 1156 log.go:172] (0xc00058adc0) (5) Data frame sent\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0530 00:09:41.084977 1156 log.go:172] (0xc00003a420) Data frame received for 3\nI0530 00:09:41.084994 1156 log.go:172] (0xc0004445a0) (3) Data frame handling\nI0530 00:09:41.085073 1156 log.go:172] (0xc00003a420) Data frame received for 5\nI0530 00:09:41.085089 1156 log.go:172] (0xc00058adc0) (5) Data frame handling\nI0530 00:09:41.087186 1156 log.go:172] (0xc00003a420) Data frame received for 1\nI0530 00:09:41.087198 1156 log.go:172] (0xc000307d60) (1) Data frame handling\nI0530 00:09:41.087209 1156 log.go:172] (0xc000307d60) (1) Data frame sent\nI0530 00:09:41.087218 1156 log.go:172] (0xc00003a420) (0xc000307d60) Stream removed, broadcasting: 1\nI0530 00:09:41.087442 1156 log.go:172] (0xc00003a420) Go away received\nI0530 00:09:41.087485 1156 log.go:172] (0xc00003a420) (0xc000307d60) Stream removed, broadcasting: 1\nI0530 00:09:41.087500 1156 log.go:172] (0xc00003a420) (0xc0004445a0) Stream removed, broadcasting: 3\nI0530 00:09:41.087508 1156 log.go:172] (0xc00003a420) (0xc00058adc0) Stream removed, broadcasting: 5\n" May 30 00:09:41.092: INFO: stdout: "" May 30 00:09:41.093: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9581 execpod-affinitytg5cb -- /bin/sh -x -c nc -zv -t -w 2 10.105.114.24 80' May 30 00:09:41.302: INFO: stderr: "I0530 00:09:41.228366 1176 log.go:172] (0xc000af6c60) (0xc0004f1cc0) Create stream\nI0530 00:09:41.228408 1176 log.go:172] (0xc000af6c60) (0xc0004f1cc0) Stream added, broadcasting: 1\nI0530 00:09:41.230487 1176 log.go:172] (0xc000af6c60) Reply frame received for 1\nI0530 00:09:41.230514 1176 log.go:172] (0xc000af6c60) (0xc000238140) Create stream\nI0530 00:09:41.230521 1176 log.go:172] (0xc000af6c60) (0xc000238140) Stream added, broadcasting: 3\nI0530 00:09:41.231349 1176 log.go:172] (0xc000af6c60) Reply frame received for 3\nI0530 00:09:41.231389 1176 log.go:172] (0xc000af6c60) (0xc00012a0a0) Create stream\nI0530 00:09:41.231402 1176 log.go:172] (0xc000af6c60) (0xc00012a0a0) Stream added, broadcasting: 5\nI0530 00:09:41.232117 1176 log.go:172] (0xc000af6c60) Reply frame received for 5\nI0530 00:09:41.295251 1176 log.go:172] (0xc000af6c60) Data frame received for 3\nI0530 00:09:41.295292 1176 log.go:172] (0xc000238140) (3) Data frame handling\nI0530 00:09:41.295332 1176 log.go:172] (0xc000af6c60) Data frame received for 5\nI0530 00:09:41.295364 1176 log.go:172] (0xc00012a0a0) (5) Data frame handling\nI0530 00:09:41.295386 1176 log.go:172] (0xc00012a0a0) (5) Data frame sent\nI0530 00:09:41.295403 1176 log.go:172] (0xc000af6c60) Data frame received for 5\nI0530 00:09:41.295413 1176 log.go:172] (0xc00012a0a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.105.114.24 80\nConnection to 10.105.114.24 80 port [tcp/http] succeeded!\nI0530 00:09:41.296605 1176 log.go:172] (0xc000af6c60) Data frame received for 1\nI0530 00:09:41.296628 1176 
log.go:172] (0xc0004f1cc0) (1) Data frame handling\nI0530 00:09:41.296643 1176 log.go:172] (0xc0004f1cc0) (1) Data frame sent\nI0530 00:09:41.296664 1176 log.go:172] (0xc000af6c60) (0xc0004f1cc0) Stream removed, broadcasting: 1\nI0530 00:09:41.296694 1176 log.go:172] (0xc000af6c60) Go away received\nI0530 00:09:41.296920 1176 log.go:172] (0xc000af6c60) (0xc0004f1cc0) Stream removed, broadcasting: 1\nI0530 00:09:41.296936 1176 log.go:172] (0xc000af6c60) (0xc000238140) Stream removed, broadcasting: 3\nI0530 00:09:41.296942 1176 log.go:172] (0xc000af6c60) (0xc00012a0a0) Stream removed, broadcasting: 5\n" May 30 00:09:41.302: INFO: stdout: "" May 30 00:09:41.311: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9581 execpod-affinitytg5cb -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.105.114.24:80/ ; done' May 30 00:09:41.683: INFO: stderr: "I0530 00:09:41.456707 1196 log.go:172] (0xc00097d600) (0xc00067e5a0) Create stream\nI0530 00:09:41.456771 1196 log.go:172] (0xc00097d600) (0xc00067e5a0) Stream added, broadcasting: 1\nI0530 00:09:41.460381 1196 log.go:172] (0xc00097d600) Reply frame received for 1\nI0530 00:09:41.460430 1196 log.go:172] (0xc00097d600) (0xc0006965a0) Create stream\nI0530 00:09:41.460445 1196 log.go:172] (0xc00097d600) (0xc0006965a0) Stream added, broadcasting: 3\nI0530 00:09:41.461671 1196 log.go:172] (0xc00097d600) Reply frame received for 3\nI0530 00:09:41.461707 1196 log.go:172] (0xc00097d600) (0xc00067eaa0) Create stream\nI0530 00:09:41.461717 1196 log.go:172] (0xc00097d600) (0xc00067eaa0) Stream added, broadcasting: 5\nI0530 00:09:41.462546 1196 log.go:172] (0xc00097d600) Reply frame received for 5\nI0530 00:09:41.531724 1196 log.go:172] (0xc00097d600) Data frame received for 5\nI0530 00:09:41.531775 1196 log.go:172] (0xc00067eaa0) (5) Data frame handling\nI0530 00:09:41.531801 1196 log.go:172] (0xc00067eaa0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.24:80/\nI0530 00:09:41.531844 1196 log.go:172] (0xc00097d600) Data frame received for 3\nI0530 00:09:41.531859 1196 log.go:172] (0xc0006965a0) (3) Data frame handling\nI0530 00:09:41.531885 1196 log.go:172] (0xc0006965a0) (3) Data frame sent\nI0530 00:09:41.576558 1196 log.go:172] (0xc00097d600) Data frame received for 3\nI0530 00:09:41.576586 1196 log.go:172] (0xc0006965a0) (3) Data frame handling\nI0530 00:09:41.576603 1196 log.go:172] (0xc0006965a0) (3) Data frame sent\nI0530 00:09:41.577295 1196 log.go:172] (0xc00097d600) Data frame received for 5\nI0530 00:09:41.577312 1196 log.go:172] (0xc00067eaa0) (5) Data frame handling\nI0530 00:09:41.577324 1196 log.go:172] (0xc00067eaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.24:80/\nI0530 00:09:41.577471 1196 log.go:172] (0xc00097d600) Data frame received for 3\nI0530 00:09:41.577491 1196 log.go:172] (0xc0006965a0) (3) Data frame handling\nI0530 00:09:41.577501 1196 log.go:172] (0xc0006965a0) (3) Data frame sent\nI0530 00:09:41.584678 1196 log.go:172] (0xc00097d600) Data frame received for 3\nI0530 00:09:41.584698 1196 log.go:172] (0xc0006965a0) (3) Data frame handling\nI0530 00:09:41.584717 1196 log.go:172] (0xc0006965a0) (3) Data frame sent\nI0530 00:09:41.585749 1196 log.go:172] (0xc00097d600) Data frame received for 3\nI0530 00:09:41.585827 1196 log.go:172] (0xc0006965a0) (3) Data frame handling\nI0530 00:09:41.585853 1196 log.go:172] (0xc0006965a0) (3) Data 
frame sent\nI0530 00:09:41.585885 1196 log.go:172] (0xc00097d600) Data frame received for 5\nI0530 00:09:41.585897 1196 log.go:172] (0xc00067eaa0) (5) Data frame handling\nI0530 00:09:41.585914 1196 log.go:172] (0xc00067eaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.24:80/\nI0530 00:09:41.594520 1196 log.go:172] (0xc00097d600) Data frame received for 3\nI0530 00:09:41.594542 1196 log.go:172] (0xc0006965a0) (3) Data frame handling\nI0530 00:09:41.594559 1196 log.go:172] (0xc0006965a0) (3) Data frame sent\nI0530 00:09:41.594933 1196 log.go:172] (0xc00097d600) Data frame received for 3\nI0530 00:09:41.594946 1196 log.go:172] (0xc0006965a0) (3) Data frame handling\nI0530 00:09:41.594953 1196 log.go:172] (0xc0006965a0) (3) Data frame sent\nI0530 00:09:41.595140 1196 log.go:172] (0xc00097d600) Data frame received for 5\nI0530 00:09:41.595155 1196 log.go:172] (0xc00067eaa0) (5) Data frame handling\nI0530 00:09:41.595166 1196 log.go:172] (0xc00067eaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.24:80/\nI0530 00:09:41.600377 1196 log.go:172] (0xc00097d600) Data frame received for 3\nI0530 00:09:41.600397 1196 log.go:172] (0xc0006965a0) (3) Data frame handling\nI0530 00:09:41.600408 1196 log.go:172] (0xc0006965a0) (3) Data frame sent\nI0530 00:09:41.600783 1196 log.go:172] (0xc00097d600) Data frame received for 5\nI0530 00:09:41.600794 1196 log.go:172] (0xc00067eaa0) (5) Data frame handling\nI0530 00:09:41.600800 1196 log.go:172] (0xc00067eaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.24:80/\nI0530 00:09:41.600817 1196 log.go:172] (0xc00097d600) Data frame received for 3\nI0530 00:09:41.600837 1196 log.go:172] (0xc0006965a0) (3) Data frame handling\nI0530 00:09:41.600850 1196 log.go:172] (0xc0006965a0) (3) Data frame sent\nI0530 00:09:41.605699 1196 log.go:172] (0xc00097d600) Data frame received for 3\nI0530 00:09:41.605715 1196 log.go:172] (0xc0006965a0) (3) Data frame handling\nI0530 00:09:41.605728 1196 log.go:172] (0xc0006965a0) (3) Data frame sent\nI0530 00:09:41.606194 1196 log.go:172] (0xc00097d600) Data frame received for 3\nI0530 00:09:41.606227 1196 log.go:172] (0xc00097d600) Data frame received for 5\nI0530 00:09:41.606250 1196 log.go:172] (0xc00067eaa0) (5) Data frame handling\nI0530 00:09:41.606260 1196 log.go:172] (0xc00067eaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.24:80/\nI0530 00:09:41.606275 1196 log.go:172] (0xc0006965a0) (3) Data frame handling\nI0530 00:09:41.606285 1196 log.go:172] (0xc0006965a0) (3) Data frame sent\nI0530 00:09:41.611775 1196 log.go:172] (0xc00097d600) Data frame received for 3\nI0530 00:09:41.611819 1196 log.go:172] (0xc0006965a0) (3) Data frame handling\nI0530 00:09:41.611835 1196 log.go:172] (0xc0006965a0) (3) Data frame sent\nI0530 00:09:41.612140 1196 log.go:172] (0xc00097d600) Data frame received for 3\nI0530 00:09:41.612168 1196 log.go:172] (0xc0006965a0) (3) Data frame handling\nI0530 00:09:41.612180 1196 log.go:172] (0xc0006965a0) (3) Data frame sent\nI0530 00:09:41.612204 1196 log.go:172] (0xc00097d600) Data frame received for 5\nI0530 00:09:41.612220 1196 log.go:172] (0xc00067eaa0) (5) Data frame handling\nI0530 00:09:41.612230 1196 log.go:172] (0xc00067eaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.24:80/\nI0530 00:09:41.616960 1196 log.go:172] (0xc00097d600) Data frame received for 3\nI0530 00:09:41.616979 1196 log.go:172] (0xc0006965a0) (3) Data frame 
handling\nI0530 00:09:41.616995 1196 log.go:172] (0xc0006965a0) (3) Data frame sent\nI0530 00:09:41.617772 1196 log.go:172] (0xc00097d600) Data frame received for 5\nI0530 00:09:41.617792 1196 log.go:172] (0xc00067eaa0) (5) Data frame handling\nI0530 00:09:41.617804 1196 log.go:172] (0xc00067eaa0) (5) Data frame sent\nI0530 00:09:41.617827 1196 log.go:172] (0xc00097d600) Data frame received for 3\nI0530 00:09:41.617852 1196 log.go:172] (0xc0006965a0) (3) Data frame handling\nI0530 00:09:41.617867 1196 log.go:172] (0xc0006965a0) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.24:80/\nI0530 00:09:41.625420 1196 log.go:172] (0xc00097d600) Data frame received for 3\nI0530 00:09:41.625444 1196 log.go:172] (0xc0006965a0) (3) Data frame handling\nI0530 00:09:41.625463 1196 log.go:172] (0xc0006965a0) (3) Data frame sent\nI0530 00:09:41.625995 1196 log.go:172] (0xc00097d600) Data frame received for 3\nI0530 00:09:41.626018 1196 log.go:172] (0xc0006965a0) (3) Data frame handling\nI0530 00:09:41.626042 1196 log.go:172] (0xc0006965a0) (3) Data frame sent\nI0530 00:09:41.626117 1196 log.go:172] (0xc00097d600) Data frame received for 5\nI0530 00:09:41.626133 1196 log.go:172] (0xc00067eaa0) (5) Data frame handling\nI0530 00:09:41.626149 1196 log.go:172] (0xc00067eaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.24:80/\nI0530 00:09:41.629975 1196 log.go:172] (0xc00097d600) Data frame received for 3\nI0530 00:09:41.629995 1196 log.go:172] (0xc0006965a0) (3) Data frame handling\nI0530 00:09:41.630017 1196 log.go:172] (0xc0006965a0) (3) Data frame sent\nI0530 00:09:41.630411 1196 log.go:172] (0xc00097d600) Data frame received for 5\nI0530 00:09:41.630432 1196 log.go:172] (0xc00067eaa0) (5) Data frame handling\nI0530 00:09:41.630454 1196 log.go:172] (0xc00067eaa0) (5) Data frame sent\nI0530 00:09:41.630465 1196 log.go:172] (0xc00097d600) Data frame received for 5\nI0530 00:09:41.630474 1196 log.go:172] (0xc00067eaa0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.24:80/\nI0530 00:09:41.630505 1196 log.go:172] (0xc00067eaa0) (5) Data frame sent\nI0530 00:09:41.630521 1196 log.go:172] (0xc00097d600) Data frame received for 3\nI0530 00:09:41.630530 1196 log.go:172] (0xc0006965a0) (3) Data frame handling\nI0530 00:09:41.630545 1196 log.go:172] (0xc0006965a0) (3) Data frame sent\nI0530 00:09:41.637927 1196 log.go:172] (0xc00097d600) Data frame received for 3\nI0530 00:09:41.637969 1196 log.go:172] (0xc0006965a0) (3) Data frame handling\nI0530 00:09:41.637999 1196 log.go:172] (0xc0006965a0) (3) Data frame sent\nI0530 00:09:41.638442 1196 log.go:172] (0xc00097d600) Data frame received for 3\nI0530 00:09:41.638472 1196 log.go:172] (0xc0006965a0) (3) Data frame handling\nI0530 00:09:41.638487 1196 log.go:172] (0xc0006965a0) (3) Data frame sent\nI0530 00:09:41.638508 1196 log.go:172] (0xc00097d600) Data frame received for 5\nI0530 00:09:41.638522 1196 log.go:172] (0xc00067eaa0) (5) Data frame handling\nI0530 00:09:41.638538 1196 log.go:172] (0xc00067eaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.24:80/\nI0530 00:09:41.642902 1196 log.go:172] (0xc00097d600) Data frame received for 3\nI0530 00:09:41.642917 1196 log.go:172] (0xc0006965a0) (3) Data frame handling\nI0530 00:09:41.642929 1196 log.go:172] (0xc0006965a0) (3) Data frame sent\nI0530 00:09:41.643311 1196 log.go:172] (0xc00097d600) Data frame received for 3\nI0530 00:09:41.643329 1196 log.go:172] (0xc0006965a0) (3) Data 
frame handling\nI0530 00:09:41.643348 1196 log.go:172] (0xc00097d600) Data frame received for 5\nI0530 00:09:41.643379 1196 log.go:172] (0xc00067eaa0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.24:80/\nI0530 00:09:41.643402 1196 log.go:172] (0xc0006965a0) (3) Data frame sent\nI0530 00:09:41.643435 1196 log.go:172] (0xc00067eaa0) (5) Data frame sent\nI0530 00:09:41.647993 1196 log.go:172] (0xc00097d600) Data frame received for 3\nI0530 00:09:41.648009 1196 log.go:172] (0xc0006965a0) (3) Data frame handling\nI0530 00:09:41.648022 1196 log.go:172] (0xc0006965a0) (3) Data frame sent\nI0530 00:09:41.648287 1196 log.go:172] (0xc00097d600) Data frame received for 5\nI0530 00:09:41.648324 1196 log.go:172] (0xc00067eaa0) (5) Data frame handling\nI0530 00:09:41.648342 1196 log.go:172] (0xc00067eaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.24:80/\nI0530 00:09:41.648374 1196 log.go:172] (0xc00097d600) Data frame received for 3\nI0530 00:09:41.648386 1196 log.go:172] (0xc0006965a0) (3) Data frame handling\nI0530 00:09:41.648402 1196 log.go:172] (0xc0006965a0) (3) Data frame sent\nI0530 00:09:41.656603 1196 log.go:172] (0xc00097d600) Data frame received for 3\nI0530 00:09:41.656631 1196 log.go:172] (0xc0006965a0) (3) Data frame handling\nI0530 00:09:41.656648 1196 log.go:172] (0xc0006965a0) (3) Data frame sent\nI0530 00:09:41.657080 1196 log.go:172] (0xc00097d600) Data frame received for 3\nI0530 00:09:41.657094 1196 log.go:172] (0xc0006965a0) (3) Data frame handling\nI0530 00:09:41.657103 1196 log.go:172] (0xc0006965a0) (3) Data frame sent\nI0530 00:09:41.657334 1196 log.go:172] (0xc00097d600) Data frame received for 5\nI0530 00:09:41.657378 1196 log.go:172] (0xc00067eaa0) (5) Data frame handling\nI0530 00:09:41.657401 1196 log.go:172] (0xc00067eaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.24:80/\nI0530 00:09:41.662230 1196 log.go:172] (0xc00097d600) Data frame received for 3\nI0530 00:09:41.662250 1196 log.go:172] (0xc0006965a0) (3) Data frame handling\nI0530 00:09:41.662277 1196 log.go:172] (0xc0006965a0) (3) Data frame sent\nI0530 00:09:41.662663 1196 log.go:172] (0xc00097d600) Data frame received for 5\nI0530 00:09:41.662678 1196 log.go:172] (0xc00067eaa0) (5) Data frame handling\nI0530 00:09:41.662688 1196 log.go:172] (0xc00067eaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.24:80/\nI0530 00:09:41.662725 1196 log.go:172] (0xc00097d600) Data frame received for 3\nI0530 00:09:41.662761 1196 log.go:172] (0xc0006965a0) (3) Data frame handling\nI0530 00:09:41.662788 1196 log.go:172] (0xc0006965a0) (3) Data frame sent\nI0530 00:09:41.667811 1196 log.go:172] (0xc00097d600) Data frame received for 3\nI0530 00:09:41.667835 1196 log.go:172] (0xc0006965a0) (3) Data frame handling\nI0530 00:09:41.667857 1196 log.go:172] (0xc0006965a0) (3) Data frame sent\nI0530 00:09:41.668465 1196 log.go:172] (0xc00097d600) Data frame received for 3\nI0530 00:09:41.668498 1196 log.go:172] (0xc0006965a0) (3) Data frame handling\nI0530 00:09:41.668516 1196 log.go:172] (0xc0006965a0) (3) Data frame sent\nI0530 00:09:41.668542 1196 log.go:172] (0xc00097d600) Data frame received for 5\nI0530 00:09:41.668558 1196 log.go:172] (0xc00067eaa0) (5) Data frame handling\nI0530 00:09:41.668579 1196 log.go:172] (0xc00067eaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.24:80/\nI0530 00:09:41.674108 1196 log.go:172] (0xc00097d600) Data frame received 
for 3\nI0530 00:09:41.674130 1196 log.go:172] (0xc0006965a0) (3) Data frame handling\nI0530 00:09:41.674155 1196 log.go:172] (0xc0006965a0) (3) Data frame sent\nI0530 00:09:41.674685 1196 log.go:172] (0xc00097d600) Data frame received for 3\nI0530 00:09:41.674707 1196 log.go:172] (0xc0006965a0) (3) Data frame handling\nI0530 00:09:41.674728 1196 log.go:172] (0xc00097d600) Data frame received for 5\nI0530 00:09:41.674761 1196 log.go:172] (0xc00067eaa0) (5) Data frame handling\nI0530 00:09:41.676830 1196 log.go:172] (0xc00097d600) Data frame received for 1\nI0530 00:09:41.676847 1196 log.go:172] (0xc00067e5a0) (1) Data frame handling\nI0530 00:09:41.676870 1196 log.go:172] (0xc00067e5a0) (1) Data frame sent\nI0530 00:09:41.676891 1196 log.go:172] (0xc00097d600) (0xc00067e5a0) Stream removed, broadcasting: 1\nI0530 00:09:41.677559 1196 log.go:172] (0xc00097d600) Go away received\nI0530 00:09:41.677759 1196 log.go:172] (0xc00097d600) (0xc00067e5a0) Stream removed, broadcasting: 1\nI0530 00:09:41.677793 1196 log.go:172] (0xc00097d600) (0xc0006965a0) Stream removed, broadcasting: 3\nI0530 00:09:41.677806 1196 log.go:172] (0xc00097d600) (0xc00067eaa0) Stream removed, broadcasting: 5\n" May 30 00:09:41.684: INFO: stdout: "\naffinity-clusterip-transition-472kg\naffinity-clusterip-transition-cr4kf\naffinity-clusterip-transition-cr4kf\naffinity-clusterip-transition-ckmdp\naffinity-clusterip-transition-472kg\naffinity-clusterip-transition-cr4kf\naffinity-clusterip-transition-ckmdp\naffinity-clusterip-transition-ckmdp\naffinity-clusterip-transition-cr4kf\naffinity-clusterip-transition-ckmdp\naffinity-clusterip-transition-472kg\naffinity-clusterip-transition-cr4kf\naffinity-clusterip-transition-cr4kf\naffinity-clusterip-transition-cr4kf\naffinity-clusterip-transition-cr4kf\naffinity-clusterip-transition-472kg" May 30 00:09:41.684: INFO: Received response from host: May 30 00:09:41.684: INFO: Received response from host: affinity-clusterip-transition-472kg May 30 00:09:41.684: INFO: Received response from host: affinity-clusterip-transition-cr4kf May 30 00:09:41.684: INFO: Received response from host: affinity-clusterip-transition-cr4kf May 30 00:09:41.684: INFO: Received response from host: affinity-clusterip-transition-ckmdp May 30 00:09:41.684: INFO: Received response from host: affinity-clusterip-transition-472kg May 30 00:09:41.684: INFO: Received response from host: affinity-clusterip-transition-cr4kf May 30 00:09:41.684: INFO: Received response from host: affinity-clusterip-transition-ckmdp May 30 00:09:41.684: INFO: Received response from host: affinity-clusterip-transition-ckmdp May 30 00:09:41.684: INFO: Received response from host: affinity-clusterip-transition-cr4kf May 30 00:09:41.684: INFO: Received response from host: affinity-clusterip-transition-ckmdp May 30 00:09:41.684: INFO: Received response from host: affinity-clusterip-transition-472kg May 30 00:09:41.684: INFO: Received response from host: affinity-clusterip-transition-cr4kf May 30 00:09:41.684: INFO: Received response from host: affinity-clusterip-transition-cr4kf May 30 00:09:41.684: INFO: Received response from host: affinity-clusterip-transition-cr4kf May 30 00:09:41.684: INFO: Received response from host: affinity-clusterip-transition-cr4kf May 30 00:09:41.684: INFO: Received response from host: affinity-clusterip-transition-472kg May 30 00:09:41.692: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9581 execpod-affinitytg5cb -- /bin/sh -x -c 
for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.105.114.24:80/ ; done' May 30 00:09:41.992: INFO: stderr: "I0530 00:09:41.833817 1215 log.go:172] (0xc00097a000) (0xc000560280) Create stream\nI0530 00:09:41.833895 1215 log.go:172] (0xc00097a000) (0xc000560280) Stream added, broadcasting: 1\nI0530 00:09:41.836606 1215 log.go:172] (0xc00097a000) Reply frame received for 1\nI0530 00:09:41.836649 1215 log.go:172] (0xc00097a000) (0xc000516000) Create stream\nI0530 00:09:41.836661 1215 log.go:172] (0xc00097a000) (0xc000516000) Stream added, broadcasting: 3\nI0530 00:09:41.838016 1215 log.go:172] (0xc00097a000) Reply frame received for 3\nI0530 00:09:41.838081 1215 log.go:172] (0xc00097a000) (0xc000430500) Create stream\nI0530 00:09:41.838116 1215 log.go:172] (0xc00097a000) (0xc000430500) Stream added, broadcasting: 5\nI0530 00:09:41.839091 1215 log.go:172] (0xc00097a000) Reply frame received for 5\nI0530 00:09:41.905414 1215 log.go:172] (0xc00097a000) Data frame received for 3\nI0530 00:09:41.905450 1215 log.go:172] (0xc000516000) (3) Data frame handling\nI0530 00:09:41.905464 1215 log.go:172] (0xc000516000) (3) Data frame sent\nI0530 00:09:41.905503 1215 log.go:172] (0xc00097a000) Data frame received for 5\nI0530 00:09:41.905546 1215 log.go:172] (0xc000430500) (5) Data frame handling\nI0530 00:09:41.905572 1215 log.go:172] (0xc000430500) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.24:80/\nI0530 00:09:41.909000 1215 log.go:172] (0xc00097a000) Data frame received for 3\nI0530 00:09:41.909021 1215 log.go:172] (0xc000516000) (3) Data frame handling\nI0530 00:09:41.909036 1215 log.go:172] (0xc000516000) (3) Data frame sent\nI0530 00:09:41.909446 1215 log.go:172] (0xc00097a000) Data frame received for 5\nI0530 00:09:41.909481 1215 log.go:172] (0xc000430500) (5) Data frame handling\nI0530 00:09:41.909506 1215 log.go:172] (0xc000430500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.24:80/\nI0530 00:09:41.909752 1215 log.go:172] (0xc00097a000) Data frame received for 3\nI0530 00:09:41.909767 1215 log.go:172] (0xc000516000) (3) Data frame handling\nI0530 00:09:41.909782 1215 log.go:172] (0xc000516000) (3) Data frame sent\nI0530 00:09:41.915465 1215 log.go:172] (0xc00097a000) Data frame received for 3\nI0530 00:09:41.915488 1215 log.go:172] (0xc000516000) (3) Data frame handling\nI0530 00:09:41.915525 1215 log.go:172] (0xc000516000) (3) Data frame sent\nI0530 00:09:41.916094 1215 log.go:172] (0xc00097a000) Data frame received for 5\nI0530 00:09:41.916118 1215 log.go:172] (0xc000430500) (5) Data frame handling\nI0530 00:09:41.916131 1215 log.go:172] (0xc000430500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.24:80/\nI0530 00:09:41.916156 1215 log.go:172] (0xc00097a000) Data frame received for 3\nI0530 00:09:41.916196 1215 log.go:172] (0xc000516000) (3) Data frame handling\nI0530 00:09:41.916225 1215 log.go:172] (0xc000516000) (3) Data frame sent\nI0530 00:09:41.920803 1215 log.go:172] (0xc00097a000) Data frame received for 3\nI0530 00:09:41.920827 1215 log.go:172] (0xc000516000) (3) Data frame handling\nI0530 00:09:41.920866 1215 log.go:172] (0xc000516000) (3) Data frame sent\nI0530 00:09:41.921104 1215 log.go:172] (0xc00097a000) Data frame received for 3\nI0530 00:09:41.921427 1215 log.go:172] (0xc000516000) (3) Data frame handling\nI0530 00:09:41.921522 1215 log.go:172] (0xc000516000) (3) Data frame sent\nI0530 00:09:41.921568 1215 log.go:172] (0xc00097a000) Data 
frame received for 5\nI0530 00:09:41.921596 1215 log.go:172] (0xc000430500) (5) Data frame handling\nI0530 00:09:41.921625 1215 log.go:172] (0xc000430500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.24:80/\nI0530 00:09:41.925444 1215 log.go:172] (0xc00097a000) Data frame received for 3\nI0530 00:09:41.925479 1215 log.go:172] (0xc000516000) (3) Data frame handling\nI0530 00:09:41.925512 1215 log.go:172] (0xc000516000) (3) Data frame sent\nI0530 00:09:41.925822 1215 log.go:172] (0xc00097a000) Data frame received for 3\nI0530 00:09:41.925850 1215 log.go:172] (0xc000516000) (3) Data frame handling\nI0530 00:09:41.925895 1215 log.go:172] (0xc000516000) (3) Data frame sent\nI0530 00:09:41.925990 1215 log.go:172] (0xc00097a000) Data frame received for 5\nI0530 00:09:41.926017 1215 log.go:172] (0xc000430500) (5) Data frame handling\nI0530 00:09:41.926036 1215 log.go:172] (0xc000430500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.24:80/\nI0530 00:09:41.929556 1215 log.go:172] (0xc00097a000) Data frame received for 3\nI0530 00:09:41.929575 1215 log.go:172] (0xc000516000) (3) Data frame handling\nI0530 00:09:41.929588 1215 log.go:172] (0xc000516000) (3) Data frame sent\nI0530 00:09:41.930489 1215 log.go:172] (0xc00097a000) Data frame received for 3\nI0530 00:09:41.930513 1215 log.go:172] (0xc000516000) (3) Data frame handling\nI0530 00:09:41.930529 1215 log.go:172] (0xc000516000) (3) Data frame sent\nI0530 00:09:41.930555 1215 log.go:172] (0xc00097a000) Data frame received for 5\nI0530 00:09:41.930570 1215 log.go:172] (0xc000430500) (5) Data frame handling\nI0530 00:09:41.930600 1215 log.go:172] (0xc000430500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.24:80/\nI0530 00:09:41.935390 1215 log.go:172] (0xc00097a000) Data frame received for 3\nI0530 00:09:41.935409 1215 log.go:172] (0xc000516000) (3) Data frame handling\nI0530 00:09:41.935425 1215 log.go:172] (0xc000516000) (3) Data frame sent\nI0530 00:09:41.936020 1215 log.go:172] (0xc00097a000) Data frame received for 5\nI0530 00:09:41.936054 1215 log.go:172] (0xc000430500) (5) Data frame handling\nI0530 00:09:41.936070 1215 log.go:172] (0xc000430500) (5) Data frame sent\nI0530 00:09:41.936080 1215 log.go:172] (0xc00097a000) Data frame received for 5\nI0530 00:09:41.936088 1215 log.go:172] (0xc000430500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.24:80/\nI0530 00:09:41.936112 1215 log.go:172] (0xc000430500) (5) Data frame sent\nI0530 00:09:41.936124 1215 log.go:172] (0xc00097a000) Data frame received for 3\nI0530 00:09:41.936133 1215 log.go:172] (0xc000516000) (3) Data frame handling\nI0530 00:09:41.936147 1215 log.go:172] (0xc000516000) (3) Data frame sent\nI0530 00:09:41.944088 1215 log.go:172] (0xc00097a000) Data frame received for 3\nI0530 00:09:41.944115 1215 log.go:172] (0xc000516000) (3) Data frame handling\nI0530 00:09:41.944144 1215 log.go:172] (0xc000516000) (3) Data frame sent\nI0530 00:09:41.944737 1215 log.go:172] (0xc00097a000) Data frame received for 3\nI0530 00:09:41.944763 1215 log.go:172] (0xc000516000) (3) Data frame handling\nI0530 00:09:41.944785 1215 log.go:172] (0xc000516000) (3) Data frame sent\nI0530 00:09:41.944831 1215 log.go:172] (0xc00097a000) Data frame received for 5\nI0530 00:09:41.944843 1215 log.go:172] (0xc000430500) (5) Data frame handling\nI0530 00:09:41.944856 1215 log.go:172] (0xc000430500) (5) Data frame sent\n+ echo\nI0530 00:09:41.944868 1215 log.go:172] 
(0xc00097a000) Data frame received for 5\nI0530 00:09:41.944895 1215 log.go:172] (0xc000430500) (5) Data frame handling\nI0530 00:09:41.944910 1215 log.go:172] (0xc000430500) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.105.114.24:80/\nI0530 00:09:41.948255 1215 log.go:172] (0xc00097a000) Data frame received for 3\nI0530 00:09:41.948284 1215 log.go:172] (0xc000516000) (3) Data frame handling\nI0530 00:09:41.948300 1215 log.go:172] (0xc000516000) (3) Data frame sent\nI0530 00:09:41.948610 1215 log.go:172] (0xc00097a000) Data frame received for 3\nI0530 00:09:41.948665 1215 log.go:172] (0xc000516000) (3) Data frame handling\nI0530 00:09:41.948695 1215 log.go:172] (0xc000516000) (3) Data frame sent\nI0530 00:09:41.948753 1215 log.go:172] (0xc00097a000) Data frame received for 5\nI0530 00:09:41.948785 1215 log.go:172] (0xc000430500) (5) Data frame handling\nI0530 00:09:41.948819 1215 log.go:172] (0xc000430500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.24:80/\nI0530 00:09:41.952258 1215 log.go:172] (0xc00097a000) Data frame received for 3\nI0530 00:09:41.952282 1215 log.go:172] (0xc000516000) (3) Data frame handling\nI0530 00:09:41.952317 1215 log.go:172] (0xc000516000) (3) Data frame sent\nI0530 00:09:41.952972 1215 log.go:172] (0xc00097a000) Data frame received for 3\nI0530 00:09:41.952984 1215 log.go:172] (0xc000516000) (3) Data frame handling\nI0530 00:09:41.952990 1215 log.go:172] (0xc000516000) (3) Data frame sent\nI0530 00:09:41.953011 1215 log.go:172] (0xc00097a000) Data frame received for 5\nI0530 00:09:41.953039 1215 log.go:172] (0xc000430500) (5) Data frame handling\nI0530 00:09:41.953055 1215 log.go:172] (0xc000430500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.24:80/\nI0530 00:09:41.956341 1215 log.go:172] (0xc00097a000) Data frame received for 3\nI0530 00:09:41.956353 1215 log.go:172] (0xc000516000) (3) Data frame handling\nI0530 00:09:41.956362 1215 log.go:172] (0xc000516000) (3) Data frame sent\nI0530 00:09:41.956893 1215 log.go:172] (0xc00097a000) Data frame received for 3\nI0530 00:09:41.956924 1215 log.go:172] (0xc000516000) (3) Data frame handling\nI0530 00:09:41.956943 1215 log.go:172] (0xc000516000) (3) Data frame sent\nI0530 00:09:41.956972 1215 log.go:172] (0xc00097a000) Data frame received for 5\nI0530 00:09:41.956993 1215 log.go:172] (0xc000430500) (5) Data frame handling\nI0530 00:09:41.957019 1215 log.go:172] (0xc000430500) (5) Data frame sent\nI0530 00:09:41.957036 1215 log.go:172] (0xc00097a000) Data frame received for 5\nI0530 00:09:41.957051 1215 log.go:172] (0xc000430500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.24:80/\nI0530 00:09:41.957090 1215 log.go:172] (0xc000430500) (5) Data frame sent\nI0530 00:09:41.961690 1215 log.go:172] (0xc00097a000) Data frame received for 3\nI0530 00:09:41.961711 1215 log.go:172] (0xc000516000) (3) Data frame handling\nI0530 00:09:41.961733 1215 log.go:172] (0xc000516000) (3) Data frame sent\nI0530 00:09:41.961954 1215 log.go:172] (0xc00097a000) Data frame received for 5\nI0530 00:09:41.961970 1215 log.go:172] (0xc000430500) (5) Data frame handling\nI0530 00:09:41.961980 1215 log.go:172] (0xc000430500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.24:80/\nI0530 00:09:41.961995 1215 log.go:172] (0xc00097a000) Data frame received for 3\nI0530 00:09:41.962007 1215 log.go:172] (0xc000516000) (3) Data frame handling\nI0530 00:09:41.962017 1215 log.go:172] 
(0xc000516000) (3) Data frame sent\nI0530 00:09:41.965011 1215 log.go:172] (0xc00097a000) Data frame received for 3\nI0530 00:09:41.965041 1215 log.go:172] (0xc000516000) (3) Data frame handling\nI0530 00:09:41.965087 1215 log.go:172] (0xc000516000) (3) Data frame sent\nI0530 00:09:41.965832 1215 log.go:172] (0xc00097a000) Data frame received for 5\nI0530 00:09:41.965871 1215 log.go:172] (0xc000430500) (5) Data frame handling\nI0530 00:09:41.965892 1215 log.go:172] (0xc000430500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.24:80/\nI0530 00:09:41.965912 1215 log.go:172] (0xc00097a000) Data frame received for 3\nI0530 00:09:41.965929 1215 log.go:172] (0xc000516000) (3) Data frame handling\nI0530 00:09:41.965959 1215 log.go:172] (0xc000516000) (3) Data frame sent\nI0530 00:09:41.968529 1215 log.go:172] (0xc00097a000) Data frame received for 3\nI0530 00:09:41.968570 1215 log.go:172] (0xc000516000) (3) Data frame handling\nI0530 00:09:41.968623 1215 log.go:172] (0xc000516000) (3) Data frame sent\nI0530 00:09:41.968991 1215 log.go:172] (0xc00097a000) Data frame received for 5\nI0530 00:09:41.969016 1215 log.go:172] (0xc000430500) (5) Data frame handling\nI0530 00:09:41.969055 1215 log.go:172] (0xc000430500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.24:80/\nI0530 00:09:41.969099 1215 log.go:172] (0xc00097a000) Data frame received for 3\nI0530 00:09:41.969337 1215 log.go:172] (0xc000516000) (3) Data frame handling\nI0530 00:09:41.969366 1215 log.go:172] (0xc000516000) (3) Data frame sent\nI0530 00:09:41.973407 1215 log.go:172] (0xc00097a000) Data frame received for 3\nI0530 00:09:41.973534 1215 log.go:172] (0xc000516000) (3) Data frame handling\nI0530 00:09:41.973563 1215 log.go:172] (0xc000516000) (3) Data frame sent\nI0530 00:09:41.974908 1215 log.go:172] (0xc00097a000) Data frame received for 3\nI0530 00:09:41.974933 1215 log.go:172] (0xc000516000) (3) Data frame handling\nI0530 00:09:41.974970 1215 log.go:172] (0xc000516000) (3) Data frame sent\nI0530 00:09:41.974993 1215 log.go:172] (0xc00097a000) Data frame received for 5\nI0530 00:09:41.975009 1215 log.go:172] (0xc000430500) (5) Data frame handling\nI0530 00:09:41.975021 1215 log.go:172] (0xc000430500) (5) Data frame sent\nI0530 00:09:41.975032 1215 log.go:172] (0xc00097a000) Data frame received for 5\nI0530 00:09:41.975045 1215 log.go:172] (0xc000430500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.24:80/\nI0530 00:09:41.975075 1215 log.go:172] (0xc000430500) (5) Data frame sent\nI0530 00:09:41.979734 1215 log.go:172] (0xc00097a000) Data frame received for 3\nI0530 00:09:41.979763 1215 log.go:172] (0xc000516000) (3) Data frame handling\nI0530 00:09:41.979801 1215 log.go:172] (0xc000516000) (3) Data frame sent\nI0530 00:09:41.980179 1215 log.go:172] (0xc00097a000) Data frame received for 3\nI0530 00:09:41.980221 1215 log.go:172] (0xc000516000) (3) Data frame handling\nI0530 00:09:41.980237 1215 log.go:172] (0xc000516000) (3) Data frame sent\nI0530 00:09:41.980259 1215 log.go:172] (0xc00097a000) Data frame received for 5\nI0530 00:09:41.980278 1215 log.go:172] (0xc000430500) (5) Data frame handling\nI0530 00:09:41.980306 1215 log.go:172] (0xc000430500) (5) Data frame sent\nI0530 00:09:41.980323 1215 log.go:172] (0xc00097a000) Data frame received for 5\nI0530 00:09:41.980337 1215 log.go:172] (0xc000430500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.24:80/\nI0530 00:09:41.980366 1215 
log.go:172] (0xc000430500) (5) Data frame sent\nI0530 00:09:41.984980 1215 log.go:172] (0xc00097a000) Data frame received for 3\nI0530 00:09:41.984999 1215 log.go:172] (0xc000516000) (3) Data frame handling\nI0530 00:09:41.985016 1215 log.go:172] (0xc000516000) (3) Data frame sent\nI0530 00:09:41.985700 1215 log.go:172] (0xc00097a000) Data frame received for 5\nI0530 00:09:41.985724 1215 log.go:172] (0xc000430500) (5) Data frame handling\nI0530 00:09:41.985919 1215 log.go:172] (0xc00097a000) Data frame received for 3\nI0530 00:09:41.985940 1215 log.go:172] (0xc000516000) (3) Data frame handling\nI0530 00:09:41.987442 1215 log.go:172] (0xc00097a000) Data frame received for 1\nI0530 00:09:41.987468 1215 log.go:172] (0xc000560280) (1) Data frame handling\nI0530 00:09:41.987483 1215 log.go:172] (0xc000560280) (1) Data frame sent\nI0530 00:09:41.987503 1215 log.go:172] (0xc00097a000) (0xc000560280) Stream removed, broadcasting: 1\nI0530 00:09:41.987530 1215 log.go:172] (0xc00097a000) Go away received\nI0530 00:09:41.987904 1215 log.go:172] (0xc00097a000) (0xc000560280) Stream removed, broadcasting: 1\nI0530 00:09:41.987921 1215 log.go:172] (0xc00097a000) (0xc000516000) Stream removed, broadcasting: 3\nI0530 00:09:41.987930 1215 log.go:172] (0xc00097a000) (0xc000430500) Stream removed, broadcasting: 5\n" May 30 00:09:41.993: INFO: stdout: "\naffinity-clusterip-transition-ckmdp\naffinity-clusterip-transition-ckmdp\naffinity-clusterip-transition-ckmdp\naffinity-clusterip-transition-ckmdp\naffinity-clusterip-transition-ckmdp\naffinity-clusterip-transition-ckmdp\naffinity-clusterip-transition-ckmdp\naffinity-clusterip-transition-ckmdp\naffinity-clusterip-transition-ckmdp\naffinity-clusterip-transition-ckmdp\naffinity-clusterip-transition-ckmdp\naffinity-clusterip-transition-ckmdp\naffinity-clusterip-transition-ckmdp\naffinity-clusterip-transition-ckmdp\naffinity-clusterip-transition-ckmdp\naffinity-clusterip-transition-ckmdp" May 30 00:09:41.993: INFO: Received response from host: May 30 00:09:41.993: INFO: Received response from host: affinity-clusterip-transition-ckmdp May 30 00:09:41.993: INFO: Received response from host: affinity-clusterip-transition-ckmdp May 30 00:09:41.993: INFO: Received response from host: affinity-clusterip-transition-ckmdp May 30 00:09:41.993: INFO: Received response from host: affinity-clusterip-transition-ckmdp May 30 00:09:41.993: INFO: Received response from host: affinity-clusterip-transition-ckmdp May 30 00:09:41.993: INFO: Received response from host: affinity-clusterip-transition-ckmdp May 30 00:09:41.993: INFO: Received response from host: affinity-clusterip-transition-ckmdp May 30 00:09:41.993: INFO: Received response from host: affinity-clusterip-transition-ckmdp May 30 00:09:41.993: INFO: Received response from host: affinity-clusterip-transition-ckmdp May 30 00:09:41.993: INFO: Received response from host: affinity-clusterip-transition-ckmdp May 30 00:09:41.993: INFO: Received response from host: affinity-clusterip-transition-ckmdp May 30 00:09:41.993: INFO: Received response from host: affinity-clusterip-transition-ckmdp May 30 00:09:41.993: INFO: Received response from host: affinity-clusterip-transition-ckmdp May 30 00:09:41.993: INFO: Received response from host: affinity-clusterip-transition-ckmdp May 30 00:09:41.993: INFO: Received response from host: affinity-clusterip-transition-ckmdp May 30 00:09:41.993: INFO: Received response from host: affinity-clusterip-transition-ckmdp May 30 00:09:41.993: INFO: Cleaning up the exec pod STEP: deleting 
ReplicationController affinity-clusterip-transition in namespace services-9581, will wait for the garbage collector to delete the pods May 30 00:09:42.754: INFO: Deleting ReplicationController affinity-clusterip-transition took: 236.805759ms May 30 00:09:43.054: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 300.268491ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:09:55.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9581" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:28.805 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":86,"skipped":1239,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:09:55.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:09:59.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4259" for this suite. 
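[For reference — the session-affinity spec that passed above (namespace services-9581) drives its curl loop against a ClusterIP Service whose sessionAffinity field is toggled mid-test, which is why the first stdout spreads across pods and the second sticks to one. A minimal sketch of such a Service; names and values are illustrative, not the suite's actual fixture, which is built programmatically:]

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: affinity-clusterip-transition
spec:
  type: ClusterIP
  selector:
    app: affinity-backend          # hypothetical backend label
  sessionAffinity: ClientIP        # requests from one client stick to one pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800        # default affinity timeout
  ports:
  - port: 80
    targetPort: 80
EOF
# Switching affinity off again is a one-line patch; the curl loop in the log
# then observes responses spreading across endpoints once more:
#   kubectl patch service affinity-clusterip-transition \
#     -p '{"spec":{"sessionAffinity":"None"}}'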
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":288,"completed":87,"skipped":1247,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:09:59.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-8b46b8d7-97af-4ebd-92f5-74a0ec43ed22 STEP: Creating a pod to test consume secrets May 30 00:09:59.993: INFO: Waiting up to 5m0s for pod "pod-secrets-732b01bb-bff7-412c-b6f0-1992011430cf" in namespace "secrets-4596" to be "Succeeded or Failed" May 30 00:10:00.035: INFO: Pod "pod-secrets-732b01bb-bff7-412c-b6f0-1992011430cf": Phase="Pending", Reason="", readiness=false. Elapsed: 41.897124ms May 30 00:10:02.133: INFO: Pod "pod-secrets-732b01bb-bff7-412c-b6f0-1992011430cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139807704s May 30 00:10:04.139: INFO: Pod "pod-secrets-732b01bb-bff7-412c-b6f0-1992011430cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.1456827s STEP: Saw pod success May 30 00:10:04.139: INFO: Pod "pod-secrets-732b01bb-bff7-412c-b6f0-1992011430cf" satisfied condition "Succeeded or Failed" May 30 00:10:04.141: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-732b01bb-bff7-412c-b6f0-1992011430cf container secret-volume-test: STEP: delete the pod May 30 00:10:04.241: INFO: Waiting for pod pod-secrets-732b01bb-bff7-412c-b6f0-1992011430cf to disappear May 30 00:10:04.246: INFO: Pod pod-secrets-732b01bb-bff7-412c-b6f0-1992011430cf no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:10:04.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4596" for this suite. 
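[For reference — the "consumable in multiple volumes" spec above amounts to mounting the same Secret at two paths in one pod. A minimal sketch under that assumption; pod and secret names are hypothetical:]

kubectl create secret generic secret-test --from-literal=data=value   # hypothetical fixture
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo           # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume-1/data /etc/secret-volume-2/data"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:                         # the same Secret backs both volumes
  - name: secret-volume-1
    secret:
      secretName: secret-test
  - name: secret-volume-2
    secret:
      secretName: secret-test
EOF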
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":88,"skipped":1252,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:10:04.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1559 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 30 00:10:04.295: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-2579' May 30 00:10:04.488: INFO: stderr: "" May 30 00:10:04.488: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 30 00:10:09.538: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-2579 -o json' May 30 00:10:09.657: INFO: stderr: "" May 30 00:10:09.657: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-30T00:10:04Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-30T00:10:04Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": 
{},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.2.107\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-30T00:10:07Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-2579\",\n \"resourceVersion\": \"8736406\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-2579/pods/e2e-test-httpd-pod\",\n \"uid\": \"d942b6be-a587-48af-98ea-0638607a0ec9\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-6bg9j\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-6bg9j\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-6bg9j\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-30T00:10:04Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-30T00:10:07Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-30T00:10:07Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-30T00:10:04Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://f79bdb78511cdc623ae798e7c2068ad71eef6964ea69abadf17b55c9a5367cfe\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-30T00:10:06Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.12\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.107\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.107\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-30T00:10:04Z\"\n }\n}\n" STEP: replace the image in the pod May 30 00:10:09.657: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-2579' May 30 00:10:10.560: INFO: stderr: "" May 30 00:10:10.560: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image 
docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1564 May 30 00:10:10.575: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-2579' May 30 00:10:25.262: INFO: stderr: "" May 30 00:10:25.262: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:10:25.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2579" for this suite. • [SLOW TEST:21.059 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1555 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":288,"completed":89,"skipped":1281,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:10:25.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 30 00:10:26.056: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 30 00:10:28.072: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726394226, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726394226, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726394226, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726394226, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 30 00:10:31.111: 
INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:10:31.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-836" for this suite. STEP: Destroying namespace "webhook-836-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.042 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":288,"completed":90,"skipped":1284,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:10:31.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 30 00:10:31.999: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 30 00:10:34.010: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726394232, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726394232, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726394232, 
loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726394231, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 30 00:10:37.046: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:10:37.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9043" for this suite. STEP: Destroying namespace "webhook-9043-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.934 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":288,"completed":91,"skipped":1294,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:10:37.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 30 00:10:41.426: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:10:41.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1711" for this suite. 
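[For reference — the termination-message spec above ("Expected: &{OK} to match ... OK") hinges on terminationMessagePath plus the FallbackToLogsOnError policy: when the container writes the file, the file (not the logs) supplies the message. A minimal reproduction sketch with hypothetical names:]

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: termination-message-container
    image: docker.io/library/busybox:1.29
    # Writes the message file itself, so the fallback to logs never triggers.
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# Once the pod succeeds, the message surfaces in the container status:
kubectl get pod termination-message-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'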
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":92,"skipped":1311,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:10:41.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-2d7ee5f2-65cf-452a-890d-5db87199a0b9 STEP: Creating configMap with name cm-test-opt-upd-f462e777-261d-46af-98e4-94aedfb180ca STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-2d7ee5f2-65cf-452a-890d-5db87199a0b9 STEP: Updating configmap cm-test-opt-upd-f462e777-261d-46af-98e4-94aedfb180ca STEP: Creating configMap with name cm-test-opt-create-4d9a0d69-b761-49c1-bc25-c0771e1481a9 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:10:51.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8814" for this suite. 
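[For reference — the "optional updates" spec above combines a projected volume with optional ConfigMap sources: a missing ConfigMap does not block the mount, and later creates/updates/deletes are eventually reflected in the volume, which is what the test waits to observe. A sketch under that reading, with hypothetical names:]

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo          # hypothetical name
spec:
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected
  volumes:
  - name: projected-volume
    projected:
      sources:
      - configMap:
          name: cm-test-opt-create # may not exist yet; optional makes that OK
          optional: true
      - configMap:
          name: cm-test-opt-upd
          optional: true
EOF
# Creating or updating the ConfigMaps afterwards shows up under /etc/projected
# without restarting the pod.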
• [SLOW TEST:10.237 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":93,"skipped":1317,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:10:51.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 30 00:10:51.815: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 30 00:11:02.486: INFO: >>> kubeConfig: /root/.kube/config May 30 00:11:05.462: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:11:15.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6028" for this suite. 
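[For reference — the "same group but different versions (one multiversion CRD)" case above corresponds to a single CustomResourceDefinition serving two versions, each with its own schema published into the aggregated OpenAPI document. A trimmed sketch; group, kind, and schemas are hypothetical:]

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.crd-demo.example.com  # must be <plural>.<group>
spec:
  group: crd-demo.example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:                        # two served versions of one group/kind
  - name: v1
    served: true
    storage: true                  # exactly one version is the storage version
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
EOF
# Both versions then appear in the published OpenAPI document, e.g.:
#   kubectl get --raw /openapi/v2 | grep crd-demo.example.com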
• [SLOW TEST:24.111 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":288,"completed":94,"skipped":1329,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:11:15.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 30 00:11:15.906: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b15f2801-ebe3-48d6-bb35-8ac865b9213f" in namespace "downward-api-1093" to be "Succeeded or Failed" May 30 00:11:15.955: INFO: Pod "downwardapi-volume-b15f2801-ebe3-48d6-bb35-8ac865b9213f": Phase="Pending", Reason="", readiness=false. Elapsed: 48.4315ms May 30 00:11:18.027: INFO: Pod "downwardapi-volume-b15f2801-ebe3-48d6-bb35-8ac865b9213f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12045541s May 30 00:11:20.032: INFO: Pod "downwardapi-volume-b15f2801-ebe3-48d6-bb35-8ac865b9213f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.125575316s STEP: Saw pod success May 30 00:11:20.032: INFO: Pod "downwardapi-volume-b15f2801-ebe3-48d6-bb35-8ac865b9213f" satisfied condition "Succeeded or Failed" May 30 00:11:20.035: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-b15f2801-ebe3-48d6-bb35-8ac865b9213f container client-container: STEP: delete the pod May 30 00:11:20.134: INFO: Waiting for pod downwardapi-volume-b15f2801-ebe3-48d6-bb35-8ac865b9213f to disappear May 30 00:11:20.139: INFO: Pod downwardapi-volume-b15f2801-ebe3-48d6-bb35-8ac865b9213f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:11:20.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1093" for this suite. 
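[For reference — the "set mode on item file" spec above sets per-item file permissions on a downward API volume and asserts them from inside the container. A minimal sketch with hypothetical names and labels:]

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo    # hypothetical name
  labels:
    zone: us-east-1a               # hypothetical label exposed via the volume
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/labels"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        mode: 0400                 # octal per-item mode; what the test asserts
        fieldRef:
          fieldPath: metadata.labels
EOF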
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":95,"skipped":1350,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:11:20.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 30 00:11:20.316: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2163 /api/v1/namespaces/watch-2163/configmaps/e2e-watch-test-configmap-a b80df0fd-f88a-4acb-9bd8-82853a164a62 8736910 0 2020-05-30 00:11:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-30 00:11:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 30 00:11:20.316: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2163 /api/v1/namespaces/watch-2163/configmaps/e2e-watch-test-configmap-a b80df0fd-f88a-4acb-9bd8-82853a164a62 8736910 0 2020-05-30 00:11:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-30 00:11:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 30 00:11:30.335: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2163 /api/v1/namespaces/watch-2163/configmaps/e2e-watch-test-configmap-a b80df0fd-f88a-4acb-9bd8-82853a164a62 8736954 0 2020-05-30 00:11:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-30 00:11:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 30 00:11:30.335: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2163 /api/v1/namespaces/watch-2163/configmaps/e2e-watch-test-configmap-a b80df0fd-f88a-4acb-9bd8-82853a164a62 8736954 0 2020-05-30 00:11:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-30 00:11:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 30 
00:11:40.342: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2163 /api/v1/namespaces/watch-2163/configmaps/e2e-watch-test-configmap-a b80df0fd-f88a-4acb-9bd8-82853a164a62 8736982 0 2020-05-30 00:11:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-30 00:11:40 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 30 00:11:40.342: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2163 /api/v1/namespaces/watch-2163/configmaps/e2e-watch-test-configmap-a b80df0fd-f88a-4acb-9bd8-82853a164a62 8736982 0 2020-05-30 00:11:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-30 00:11:40 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 30 00:11:50.377: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2163 /api/v1/namespaces/watch-2163/configmaps/e2e-watch-test-configmap-a b80df0fd-f88a-4acb-9bd8-82853a164a62 8737012 0 2020-05-30 00:11:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-30 00:11:40 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 30 00:11:50.377: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2163 /api/v1/namespaces/watch-2163/configmaps/e2e-watch-test-configmap-a b80df0fd-f88a-4acb-9bd8-82853a164a62 8737012 0 2020-05-30 00:11:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-30 00:11:40 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 30 00:12:00.407: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2163 /api/v1/namespaces/watch-2163/configmaps/e2e-watch-test-configmap-b 2ce89af4-0546-43bb-ae9c-0d3ad35e296c 8737042 0 2020-05-30 00:12:00 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-30 00:12:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 30 00:12:00.407: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2163 /api/v1/namespaces/watch-2163/configmaps/e2e-watch-test-configmap-b 2ce89af4-0546-43bb-ae9c-0d3ad35e296c 8737042 0 2020-05-30 00:12:00 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-30 00:12:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 30 00:12:10.439: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2163 /api/v1/namespaces/watch-2163/configmaps/e2e-watch-test-configmap-b 2ce89af4-0546-43bb-ae9c-0d3ad35e296c 8737072 0 2020-05-30 00:12:00 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-30 00:12:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 30 00:12:10.439: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2163 /api/v1/namespaces/watch-2163/configmaps/e2e-watch-test-configmap-b 2ce89af4-0546-43bb-ae9c-0d3ad35e296c 8737072 0 2020-05-30 00:12:00 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-30 00:12:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:12:20.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2163" for this suite. • [SLOW TEST:60.303 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":288,"completed":96,"skipped":1368,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:12:20.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium May 30 00:12:20.548: INFO: Waiting up to 5m0s for pod "pod-be656a8a-3c16-4366-b682-12bf6b37aaff" in namespace "emptydir-8153" to be "Succeeded or Failed" May 30 00:12:20.565: INFO: Pod "pod-be656a8a-3c16-4366-b682-12bf6b37aaff": Phase="Pending", Reason="", readiness=false. Elapsed: 17.873615ms May 30 00:12:22.570: INFO: Pod "pod-be656a8a-3c16-4366-b682-12bf6b37aaff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022780739s May 30 00:12:24.575: INFO: Pod "pod-be656a8a-3c16-4366-b682-12bf6b37aaff": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027190784s STEP: Saw pod success May 30 00:12:24.575: INFO: Pod "pod-be656a8a-3c16-4366-b682-12bf6b37aaff" satisfied condition "Succeeded or Failed" May 30 00:12:24.579: INFO: Trying to get logs from node latest-worker2 pod pod-be656a8a-3c16-4366-b682-12bf6b37aaff container test-container: STEP: delete the pod May 30 00:12:24.779: INFO: Waiting for pod pod-be656a8a-3c16-4366-b682-12bf6b37aaff to disappear May 30 00:12:24.803: INFO: Pod pod-be656a8a-3c16-4366-b682-12bf6b37aaff no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:12:24.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8153" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":97,"skipped":1372,"failed":0} SSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:12:24.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-153.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-153.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-153.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-153.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-153.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-153.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 30 00:12:31.047: INFO: DNS probes using dns-153/dns-test-5f8cb9b8-1035-4d45-b8a2-9fb057869fd9 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:12:31.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-153" for this suite. • [SLOW TEST:6.869 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":288,"completed":98,"skipped":1376,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:12:31.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-7cff3dd4-9809-4a0c-9090-021817a2ff2a STEP: Creating secret with name s-test-opt-upd-b43683fe-2627-4240-af3d-4f33a0d55e42 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-7cff3dd4-9809-4a0c-9090-021817a2ff2a STEP: Updating secret s-test-opt-upd-b43683fe-2627-4240-af3d-4f33a0d55e42 STEP: Creating secret with name s-test-opt-create-5d79d911-0f6b-4efd-8f21-dc628e63b0c2 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:14:06.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5481" for this suite. 
• [SLOW TEST:95.208 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":99,"skipped":1390,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:14:06.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-80d01038-c817-4e83-ad42-09fcc567b7c7 STEP: Creating a pod to test consume configMaps May 30 00:14:07.044: INFO: Waiting up to 5m0s for pod "pod-configmaps-9300cecb-d1a7-4ced-bb22-00d13e946db6" in namespace "configmap-3746" to be "Succeeded or Failed" May 30 00:14:07.125: INFO: Pod "pod-configmaps-9300cecb-d1a7-4ced-bb22-00d13e946db6": Phase="Pending", Reason="", readiness=false. Elapsed: 80.369416ms May 30 00:14:09.130: INFO: Pod "pod-configmaps-9300cecb-d1a7-4ced-bb22-00d13e946db6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085273789s May 30 00:14:11.134: INFO: Pod "pod-configmaps-9300cecb-d1a7-4ced-bb22-00d13e946db6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.089995469s STEP: Saw pod success May 30 00:14:11.134: INFO: Pod "pod-configmaps-9300cecb-d1a7-4ced-bb22-00d13e946db6" satisfied condition "Succeeded or Failed" May 30 00:14:11.138: INFO: Trying to get logs from node latest-worker pod pod-configmaps-9300cecb-d1a7-4ced-bb22-00d13e946db6 container configmap-volume-test: STEP: delete the pod May 30 00:14:11.188: INFO: Waiting for pod pod-configmaps-9300cecb-d1a7-4ced-bb22-00d13e946db6 to disappear May 30 00:14:11.194: INFO: Pod pod-configmaps-9300cecb-d1a7-4ced-bb22-00d13e946db6 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:14:11.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3746" for this suite. 
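The defaultMode assertion in the ConfigMap-volume spec above comes from the mode bits on the ConfigMap volume source, applied to every projected file unless an individual item overrides it. A sketch of the relevant fragment follows; the mode value and the volume/configmap names are assumptions standing in for the generated configmap-test-volume-<uuid> above.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // 0400 yields r-------- files, the kind of mode a mount-test
        // container can read back and verify.
        mode := int32(0400)
        vol := corev1.Volume{
            Name: "configmap-volume",
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
                    DefaultMode:          &mode,
                },
            },
        }
        fmt.Printf("%+v\n", vol)
    }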
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":100,"skipped":1392,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:14:11.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 30 00:14:11.267: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 30 00:14:11.282: INFO: Waiting for terminating namespaces to be deleted... May 30 00:14:11.285: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 30 00:14:11.290: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 30 00:14:11.290: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 30 00:14:11.290: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 30 00:14:11.290: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 30 00:14:11.290: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 30 00:14:11.290: INFO: Container kindnet-cni ready: true, restart count 2 May 30 00:14:11.290: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 30 00:14:11.290: INFO: Container kube-proxy ready: true, restart count 0 May 30 00:14:11.290: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 30 00:14:11.296: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 30 00:14:11.296: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 30 00:14:11.296: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) May 30 00:14:11.296: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 30 00:14:11.296: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 30 00:14:11.296: INFO: Container kindnet-cni ready: true, restart count 2 May 30 00:14:11.296: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 30 00:14:11.296: INFO: Container kube-proxy ready: true, restart count 0 May 30 00:14:11.296: INFO: pod-projected-secrets-8ffdc480-b0d3-4036-8d2e-ab8ca7195d86 from projected-5481 started at 2020-05-30 00:12:32 +0000 UTC (3 container statuses recorded) May 30 00:14:11.296: INFO: Container creates-volume-test ready: true, restart count 0 May 30 
00:14:11.296: INFO: Container dels-volume-test ready: true, restart count 0 May 30 00:14:11.296: INFO: Container upds-volume-test ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 May 30 00:14:11.386: INFO: Pod rally-c184502e-30nwopzm requesting resource cpu=0m on Node latest-worker May 30 00:14:11.386: INFO: Pod terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 requesting resource cpu=0m on Node latest-worker2 May 30 00:14:11.386: INFO: Pod kindnet-hg2tf requesting resource cpu=100m on Node latest-worker May 30 00:14:11.386: INFO: Pod kindnet-jl4dn requesting resource cpu=100m on Node latest-worker2 May 30 00:14:11.386: INFO: Pod kube-proxy-c8n27 requesting resource cpu=0m on Node latest-worker May 30 00:14:11.386: INFO: Pod kube-proxy-pcmmp requesting resource cpu=0m on Node latest-worker2 May 30 00:14:11.386: INFO: Pod pod-projected-secrets-8ffdc480-b0d3-4036-8d2e-ab8ca7195d86 requesting resource cpu=0m on Node latest-worker2 STEP: Starting Pods to consume most of the cluster CPU. May 30 00:14:11.386: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker May 30 00:14:11.392: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-70a8cd07-e0f9-428f-ada8-ff236796378b.1613a6091bd98517], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7075/filler-pod-70a8cd07-e0f9-428f-ada8-ff236796378b to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-70a8cd07-e0f9-428f-ada8-ff236796378b.1613a609b04ae98b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-70a8cd07-e0f9-428f-ada8-ff236796378b.1613a60a17c63d10], Reason = [Created], Message = [Created container filler-pod-70a8cd07-e0f9-428f-ada8-ff236796378b] STEP: Considering event: Type = [Normal], Name = [filler-pod-70a8cd07-e0f9-428f-ada8-ff236796378b.1613a60a26d976c6], Reason = [Started], Message = [Started container filler-pod-70a8cd07-e0f9-428f-ada8-ff236796378b] STEP: Considering event: Type = [Normal], Name = [filler-pod-a5ad4d56-0137-45dd-ab3f-4bc19b357470.1613a6091a7a7108], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7075/filler-pod-a5ad4d56-0137-45dd-ab3f-4bc19b357470 to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-a5ad4d56-0137-45dd-ab3f-4bc19b357470.1613a6097a247b36], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-a5ad4d56-0137-45dd-ab3f-4bc19b357470.1613a609fb7894e2], Reason = [Created], Message = [Created container filler-pod-a5ad4d56-0137-45dd-ab3f-4bc19b357470] STEP: Considering event: Type = [Normal], Name = [filler-pod-a5ad4d56-0137-45dd-ab3f-4bc19b357470.1613a60a120d62cb], Reason = [Started], Message = [Started container filler-pod-a5ad4d56-0137-45dd-ab3f-4bc19b357470] STEP: Considering event: Type = [Warning], Name = [additional-pod.1613a60a8348d262], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't 
tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.1613a60a86d73309], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:14:18.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7075" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:7.350 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":288,"completed":101,"skipped":1468,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:14:18.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium May 30 00:14:18.676: INFO: Waiting up to 5m0s for pod "pod-4a10255c-419e-4c58-8a13-85eeedf27a75" in namespace "emptydir-5488" to be "Succeeded or Failed" May 30 00:14:18.728: INFO: Pod "pod-4a10255c-419e-4c58-8a13-85eeedf27a75": Phase="Pending", Reason="", readiness=false. Elapsed: 51.447579ms May 30 00:14:20.732: INFO: Pod "pod-4a10255c-419e-4c58-8a13-85eeedf27a75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055581248s May 30 00:14:22.736: INFO: Pod "pod-4a10255c-419e-4c58-8a13-85eeedf27a75": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.059741137s STEP: Saw pod success May 30 00:14:22.736: INFO: Pod "pod-4a10255c-419e-4c58-8a13-85eeedf27a75" satisfied condition "Succeeded or Failed" May 30 00:14:22.739: INFO: Trying to get logs from node latest-worker pod pod-4a10255c-419e-4c58-8a13-85eeedf27a75 container test-container: STEP: delete the pod May 30 00:14:22.767: INFO: Waiting for pod pod-4a10255c-419e-4c58-8a13-85eeedf27a75 to disappear May 30 00:14:22.782: INFO: Pod pod-4a10255c-419e-4c58-8a13-85eeedf27a75 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:14:22.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5488" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":102,"skipped":1473,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:14:22.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 30 00:14:22.929: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8af1760a-964d-461e-a050-0410f448af1b" in namespace "downward-api-1982" to be "Succeeded or Failed" May 30 00:14:22.932: INFO: Pod "downwardapi-volume-8af1760a-964d-461e-a050-0410f448af1b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.064628ms May 30 00:14:24.941: INFO: Pod "downwardapi-volume-8af1760a-964d-461e-a050-0410f448af1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012404005s May 30 00:14:26.945: INFO: Pod "downwardapi-volume-8af1760a-964d-461e-a050-0410f448af1b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016725022s May 30 00:14:28.950: INFO: Pod "downwardapi-volume-8af1760a-964d-461e-a050-0410f448af1b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.021501494s STEP: Saw pod success May 30 00:14:28.950: INFO: Pod "downwardapi-volume-8af1760a-964d-461e-a050-0410f448af1b" satisfied condition "Succeeded or Failed" May 30 00:14:28.954: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-8af1760a-964d-461e-a050-0410f448af1b container client-container: STEP: delete the pod May 30 00:14:29.030: INFO: Waiting for pod downwardapi-volume-8af1760a-964d-461e-a050-0410f448af1b to disappear May 30 00:14:29.036: INFO: Pod downwardapi-volume-8af1760a-964d-461e-a050-0410f448af1b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:14:29.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1982" for this suite. • [SLOW TEST:6.254 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":103,"skipped":1480,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:14:29.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-1476/configmap-test-8c1cbc61-1d5a-4ec7-ba89-ec744dd9e325 STEP: Creating a pod to test consume configMaps May 30 00:14:29.249: INFO: Waiting up to 5m0s for pod "pod-configmaps-6e1507c9-6090-46d5-aa18-b8b6c8816349" in namespace "configmap-1476" to be "Succeeded or Failed" May 30 00:14:29.383: INFO: Pod "pod-configmaps-6e1507c9-6090-46d5-aa18-b8b6c8816349": Phase="Pending", Reason="", readiness=false. Elapsed: 133.58927ms May 30 00:14:31.387: INFO: Pod "pod-configmaps-6e1507c9-6090-46d5-aa18-b8b6c8816349": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137886273s May 30 00:14:33.391: INFO: Pod "pod-configmaps-6e1507c9-6090-46d5-aa18-b8b6c8816349": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.141901745s STEP: Saw pod success May 30 00:14:33.391: INFO: Pod "pod-configmaps-6e1507c9-6090-46d5-aa18-b8b6c8816349" satisfied condition "Succeeded or Failed" May 30 00:14:33.393: INFO: Trying to get logs from node latest-worker pod pod-configmaps-6e1507c9-6090-46d5-aa18-b8b6c8816349 container env-test: STEP: delete the pod May 30 00:14:33.558: INFO: Waiting for pod pod-configmaps-6e1507c9-6090-46d5-aa18-b8b6c8816349 to disappear May 30 00:14:33.705: INFO: Pod pod-configmaps-6e1507c9-6090-46d5-aa18-b8b6c8816349 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:14:33.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1476" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":288,"completed":104,"skipped":1497,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:14:33.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:14:33.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6087" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":288,"completed":105,"skipped":1534,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:14:33.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 30 00:14:34.025: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2059c988-5899-4a77-88de-d8fe946ed390" in namespace "projected-1242" to be "Succeeded or Failed" May 30 00:14:34.043: INFO: Pod "downwardapi-volume-2059c988-5899-4a77-88de-d8fe946ed390": Phase="Pending", Reason="", readiness=false. Elapsed: 17.923419ms May 30 00:14:36.053: INFO: Pod "downwardapi-volume-2059c988-5899-4a77-88de-d8fe946ed390": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027763291s May 30 00:14:38.057: INFO: Pod "downwardapi-volume-2059c988-5899-4a77-88de-d8fe946ed390": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031873659s STEP: Saw pod success May 30 00:14:38.057: INFO: Pod "downwardapi-volume-2059c988-5899-4a77-88de-d8fe946ed390" satisfied condition "Succeeded or Failed" May 30 00:14:38.060: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-2059c988-5899-4a77-88de-d8fe946ed390 container client-container: STEP: delete the pod May 30 00:14:38.178: INFO: Waiting for pod downwardapi-volume-2059c988-5899-4a77-88de-d8fe946ed390 to disappear May 30 00:14:38.182: INFO: Pod downwardapi-volume-2059c988-5899-4a77-88de-d8fe946ed390 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:14:38.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1242" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":106,"skipped":1541,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:14:38.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 30 00:14:38.307: INFO: Waiting up to 5m0s for pod "pod-99009657-448c-42be-9da9-adcc6a6ed3eb" in namespace "emptydir-8523" to be "Succeeded or Failed" May 30 00:14:38.312: INFO: Pod "pod-99009657-448c-42be-9da9-adcc6a6ed3eb": Phase="Pending", Reason="", readiness=false. Elapsed: 5.453899ms May 30 00:14:40.316: INFO: Pod "pod-99009657-448c-42be-9da9-adcc6a6ed3eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009196449s May 30 00:14:42.319: INFO: Pod "pod-99009657-448c-42be-9da9-adcc6a6ed3eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01210256s STEP: Saw pod success May 30 00:14:42.319: INFO: Pod "pod-99009657-448c-42be-9da9-adcc6a6ed3eb" satisfied condition "Succeeded or Failed" May 30 00:14:42.321: INFO: Trying to get logs from node latest-worker pod pod-99009657-448c-42be-9da9-adcc6a6ed3eb container test-container: STEP: delete the pod May 30 00:14:42.357: INFO: Waiting for pod pod-99009657-448c-42be-9da9-adcc6a6ed3eb to disappear May 30 00:14:42.367: INFO: Pod pod-99009657-448c-42be-9da9-adcc6a6ed3eb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:14:42.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8523" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":107,"skipped":1554,"failed":0} SSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:14:42.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 30 00:14:42.462: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-9420 I0530 00:14:42.483660 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9420, replica count: 1 I0530 00:14:43.534089 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0530 00:14:44.534380 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0530 00:14:45.534699 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0530 00:14:46.534988 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0530 00:14:47.535198 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 30 00:14:47.664: INFO: Created: latency-svc-6jshv May 30 00:14:47.726: INFO: Got endpoints: latency-svc-6jshv [91.577025ms] May 30 00:14:47.760: INFO: Created: latency-svc-ldjj6 May 30 00:14:47.797: INFO: Got endpoints: latency-svc-ldjj6 [70.361764ms] May 30 00:14:47.856: INFO: Created: latency-svc-26pvr May 30 00:14:47.865: INFO: Got endpoints: latency-svc-26pvr [138.161795ms] May 30 00:14:47.894: INFO: Created: latency-svc-h2wj2 May 30 00:14:47.929: INFO: Got endpoints: latency-svc-h2wj2 [202.167195ms] May 30 00:14:48.011: INFO: Created: latency-svc-s6pfl May 30 00:14:48.067: INFO: Got endpoints: latency-svc-s6pfl [339.915554ms] May 30 00:14:48.070: INFO: Created: latency-svc-zsxq8 May 30 00:14:48.109: INFO: Got endpoints: latency-svc-zsxq8 [382.02491ms] May 30 00:14:48.168: INFO: Created: latency-svc-8c8kj May 30 00:14:48.184: INFO: Got endpoints: latency-svc-8c8kj [457.544303ms] May 30 00:14:48.204: INFO: Created: latency-svc-rzhcl May 30 00:14:48.214: INFO: Got endpoints: latency-svc-rzhcl [487.679148ms] May 30 00:14:48.323: INFO: Created: latency-svc-glg8l May 30 00:14:48.378: INFO: Got endpoints: latency-svc-glg8l [651.561974ms] May 30 00:14:48.408: INFO: Created: latency-svc-gvqj9 May 30 00:14:48.459: INFO: Got endpoints: latency-svc-gvqj9 [732.066051ms] May 30 00:14:48.480: INFO: Created: latency-svc-wh9qp May 30 00:14:48.494: INFO: Got endpoints: latency-svc-wh9qp [767.80647ms] May 30 00:14:48.534: INFO: Created: latency-svc-26bdk May 30 
00:14:48.543: INFO: Got endpoints: latency-svc-26bdk [816.144346ms] May 30 00:14:48.628: INFO: Created: latency-svc-vvz6b May 30 00:14:48.633: INFO: Got endpoints: latency-svc-vvz6b [906.662479ms] May 30 00:14:48.715: INFO: Created: latency-svc-9l8ml May 30 00:14:48.766: INFO: Got endpoints: latency-svc-9l8ml [1.038665125s] May 30 00:14:48.780: INFO: Created: latency-svc-5jsfl May 30 00:14:48.828: INFO: Got endpoints: latency-svc-5jsfl [1.101266975s] May 30 00:14:48.864: INFO: Created: latency-svc-m5q2d May 30 00:14:48.903: INFO: Got endpoints: latency-svc-m5q2d [1.176312571s] May 30 00:14:48.924: INFO: Created: latency-svc-c82ph May 30 00:14:48.972: INFO: Got endpoints: latency-svc-c82ph [1.174692671s] May 30 00:14:49.053: INFO: Created: latency-svc-5258h May 30 00:14:49.094: INFO: Created: latency-svc-jht5p May 30 00:14:49.094: INFO: Got endpoints: latency-svc-5258h [1.228875442s] May 30 00:14:49.122: INFO: Got endpoints: latency-svc-jht5p [1.193556669s] May 30 00:14:49.209: INFO: Created: latency-svc-tnddv May 30 00:14:49.237: INFO: Got endpoints: latency-svc-tnddv [1.169903777s] May 30 00:14:49.260: INFO: Created: latency-svc-278jf May 30 00:14:49.268: INFO: Got endpoints: latency-svc-278jf [1.159214721s] May 30 00:14:49.308: INFO: Created: latency-svc-526ld May 30 00:14:49.364: INFO: Created: latency-svc-tphdt May 30 00:14:49.364: INFO: Got endpoints: latency-svc-526ld [1.179727483s] May 30 00:14:49.386: INFO: Got endpoints: latency-svc-tphdt [1.171623044s] May 30 00:14:49.429: INFO: Created: latency-svc-zqppr May 30 00:14:49.444: INFO: Got endpoints: latency-svc-zqppr [1.06515189s] May 30 00:14:49.508: INFO: Created: latency-svc-m9hfj May 30 00:14:49.516: INFO: Got endpoints: latency-svc-m9hfj [1.057308094s] May 30 00:14:49.542: INFO: Created: latency-svc-jcxkg May 30 00:14:49.553: INFO: Got endpoints: latency-svc-jcxkg [1.05863445s] May 30 00:14:49.578: INFO: Created: latency-svc-zrbkn May 30 00:14:49.590: INFO: Got endpoints: latency-svc-zrbkn [1.046486125s] May 30 00:14:49.658: INFO: Created: latency-svc-8jtln May 30 00:14:49.667: INFO: Got endpoints: latency-svc-8jtln [1.03321367s] May 30 00:14:49.699: INFO: Created: latency-svc-8x868 May 30 00:14:49.715: INFO: Got endpoints: latency-svc-8x868 [949.442841ms] May 30 00:14:49.802: INFO: Created: latency-svc-whm8h May 30 00:14:49.831: INFO: Got endpoints: latency-svc-whm8h [1.002444981s] May 30 00:14:49.831: INFO: Created: latency-svc-db7kz May 30 00:14:49.872: INFO: Got endpoints: latency-svc-db7kz [968.493998ms] May 30 00:14:49.975: INFO: Created: latency-svc-tf5kv May 30 00:14:50.004: INFO: Got endpoints: latency-svc-tf5kv [1.032208467s] May 30 00:14:50.007: INFO: Created: latency-svc-85rxl May 30 00:14:50.034: INFO: Got endpoints: latency-svc-85rxl [939.651344ms] May 30 00:14:50.064: INFO: Created: latency-svc-9rz4w May 30 00:14:50.126: INFO: Got endpoints: latency-svc-9rz4w [1.00336536s] May 30 00:14:50.142: INFO: Created: latency-svc-bbbj2 May 30 00:14:50.159: INFO: Got endpoints: latency-svc-bbbj2 [922.029923ms] May 30 00:14:50.190: INFO: Created: latency-svc-l9gzw May 30 00:14:50.213: INFO: Got endpoints: latency-svc-l9gzw [944.667851ms] May 30 00:14:50.269: INFO: Created: latency-svc-qq95g May 30 00:14:50.288: INFO: Got endpoints: latency-svc-qq95g [923.365105ms] May 30 00:14:50.352: INFO: Created: latency-svc-68k8m May 30 00:14:50.424: INFO: Got endpoints: latency-svc-68k8m [1.038136209s] May 30 00:14:50.430: INFO: Created: latency-svc-xh2l5 May 30 00:14:50.466: INFO: Got endpoints: latency-svc-xh2l5 [1.022195097s] May 30 
00:14:50.568: INFO: Created: latency-svc-d9hxd May 30 00:14:50.598: INFO: Got endpoints: latency-svc-d9hxd [1.081685498s] May 30 00:14:50.630: INFO: Created: latency-svc-nbjm4 May 30 00:14:50.646: INFO: Got endpoints: latency-svc-nbjm4 [1.092891896s] May 30 00:14:50.712: INFO: Created: latency-svc-dvdm2 May 30 00:14:50.730: INFO: Got endpoints: latency-svc-dvdm2 [1.139784235s] May 30 00:14:50.766: INFO: Created: latency-svc-4qnw4 May 30 00:14:50.779: INFO: Got endpoints: latency-svc-4qnw4 [1.11192696s] May 30 00:14:50.861: INFO: Created: latency-svc-jrs57 May 30 00:14:50.867: INFO: Got endpoints: latency-svc-jrs57 [1.151414426s] May 30 00:14:50.910: INFO: Created: latency-svc-j2t8m May 30 00:14:50.927: INFO: Got endpoints: latency-svc-j2t8m [1.096725025s] May 30 00:14:50.952: INFO: Created: latency-svc-lf7f5 May 30 00:14:50.993: INFO: Got endpoints: latency-svc-lf7f5 [1.121388273s] May 30 00:14:51.008: INFO: Created: latency-svc-g7tkf May 30 00:14:51.035: INFO: Got endpoints: latency-svc-g7tkf [1.031174637s] May 30 00:14:51.060: INFO: Created: latency-svc-pzzhm May 30 00:14:51.077: INFO: Got endpoints: latency-svc-pzzhm [1.043787259s] May 30 00:14:51.131: INFO: Created: latency-svc-mdn8v May 30 00:14:51.168: INFO: Got endpoints: latency-svc-mdn8v [1.041661936s] May 30 00:14:51.212: INFO: Created: latency-svc-d9n54 May 30 00:14:51.312: INFO: Got endpoints: latency-svc-d9n54 [1.153461539s] May 30 00:14:51.317: INFO: Created: latency-svc-wkvzn May 30 00:14:51.325: INFO: Got endpoints: latency-svc-wkvzn [1.111909912s] May 30 00:14:51.354: INFO: Created: latency-svc-s5d4s May 30 00:14:51.383: INFO: Got endpoints: latency-svc-s5d4s [1.095771605s] May 30 00:14:51.454: INFO: Created: latency-svc-gxkft May 30 00:14:51.458: INFO: Got endpoints: latency-svc-gxkft [1.033815176s] May 30 00:14:51.491: INFO: Created: latency-svc-g4wsg May 30 00:14:51.506: INFO: Got endpoints: latency-svc-g4wsg [1.040194723s] May 30 00:14:51.528: INFO: Created: latency-svc-qph55 May 30 00:14:51.542: INFO: Got endpoints: latency-svc-qph55 [944.371533ms] May 30 00:14:51.592: INFO: Created: latency-svc-7xh7d May 30 00:14:51.603: INFO: Got endpoints: latency-svc-7xh7d [956.410067ms] May 30 00:14:51.629: INFO: Created: latency-svc-zz7dj May 30 00:14:51.648: INFO: Got endpoints: latency-svc-zz7dj [918.277639ms] May 30 00:14:51.683: INFO: Created: latency-svc-b9wbv May 30 00:14:51.760: INFO: Got endpoints: latency-svc-b9wbv [980.814496ms] May 30 00:14:51.763: INFO: Created: latency-svc-7spkv May 30 00:14:51.766: INFO: Got endpoints: latency-svc-7spkv [898.979066ms] May 30 00:14:51.815: INFO: Created: latency-svc-2vf7m May 30 00:14:51.826: INFO: Got endpoints: latency-svc-2vf7m [898.878391ms] May 30 00:14:51.851: INFO: Created: latency-svc-nkcs4 May 30 00:14:51.921: INFO: Got endpoints: latency-svc-nkcs4 [928.040905ms] May 30 00:14:51.923: INFO: Created: latency-svc-vgsj2 May 30 00:14:51.929: INFO: Got endpoints: latency-svc-vgsj2 [893.503324ms] May 30 00:14:51.953: INFO: Created: latency-svc-xjxqt May 30 00:14:51.972: INFO: Got endpoints: latency-svc-xjxqt [894.184836ms] May 30 00:14:52.016: INFO: Created: latency-svc-4dps6 May 30 00:14:52.071: INFO: Got endpoints: latency-svc-4dps6 [903.337538ms] May 30 00:14:52.103: INFO: Created: latency-svc-xzq86 May 30 00:14:52.123: INFO: Got endpoints: latency-svc-xzq86 [810.923065ms] May 30 00:14:52.221: INFO: Created: latency-svc-4vx2v May 30 00:14:52.248: INFO: Got endpoints: latency-svc-4vx2v [923.316425ms] May 30 00:14:52.250: INFO: Created: latency-svc-nlpzx May 30 00:14:52.261: 
INFO: Got endpoints: latency-svc-nlpzx [877.652733ms] May 30 00:14:52.283: INFO: Created: latency-svc-fgpkf May 30 00:14:52.309: INFO: Got endpoints: latency-svc-fgpkf [851.44924ms] May 30 00:14:52.388: INFO: Created: latency-svc-dcbc2 May 30 00:14:52.394: INFO: Got endpoints: latency-svc-dcbc2 [887.647494ms] May 30 00:14:52.438: INFO: Created: latency-svc-6z8rf May 30 00:14:52.487: INFO: Got endpoints: latency-svc-6z8rf [945.140353ms] May 30 00:14:52.550: INFO: Created: latency-svc-nmhsb May 30 00:14:52.559: INFO: Got endpoints: latency-svc-nmhsb [956.33729ms] May 30 00:14:52.614: INFO: Created: latency-svc-k47tk May 30 00:14:52.631: INFO: Got endpoints: latency-svc-k47tk [982.999326ms] May 30 00:14:52.724: INFO: Created: latency-svc-h2xfb May 30 00:14:52.727: INFO: Got endpoints: latency-svc-h2xfb [967.150839ms] May 30 00:14:52.758: INFO: Created: latency-svc-k8qsd May 30 00:14:52.782: INFO: Got endpoints: latency-svc-k8qsd [1.016554772s] May 30 00:14:52.805: INFO: Created: latency-svc-6kzhh May 30 00:14:52.818: INFO: Got endpoints: latency-svc-6kzhh [992.021009ms] May 30 00:14:52.878: INFO: Created: latency-svc-4fbgk May 30 00:14:52.925: INFO: Got endpoints: latency-svc-4fbgk [1.003919366s] May 30 00:14:52.999: INFO: Created: latency-svc-k2q7v May 30 00:14:53.002: INFO: Got endpoints: latency-svc-k2q7v [1.073221099s] May 30 00:14:53.063: INFO: Created: latency-svc-vhr5n May 30 00:14:53.178: INFO: Got endpoints: latency-svc-vhr5n [1.20671715s] May 30 00:14:53.182: INFO: Created: latency-svc-znf7p May 30 00:14:53.191: INFO: Got endpoints: latency-svc-znf7p [1.120211736s] May 30 00:14:53.249: INFO: Created: latency-svc-n7nz9 May 30 00:14:53.276: INFO: Got endpoints: latency-svc-n7nz9 [1.152963424s] May 30 00:14:53.359: INFO: Created: latency-svc-x7bvt May 30 00:14:53.367: INFO: Got endpoints: latency-svc-x7bvt [1.118773867s] May 30 00:14:53.387: INFO: Created: latency-svc-xlbjn May 30 00:14:53.402: INFO: Got endpoints: latency-svc-xlbjn [1.141340396s] May 30 00:14:53.431: INFO: Created: latency-svc-24wzw May 30 00:14:53.439: INFO: Got endpoints: latency-svc-24wzw [1.12900924s] May 30 00:14:53.508: INFO: Created: latency-svc-5n7qz May 30 00:14:53.512: INFO: Got endpoints: latency-svc-5n7qz [1.117856427s] May 30 00:14:53.543: INFO: Created: latency-svc-7gvzw May 30 00:14:53.560: INFO: Got endpoints: latency-svc-7gvzw [1.072361329s] May 30 00:14:53.604: INFO: Created: latency-svc-rkr2w May 30 00:14:53.669: INFO: Got endpoints: latency-svc-rkr2w [1.110109655s] May 30 00:14:53.717: INFO: Created: latency-svc-t9dft May 30 00:14:53.729: INFO: Got endpoints: latency-svc-t9dft [1.097535848s] May 30 00:14:53.802: INFO: Created: latency-svc-tkzwc May 30 00:14:53.807: INFO: Got endpoints: latency-svc-tkzwc [1.079791663s] May 30 00:14:53.831: INFO: Created: latency-svc-z7fbc May 30 00:14:53.849: INFO: Got endpoints: latency-svc-z7fbc [1.067045758s] May 30 00:14:53.982: INFO: Created: latency-svc-vmb8p May 30 00:14:53.986: INFO: Got endpoints: latency-svc-vmb8p [1.167296258s] May 30 00:14:54.041: INFO: Created: latency-svc-xjkmg May 30 00:14:54.078: INFO: Got endpoints: latency-svc-xjkmg [1.153006311s] May 30 00:14:54.137: INFO: Created: latency-svc-pff6f May 30 00:14:54.140: INFO: Got endpoints: latency-svc-pff6f [1.137779585s] May 30 00:14:54.173: INFO: Created: latency-svc-6fnqj May 30 00:14:54.188: INFO: Got endpoints: latency-svc-6fnqj [1.00936315s] May 30 00:14:54.215: INFO: Created: latency-svc-btkpj May 30 00:14:54.229: INFO: Got endpoints: latency-svc-btkpj [1.038146913s] May 30 00:14:54.286: 
INFO: Created: latency-svc-g7h98 May 30 00:14:54.317: INFO: Created: latency-svc-bzz94 May 30 00:14:54.318: INFO: Got endpoints: latency-svc-g7h98 [1.041407788s] May 30 00:14:54.341: INFO: Got endpoints: latency-svc-bzz94 [974.008261ms] May 30 00:14:54.378: INFO: Created: latency-svc-mnpxs May 30 00:14:54.424: INFO: Got endpoints: latency-svc-mnpxs [1.021348953s] May 30 00:14:54.450: INFO: Created: latency-svc-kg976 May 30 00:14:54.465: INFO: Got endpoints: latency-svc-kg976 [1.026215864s] May 30 00:14:54.509: INFO: Created: latency-svc-ggmtg May 30 00:14:54.587: INFO: Got endpoints: latency-svc-ggmtg [1.075183529s] May 30 00:14:54.589: INFO: Created: latency-svc-krj5s May 30 00:14:54.617: INFO: Got endpoints: latency-svc-krj5s [1.057271098s] May 30 00:14:54.665: INFO: Created: latency-svc-8qnrm May 30 00:14:54.682: INFO: Got endpoints: latency-svc-8qnrm [1.012742482s] May 30 00:14:54.755: INFO: Created: latency-svc-vvc6m May 30 00:14:54.791: INFO: Got endpoints: latency-svc-vvc6m [1.062719148s] May 30 00:14:54.833: INFO: Created: latency-svc-qcjgw May 30 00:14:54.915: INFO: Got endpoints: latency-svc-qcjgw [1.108568024s] May 30 00:14:54.935: INFO: Created: latency-svc-bjjkk May 30 00:14:54.957: INFO: Got endpoints: latency-svc-bjjkk [1.107584667s] May 30 00:14:54.984: INFO: Created: latency-svc-jhs5s May 30 00:14:54.999: INFO: Got endpoints: latency-svc-jhs5s [1.013412399s] May 30 00:14:55.067: INFO: Created: latency-svc-8m5q9 May 30 00:14:55.072: INFO: Got endpoints: latency-svc-8m5q9 [993.208207ms] May 30 00:14:55.109: INFO: Created: latency-svc-tktkq May 30 00:14:55.120: INFO: Got endpoints: latency-svc-tktkq [979.942152ms] May 30 00:14:55.151: INFO: Created: latency-svc-hwcqp May 30 00:14:55.233: INFO: Got endpoints: latency-svc-hwcqp [1.044572559s] May 30 00:14:55.265: INFO: Created: latency-svc-fgt6q May 30 00:14:55.283: INFO: Got endpoints: latency-svc-fgt6q [1.053430727s] May 30 00:14:55.307: INFO: Created: latency-svc-58hd9 May 30 00:14:55.319: INFO: Got endpoints: latency-svc-58hd9 [1.00147305s] May 30 00:14:55.377: INFO: Created: latency-svc-hdz7z May 30 00:14:55.404: INFO: Got endpoints: latency-svc-hdz7z [1.063210056s] May 30 00:14:55.428: INFO: Created: latency-svc-vt8qb May 30 00:14:55.452: INFO: Got endpoints: latency-svc-vt8qb [1.028618181s] May 30 00:14:55.475: INFO: Created: latency-svc-2dkz4 May 30 00:14:55.550: INFO: Got endpoints: latency-svc-2dkz4 [1.085225226s] May 30 00:14:55.552: INFO: Created: latency-svc-v7vkh May 30 00:14:55.560: INFO: Got endpoints: latency-svc-v7vkh [973.404952ms] May 30 00:14:55.613: INFO: Created: latency-svc-p452w May 30 00:14:55.644: INFO: Got endpoints: latency-svc-p452w [1.026363168s] May 30 00:14:55.700: INFO: Created: latency-svc-k7vtl May 30 00:14:55.706: INFO: Got endpoints: latency-svc-k7vtl [1.023957705s] May 30 00:14:55.727: INFO: Created: latency-svc-9wvdm May 30 00:14:55.736: INFO: Got endpoints: latency-svc-9wvdm [944.695975ms] May 30 00:14:55.850: INFO: Created: latency-svc-bsgvq May 30 00:14:55.862: INFO: Got endpoints: latency-svc-bsgvq [946.741061ms] May 30 00:14:55.890: INFO: Created: latency-svc-qhqz4 May 30 00:14:55.905: INFO: Got endpoints: latency-svc-qhqz4 [947.890368ms] May 30 00:14:55.925: INFO: Created: latency-svc-b5tmn May 30 00:14:55.941: INFO: Got endpoints: latency-svc-b5tmn [942.034433ms] May 30 00:14:55.999: INFO: Created: latency-svc-9cq6n May 30 00:14:56.002: INFO: Got endpoints: latency-svc-9cq6n [930.909891ms] May 30 00:14:56.155: INFO: Created: latency-svc-mhmcn May 30 00:14:56.159: INFO: Got 
endpoints: latency-svc-mhmcn [1.038898508s] May 30 00:14:56.219: INFO: Created: latency-svc-tbsvp May 30 00:14:56.250: INFO: Got endpoints: latency-svc-tbsvp [1.017307586s] May 30 00:14:56.311: INFO: Created: latency-svc-fzmkx May 30 00:14:56.314: INFO: Got endpoints: latency-svc-fzmkx [1.031327129s] May 30 00:14:56.363: INFO: Created: latency-svc-nfd4x May 30 00:14:56.381: INFO: Got endpoints: latency-svc-nfd4x [1.061651557s] May 30 00:14:56.411: INFO: Created: latency-svc-gnbcq May 30 00:14:56.466: INFO: Got endpoints: latency-svc-gnbcq [1.061538276s] May 30 00:14:56.501: INFO: Created: latency-svc-wr8ml May 30 00:14:56.526: INFO: Got endpoints: latency-svc-wr8ml [1.073228881s] May 30 00:14:56.634: INFO: Created: latency-svc-d6shb May 30 00:14:56.649: INFO: Got endpoints: latency-svc-d6shb [1.099019091s] May 30 00:14:56.675: INFO: Created: latency-svc-7b56c May 30 00:14:56.687: INFO: Got endpoints: latency-svc-7b56c [1.126147277s] May 30 00:14:56.778: INFO: Created: latency-svc-mzfmd May 30 00:14:56.781: INFO: Got endpoints: latency-svc-mzfmd [1.137136002s] May 30 00:14:56.807: INFO: Created: latency-svc-42rf4 May 30 00:14:56.824: INFO: Got endpoints: latency-svc-42rf4 [1.118304716s] May 30 00:14:56.861: INFO: Created: latency-svc-kdnmk May 30 00:14:56.951: INFO: Got endpoints: latency-svc-kdnmk [1.214876079s] May 30 00:14:56.952: INFO: Created: latency-svc-tskgd May 30 00:14:56.963: INFO: Got endpoints: latency-svc-tskgd [1.101201869s] May 30 00:14:57.011: INFO: Created: latency-svc-q4dmg May 30 00:14:57.024: INFO: Got endpoints: latency-svc-q4dmg [1.118727404s] May 30 00:14:57.113: INFO: Created: latency-svc-kphh9 May 30 00:14:57.119: INFO: Got endpoints: latency-svc-kphh9 [1.178102275s] May 30 00:14:57.144: INFO: Created: latency-svc-rr6xv May 30 00:14:57.156: INFO: Got endpoints: latency-svc-rr6xv [1.15363859s] May 30 00:14:57.280: INFO: Created: latency-svc-lswvq May 30 00:14:57.311: INFO: Got endpoints: latency-svc-lswvq [1.147512731s] May 30 00:14:57.313: INFO: Created: latency-svc-5c4qm May 30 00:14:57.325: INFO: Got endpoints: latency-svc-5c4qm [1.075028418s] May 30 00:14:57.353: INFO: Created: latency-svc-lt2hv May 30 00:14:57.367: INFO: Got endpoints: latency-svc-lt2hv [1.052820348s] May 30 00:14:57.437: INFO: Created: latency-svc-wsk2w May 30 00:14:57.440: INFO: Got endpoints: latency-svc-wsk2w [1.058647661s] May 30 00:14:57.473: INFO: Created: latency-svc-2gmsp May 30 00:14:57.482: INFO: Got endpoints: latency-svc-2gmsp [1.016117556s] May 30 00:14:57.509: INFO: Created: latency-svc-5plsw May 30 00:14:57.525: INFO: Got endpoints: latency-svc-5plsw [998.956097ms] May 30 00:14:57.598: INFO: Created: latency-svc-h8vpw May 30 00:14:57.613: INFO: Got endpoints: latency-svc-h8vpw [963.330781ms] May 30 00:14:57.640: INFO: Created: latency-svc-hhdz6 May 30 00:14:57.652: INFO: Got endpoints: latency-svc-hhdz6 [965.578237ms] May 30 00:14:57.677: INFO: Created: latency-svc-pkqp6 May 30 00:14:57.735: INFO: Got endpoints: latency-svc-pkqp6 [954.688499ms] May 30 00:14:57.749: INFO: Created: latency-svc-9drmh May 30 00:14:57.767: INFO: Got endpoints: latency-svc-9drmh [942.426737ms] May 30 00:14:57.797: INFO: Created: latency-svc-9frhh May 30 00:14:57.815: INFO: Got endpoints: latency-svc-9frhh [864.10415ms] May 30 00:14:57.874: INFO: Created: latency-svc-dnf46 May 30 00:14:57.877: INFO: Got endpoints: latency-svc-dnf46 [913.647086ms] May 30 00:14:57.905: INFO: Created: latency-svc-d4cgz May 30 00:14:57.918: INFO: Got endpoints: latency-svc-d4cgz [894.444178ms] May 30 00:14:57.947: INFO: 
Created: latency-svc-5wmsx May 30 00:14:58.011: INFO: Got endpoints: latency-svc-5wmsx [891.761171ms] May 30 00:14:58.031: INFO: Created: latency-svc-5wksm May 30 00:14:58.044: INFO: Got endpoints: latency-svc-5wksm [888.261135ms] May 30 00:14:58.066: INFO: Created: latency-svc-6g2l2 May 30 00:14:58.109: INFO: Got endpoints: latency-svc-6g2l2 [797.069324ms] May 30 00:14:58.167: INFO: Created: latency-svc-nt5jv May 30 00:14:58.171: INFO: Got endpoints: latency-svc-nt5jv [846.326391ms] May 30 00:14:58.199: INFO: Created: latency-svc-v9879 May 30 00:14:58.214: INFO: Got endpoints: latency-svc-v9879 [847.232919ms] May 30 00:14:58.259: INFO: Created: latency-svc-4wwps May 30 00:14:58.323: INFO: Got endpoints: latency-svc-4wwps [882.994452ms] May 30 00:14:58.324: INFO: Created: latency-svc-kc2g5 May 30 00:14:58.338: INFO: Got endpoints: latency-svc-kc2g5 [856.028551ms] May 30 00:14:58.379: INFO: Created: latency-svc-ppxpg May 30 00:14:58.395: INFO: Got endpoints: latency-svc-ppxpg [870.377909ms] May 30 00:14:58.416: INFO: Created: latency-svc-mbxqq May 30 00:14:58.516: INFO: Got endpoints: latency-svc-mbxqq [903.906083ms] May 30 00:14:58.547: INFO: Created: latency-svc-pcmst May 30 00:14:58.571: INFO: Got endpoints: latency-svc-pcmst [918.457778ms] May 30 00:14:58.646: INFO: Created: latency-svc-82x47 May 30 00:14:58.673: INFO: Got endpoints: latency-svc-82x47 [937.747065ms] May 30 00:14:58.714: INFO: Created: latency-svc-4q9hb May 30 00:14:58.727: INFO: Got endpoints: latency-svc-4q9hb [960.048516ms] May 30 00:14:58.784: INFO: Created: latency-svc-8nwzc May 30 00:14:58.817: INFO: Got endpoints: latency-svc-8nwzc [1.001421183s] May 30 00:14:58.818: INFO: Created: latency-svc-nq8rb May 30 00:14:58.853: INFO: Got endpoints: latency-svc-nq8rb [976.234499ms] May 30 00:14:58.945: INFO: Created: latency-svc-c2nbq May 30 00:14:58.948: INFO: Got endpoints: latency-svc-c2nbq [1.030292412s] May 30 00:14:58.991: INFO: Created: latency-svc-xbltx May 30 00:14:59.005: INFO: Got endpoints: latency-svc-xbltx [993.600123ms] May 30 00:14:59.039: INFO: Created: latency-svc-jpg9p May 30 00:14:59.082: INFO: Got endpoints: latency-svc-jpg9p [1.03796059s] May 30 00:14:59.123: INFO: Created: latency-svc-7vp5s May 30 00:14:59.137: INFO: Got endpoints: latency-svc-7vp5s [1.028808679s] May 30 00:14:59.171: INFO: Created: latency-svc-gclcj May 30 00:14:59.227: INFO: Got endpoints: latency-svc-gclcj [1.055655842s] May 30 00:14:59.243: INFO: Created: latency-svc-gkdr9 May 30 00:14:59.269: INFO: Got endpoints: latency-svc-gkdr9 [1.054435183s] May 30 00:14:59.303: INFO: Created: latency-svc-nk6n2 May 30 00:14:59.400: INFO: Got endpoints: latency-svc-nk6n2 [1.077336283s] May 30 00:14:59.402: INFO: Created: latency-svc-r6g5z May 30 00:14:59.414: INFO: Got endpoints: latency-svc-r6g5z [1.076203857s] May 30 00:14:59.434: INFO: Created: latency-svc-847tr May 30 00:14:59.458: INFO: Got endpoints: latency-svc-847tr [1.06304734s] May 30 00:14:59.488: INFO: Created: latency-svc-5pgtm May 30 00:14:59.556: INFO: Got endpoints: latency-svc-5pgtm [1.039185721s] May 30 00:14:59.558: INFO: Created: latency-svc-fxd7g May 30 00:14:59.579: INFO: Got endpoints: latency-svc-fxd7g [1.007974022s] May 30 00:14:59.624: INFO: Created: latency-svc-dsjft May 30 00:14:59.706: INFO: Created: latency-svc-fb2ld May 30 00:14:59.706: INFO: Got endpoints: latency-svc-dsjft [1.032885388s] May 30 00:14:59.710: INFO: Got endpoints: latency-svc-fb2ld [982.948926ms] May 30 00:14:59.765: INFO: Created: latency-svc-cjjh2 May 30 00:14:59.782: INFO: Got endpoints: 
latency-svc-cjjh2 [965.02322ms] May 30 00:14:59.802: INFO: Created: latency-svc-22d5v May 30 00:14:59.862: INFO: Got endpoints: latency-svc-22d5v [1.008534129s] May 30 00:14:59.885: INFO: Created: latency-svc-58wrp May 30 00:14:59.921: INFO: Got endpoints: latency-svc-58wrp [972.310667ms] May 30 00:15:00.031: INFO: Created: latency-svc-7m7kd May 30 00:15:00.033: INFO: Got endpoints: latency-svc-7m7kd [1.02765334s] May 30 00:15:00.083: INFO: Created: latency-svc-4jvr4 May 30 00:15:00.103: INFO: Got endpoints: latency-svc-4jvr4 [1.020010697s] May 30 00:15:00.125: INFO: Created: latency-svc-44wq8 May 30 00:15:00.191: INFO: Got endpoints: latency-svc-44wq8 [1.053977532s] May 30 00:15:00.196: INFO: Created: latency-svc-xcltp May 30 00:15:00.204: INFO: Got endpoints: latency-svc-xcltp [976.592365ms] May 30 00:15:00.233: INFO: Created: latency-svc-dbwcg May 30 00:15:00.247: INFO: Got endpoints: latency-svc-dbwcg [977.615319ms] May 30 00:15:00.275: INFO: Created: latency-svc-wn748 May 30 00:15:00.330: INFO: Got endpoints: latency-svc-wn748 [929.581765ms] May 30 00:15:00.359: INFO: Created: latency-svc-49wff May 30 00:15:00.367: INFO: Got endpoints: latency-svc-49wff [952.75404ms] May 30 00:15:00.407: INFO: Created: latency-svc-zjl6p May 30 00:15:00.421: INFO: Got endpoints: latency-svc-zjl6p [962.695571ms] May 30 00:15:00.490: INFO: Created: latency-svc-2cvkr May 30 00:15:00.506: INFO: Got endpoints: latency-svc-2cvkr [950.26534ms] May 30 00:15:00.557: INFO: Created: latency-svc-pxk5z May 30 00:15:00.579: INFO: Got endpoints: latency-svc-pxk5z [999.792908ms] May 30 00:15:00.640: INFO: Created: latency-svc-z4t5c May 30 00:15:00.644: INFO: Got endpoints: latency-svc-z4t5c [937.508761ms] May 30 00:15:00.681: INFO: Created: latency-svc-8tphf May 30 00:15:00.694: INFO: Got endpoints: latency-svc-8tphf [983.702128ms] May 30 00:15:00.718: INFO: Created: latency-svc-7xpwb May 30 00:15:00.736: INFO: Got endpoints: latency-svc-7xpwb [953.708376ms] May 30 00:15:00.801: INFO: Created: latency-svc-p96n2 May 30 00:15:00.809: INFO: Got endpoints: latency-svc-p96n2 [946.898223ms] May 30 00:15:00.833: INFO: Created: latency-svc-sppr6 May 30 00:15:00.854: INFO: Got endpoints: latency-svc-sppr6 [933.306299ms] May 30 00:15:00.886: INFO: Created: latency-svc-8slm2 May 30 00:15:00.976: INFO: Got endpoints: latency-svc-8slm2 [943.067444ms] May 30 00:15:00.978: INFO: Created: latency-svc-bszmw May 30 00:15:00.995: INFO: Got endpoints: latency-svc-bszmw [892.362088ms] May 30 00:15:01.037: INFO: Created: latency-svc-68tdp May 30 00:15:01.062: INFO: Got endpoints: latency-svc-68tdp [870.744377ms] May 30 00:15:01.119: INFO: Created: latency-svc-c4klh May 30 00:15:01.130: INFO: Got endpoints: latency-svc-c4klh [925.960241ms] May 30 00:15:01.158: INFO: Created: latency-svc-9469j May 30 00:15:01.173: INFO: Got endpoints: latency-svc-9469j [925.928928ms] May 30 00:15:01.317: INFO: Created: latency-svc-74zz2 May 30 00:15:01.355: INFO: Got endpoints: latency-svc-74zz2 [1.025009724s] May 30 00:15:01.355: INFO: Created: latency-svc-b7qt5 May 30 00:15:01.371: INFO: Got endpoints: latency-svc-b7qt5 [1.003762943s] May 30 00:15:01.371: INFO: Latencies: [70.361764ms 138.161795ms 202.167195ms 339.915554ms 382.02491ms 457.544303ms 487.679148ms 651.561974ms 732.066051ms 767.80647ms 797.069324ms 810.923065ms 816.144346ms 846.326391ms 847.232919ms 851.44924ms 856.028551ms 864.10415ms 870.377909ms 870.744377ms 877.652733ms 882.994452ms 887.647494ms 888.261135ms 891.761171ms 892.362088ms 893.503324ms 894.184836ms 894.444178ms 898.878391ms 
898.979066ms 903.337538ms 903.906083ms 906.662479ms 913.647086ms 918.277639ms 918.457778ms 922.029923ms 923.316425ms 923.365105ms 925.928928ms 925.960241ms 928.040905ms 929.581765ms 930.909891ms 933.306299ms 937.508761ms 937.747065ms 939.651344ms 942.034433ms 942.426737ms 943.067444ms 944.371533ms 944.667851ms 944.695975ms 945.140353ms 946.741061ms 946.898223ms 947.890368ms 949.442841ms 950.26534ms 952.75404ms 953.708376ms 954.688499ms 956.33729ms 956.410067ms 960.048516ms 962.695571ms 963.330781ms 965.02322ms 965.578237ms 967.150839ms 968.493998ms 972.310667ms 973.404952ms 974.008261ms 976.234499ms 976.592365ms 977.615319ms 979.942152ms 980.814496ms 982.948926ms 982.999326ms 983.702128ms 992.021009ms 993.208207ms 993.600123ms 998.956097ms 999.792908ms 1.001421183s 1.00147305s 1.002444981s 1.00336536s 1.003762943s 1.003919366s 1.007974022s 1.008534129s 1.00936315s 1.012742482s 1.013412399s 1.016117556s 1.016554772s 1.017307586s 1.020010697s 1.021348953s 1.022195097s 1.023957705s 1.025009724s 1.026215864s 1.026363168s 1.02765334s 1.028618181s 1.028808679s 1.030292412s 1.031174637s 1.031327129s 1.032208467s 1.032885388s 1.03321367s 1.033815176s 1.03796059s 1.038136209s 1.038146913s 1.038665125s 1.038898508s 1.039185721s 1.040194723s 1.041407788s 1.041661936s 1.043787259s 1.044572559s 1.046486125s 1.052820348s 1.053430727s 1.053977532s 1.054435183s 1.055655842s 1.057271098s 1.057308094s 1.05863445s 1.058647661s 1.061538276s 1.061651557s 1.062719148s 1.06304734s 1.063210056s 1.06515189s 1.067045758s 1.072361329s 1.073221099s 1.073228881s 1.075028418s 1.075183529s 1.076203857s 1.077336283s 1.079791663s 1.081685498s 1.085225226s 1.092891896s 1.095771605s 1.096725025s 1.097535848s 1.099019091s 1.101201869s 1.101266975s 1.107584667s 1.108568024s 1.110109655s 1.111909912s 1.11192696s 1.117856427s 1.118304716s 1.118727404s 1.118773867s 1.120211736s 1.121388273s 1.126147277s 1.12900924s 1.137136002s 1.137779585s 1.139784235s 1.141340396s 1.147512731s 1.151414426s 1.152963424s 1.153006311s 1.153461539s 1.15363859s 1.159214721s 1.167296258s 1.169903777s 1.171623044s 1.174692671s 1.176312571s 1.178102275s 1.179727483s 1.193556669s 1.20671715s 1.214876079s 1.228875442s] May 30 00:15:01.371: INFO: 50 %ile: 1.016117556s May 30 00:15:01.371: INFO: 90 %ile: 1.139784235s May 30 00:15:01.371: INFO: 99 %ile: 1.214876079s May 30 00:15:01.371: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:15:01.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-9420" for this suite. 
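------------------------------
For context on the percentile lines above: the test records one latency sample per service (the time from service creation to the first ready endpoint), sorts all 200 samples ascending, and reports nearest-rank percentiles over the sorted list — here 50 %ile 1.016117556s, 90 %ile 1.139784235s, 99 %ile 1.214876079s. A minimal Go sketch of that summary step, fed a handful of the samples from this run; the e2e framework's exact indexing/rounding may differ slightly:

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile picks the value at index p% of the sorted sample count,
// a simple nearest-rank estimate.
func percentile(sorted []time.Duration, p int) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	i := len(sorted) * p / 100
	if i >= len(sorted) {
		i = len(sorted) - 1
	}
	return sorted[i]
}

func main() {
	// A few of the 200 endpoint-availability samples from the run above,
	// in nanoseconds, for illustration only.
	samples := []time.Duration{
		891761171 * time.Nanosecond,
		888261135 * time.Nanosecond,
		797069324 * time.Nanosecond,
		846326391 * time.Nanosecond,
		1030292412 * time.Nanosecond,
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []int{50, 90, 99} {
		fmt.Printf("%d %%ile: %v\n", p, percentile(samples, p))
	}
}
------------------------------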
• [SLOW TEST:19.018 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":288,"completed":108,"skipped":1558,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:15:01.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:15:01.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1278" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":288,"completed":109,"skipped":1567,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:15:01.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 30 00:15:01.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 30 00:15:02.257: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-30T00:15:02Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-30T00:15:02Z]] name:name1 resourceVersion:8738750 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:563a4667-e049-422a-89de-f5f5bacf676e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 30 00:15:12.269: INFO: Got : ADDED 
&{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-30T00:15:12Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-30T00:15:12Z]] name:name2 resourceVersion:8739222 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:c0adb658-2e77-44bf-8c62-96986c40c031] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 30 00:15:22.302: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-30T00:15:02Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-30T00:15:22Z]] name:name1 resourceVersion:8739711 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:563a4667-e049-422a-89de-f5f5bacf676e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 30 00:15:32.310: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-30T00:15:12Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-30T00:15:32Z]] name:name2 resourceVersion:8739970 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:c0adb658-2e77-44bf-8c62-96986c40c031] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR May 30 00:15:42.318: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-30T00:15:02Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-30T00:15:22Z]] name:name1 resourceVersion:8740000 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:563a4667-e049-422a-89de-f5f5bacf676e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 30 00:15:52.328: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-30T00:15:12Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-30T00:15:32Z]] name:name2 resourceVersion:8740030 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:c0adb658-2e77-44bf-8c62-96986c40c031] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:16:02.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-4938" for this suite. 
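------------------------------
The ADDED/MODIFIED/DELETED events logged above come from an ordinary API watch on the custom resource. A minimal client-go sketch that would print the same event stream, assuming the test's CRD (group mygroup.example.com, version v1beta1, cluster-scoped resource noxus) is installed and the kubeconfig sits at the path used in this run:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GVR matching the CRD exercised by the test above.
	gvr := schema.GroupVersionResource{
		Group:    "mygroup.example.com",
		Version:  "v1beta1",
		Resource: "noxus",
	}
	// Watch the cluster-scoped custom objects and print each event,
	// mirroring the "Got : ADDED/MODIFIED/DELETED" lines in the log.
	w, err := dyn.Resource(gvr).Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
}
------------------------------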
• [SLOW TEST:61.303 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":288,"completed":110,"skipped":1578,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:16:02.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6422.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6422.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 30 00:16:08.974: INFO: DNS probes using dns-6422/dns-test-9e080588-8737-45c7-b3f0-470066da6969 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:16:09.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6422" for this suite. • [SLOW TEST:6.200 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":288,"completed":111,"skipped":1602,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:16:09.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0530 00:16:19.396950 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 30 00:16:19.397: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:16:19.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4253" for this suite. • [SLOW TEST:10.354 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":288,"completed":112,"skipped":1612,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:16:19.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1311 STEP: creating the pod May 30 00:16:19.495: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9244' May 30 00:16:24.994: INFO: stderr: "" May 30 00:16:24.994: INFO: stdout: "pod/pause created\n" May 30 00:16:24.994: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 30 00:16:24.994: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9244" to be "running and ready" May 30 00:16:25.054: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 59.893489ms May 30 00:16:27.058: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063552534s May 30 00:16:29.063: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.068593511s May 30 00:16:29.063: INFO: Pod "pause" satisfied condition "running and ready" May 30 00:16:29.063: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod May 30 00:16:29.063: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-9244' May 30 00:16:29.165: INFO: stderr: "" May 30 00:16:29.165: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 30 00:16:29.165: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9244' May 30 00:16:29.312: INFO: stderr: "" May 30 00:16:29.312: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod May 30 00:16:29.312: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-9244' May 30 00:16:29.424: INFO: stderr: "" May 30 00:16:29.424: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 30 00:16:29.424: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9244' May 30 00:16:29.536: INFO: stderr: "" May 30 00:16:29.536: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1318 STEP: using delete to clean up resources May 30 00:16:29.536: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9244' May 30 00:16:29.683: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 30 00:16:29.683: INFO: stdout: "pod \"pause\" force deleted\n" May 30 00:16:29.683: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-9244' May 30 00:16:29.781: INFO: stderr: "No resources found in kubectl-9244 namespace.\n" May 30 00:16:29.781: INFO: stdout: "" May 30 00:16:29.781: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-9244 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 30 00:16:29.882: INFO: stderr: "" May 30 00:16:29.882: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:16:29.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9244" for this suite. 
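------------------------------
The kubectl invocations above add a label and then remove it with the `testing-label-` trailing-dash form. The same effect can be had programmatically with a JSON merge patch, where setting a label key to null deletes it. A sketch against the pod and namespace from this run (illustrative; not the test's own code, which shells out to kubectl as logged):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pods := cs.CoreV1().Pods("kubectl-9244")
	ctx := context.TODO()

	// Equivalent of `kubectl label pods pause testing-label=testing-label-value`.
	add := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
	if _, err := pods.Patch(ctx, "pause", types.MergePatchType, add, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// Equivalent of `kubectl label pods pause testing-label-`:
	// a null value in a merge patch removes the key.
	del := []byte(`{"metadata":{"labels":{"testing-label":null}}}`)
	if _, err := pods.Patch(ctx, "pause", types.MergePatchType, del, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}
------------------------------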
• [SLOW TEST:10.483 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":288,"completed":113,"skipped":1699,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:16:29.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 30 00:16:34.510: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:16:34.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2200" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":288,"completed":114,"skipped":1780,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:16:34.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:80 May 30 00:16:34.881: INFO: Waiting up to 1m0s for all nodes to be ready May 30 00:17:34.903: INFO: Waiting for terminating namespaces to be deleted... 
[BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:17:34.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:467 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. May 30 00:17:39.142: INFO: found a healthy node: latest-worker2 [It] runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 30 00:17:59.315: INFO: pods created so far: [1 1 1] May 30 00:17:59.315: INFO: length of pods created so far: 3 May 30 00:18:09.324: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:18:16.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-6336" for this suite. [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:439 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:18:16.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-175" for this suite. 
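------------------------------
The preemption path exercised above hinges on pod priority: a pod referencing a higher PriorityClass may evict lower-priority pods when a node is full, which is why the "pods created so far" counts shift as the preemptor lands. A sketch of the two API calls involved; the class name, value, and image below are illustrative, not the ones the test uses:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	// A high-priority class; pods using it may preempt lower-priority
	// pods when nodes are out of capacity.
	pc := &schedulingv1.PriorityClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "example-high-priority"},
		Value:       1000,
		Description: "illustrative high-priority class",
	}
	if _, err := cs.SchedulingV1().PriorityClasses().Create(ctx, pc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// A pod that schedules at that priority.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "preemptor", Namespace: "default"},
		Spec: corev1.PodSpec{
			PriorityClassName: "example-high-priority",
			Containers: []corev1.Container{
				{Name: "pause", Image: "k8s.gcr.io/pause:3.2"},
			},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------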
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:74 • [SLOW TEST:101.918 seconds] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:428 runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":288,"completed":115,"skipped":1792,"failed":0} S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:18:16.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 30 00:18:16.584: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 30 00:18:16.600: INFO: Waiting for terminating namespaces to be deleted... 
May 30 00:18:16.603: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 30 00:18:16.608: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 30 00:18:16.608: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 30 00:18:16.608: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 30 00:18:16.608: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 30 00:18:16.608: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 30 00:18:16.608: INFO: Container kindnet-cni ready: true, restart count 2 May 30 00:18:16.608: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 30 00:18:16.608: INFO: Container kube-proxy ready: true, restart count 0 May 30 00:18:16.608: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 30 00:18:16.614: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 30 00:18:16.614: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 30 00:18:16.614: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) May 30 00:18:16.614: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 30 00:18:16.614: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 30 00:18:16.614: INFO: Container kindnet-cni ready: true, restart count 2 May 30 00:18:16.614: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 30 00:18:16.614: INFO: Container kube-proxy ready: true, restart count 0 May 30 00:18:16.614: INFO: pod4 from sched-preemption-path-6336 started at 2020-05-30 00:18:07 +0000 UTC (1 container statuses recorded) May 30 00:18:16.614: INFO: Container pod4 ready: true, restart count 0 May 30 00:18:16.614: INFO: rs-pod3-wqxzd from sched-preemption-path-6336 started at 2020-05-30 00:17:55 +0000 UTC (1 container statuses recorded) May 30 00:18:16.614: INFO: Container pod3 ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1613a643aa176d28], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.1613a643ad5b760d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:18:23.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2391" for this suite. 
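------------------------------
The FailedScheduling events above ("0/3 nodes are available: 3 node(s) didn't match node selector.") are what any pod carrying a nodeSelector that matches no node produces; the pod simply stays Pending. A sketch that reproduces the situation; the label key/value are illustrative (the test generates its own non-matching selector):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod", Namespace: "default"},
		Spec: corev1.PodSpec{
			// No node carries this label, so the scheduler emits
			// FailedScheduling events and the pod never leaves Pending.
			NodeSelector: map[string]string{"example-label": "nonempty"},
			Containers: []corev1.Container{
				{Name: "pause", Image: "k8s.gcr.io/pause:3.2"},
			},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------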
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:7.451 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":288,"completed":116,"skipped":1793,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:18:23.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 30 00:18:24.696: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 30 00:18:26.706: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726394704, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726394704, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726394704, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726394704, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 30 00:18:29.758: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:18:29.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8251" for this suite. STEP: Destroying namespace "webhook-8251-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.010 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":288,"completed":117,"skipped":1812,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:18:29.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 30 00:18:30.136: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9055 /api/v1/namespaces/watch-9055/configmaps/e2e-watch-test-resource-version 3ab566f3-46ec-443a-bd66-7156d35e6aa1 8740889 0 2020-05-30 00:18:30 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-30 00:18:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 30 00:18:30.136: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9055 /api/v1/namespaces/watch-9055/configmaps/e2e-watch-test-resource-version 3ab566f3-46ec-443a-bd66-7156d35e6aa1 8740890 0 2020-05-30 00:18:30 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-30 00:18:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:18:30.136: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9055" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":288,"completed":118,"skipped":1865,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:18:30.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-projected-7bdc STEP: Creating a pod to test atomic-volume-subpath May 30 00:18:30.396: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-7bdc" in namespace "subpath-170" to be "Succeeded or Failed" May 30 00:18:30.400: INFO: Pod "pod-subpath-test-projected-7bdc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.340253ms May 30 00:18:32.404: INFO: Pod "pod-subpath-test-projected-7bdc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007993786s May 30 00:18:34.409: INFO: Pod "pod-subpath-test-projected-7bdc": Phase="Running", Reason="", readiness=true. Elapsed: 4.013075219s May 30 00:18:36.414: INFO: Pod "pod-subpath-test-projected-7bdc": Phase="Running", Reason="", readiness=true. Elapsed: 6.017688198s May 30 00:18:38.418: INFO: Pod "pod-subpath-test-projected-7bdc": Phase="Running", Reason="", readiness=true. Elapsed: 8.022143871s May 30 00:18:40.423: INFO: Pod "pod-subpath-test-projected-7bdc": Phase="Running", Reason="", readiness=true. Elapsed: 10.026503268s May 30 00:18:42.427: INFO: Pod "pod-subpath-test-projected-7bdc": Phase="Running", Reason="", readiness=true. Elapsed: 12.031173018s May 30 00:18:44.431: INFO: Pod "pod-subpath-test-projected-7bdc": Phase="Running", Reason="", readiness=true. Elapsed: 14.034948901s May 30 00:18:46.436: INFO: Pod "pod-subpath-test-projected-7bdc": Phase="Running", Reason="", readiness=true. Elapsed: 16.039479797s May 30 00:18:48.440: INFO: Pod "pod-subpath-test-projected-7bdc": Phase="Running", Reason="", readiness=true. Elapsed: 18.043961335s May 30 00:18:50.468: INFO: Pod "pod-subpath-test-projected-7bdc": Phase="Running", Reason="", readiness=true. Elapsed: 20.072088142s May 30 00:18:52.473: INFO: Pod "pod-subpath-test-projected-7bdc": Phase="Running", Reason="", readiness=true. Elapsed: 22.076773984s May 30 00:18:54.477: INFO: Pod "pod-subpath-test-projected-7bdc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.080492634s STEP: Saw pod success May 30 00:18:54.477: INFO: Pod "pod-subpath-test-projected-7bdc" satisfied condition "Succeeded or Failed" May 30 00:18:54.479: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-projected-7bdc container test-container-subpath-projected-7bdc: STEP: delete the pod May 30 00:18:54.531: INFO: Waiting for pod pod-subpath-test-projected-7bdc to disappear May 30 00:18:54.556: INFO: Pod pod-subpath-test-projected-7bdc no longer exists STEP: Deleting pod pod-subpath-test-projected-7bdc May 30 00:18:54.556: INFO: Deleting pod "pod-subpath-test-projected-7bdc" in namespace "subpath-170" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:18:54.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-170" for this suite. • [SLOW TEST:24.407 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":288,"completed":119,"skipped":1877,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:18:54.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-7bdcfbe1-dda9-49dc-8ddf-4f6cb79b1a8f in namespace container-probe-7653 May 30 00:18:58.773: INFO: Started pod liveness-7bdcfbe1-dda9-49dc-8ddf-4f6cb79b1a8f in namespace container-probe-7653 STEP: checking the pod's current state and verifying that restartCount is present May 30 00:18:58.776: INFO: Initial restart count of pod liveness-7bdcfbe1-dda9-49dc-8ddf-4f6cb79b1a8f is 0 May 30 00:19:14.818: INFO: Restart count of pod container-probe-7653/liveness-7bdcfbe1-dda9-49dc-8ddf-4f6cb79b1a8f is now 1 (16.041192732s elapsed) May 30 00:19:34.934: INFO: Restart count of pod container-probe-7653/liveness-7bdcfbe1-dda9-49dc-8ddf-4f6cb79b1a8f is now 2 (36.157762199s elapsed) May 30 00:19:54.988: INFO: Restart count of pod container-probe-7653/liveness-7bdcfbe1-dda9-49dc-8ddf-4f6cb79b1a8f is now 3 (56.211109799s elapsed) May 30 00:20:15.059: INFO: Restart count of pod container-probe-7653/liveness-7bdcfbe1-dda9-49dc-8ddf-4f6cb79b1a8f is now 4 (1m16.282364773s elapsed) May 30 00:21:23.547: INFO: 
Restart count of pod container-probe-7653/liveness-7bdcfbe1-dda9-49dc-8ddf-4f6cb79b1a8f is now 5 (2m24.770641415s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:21:23.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7653" for this suite. • [SLOW TEST:149.029 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":288,"completed":120,"skipped":1894,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:21:23.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-bef9f56c-6254-4991-b0c6-d3e1d63558bd STEP: Creating a pod to test consume configMaps May 30 00:21:23.982: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-62774e7a-acdc-4154-b1aa-66e75adf58ef" in namespace "projected-1447" to be "Succeeded or Failed" May 30 00:21:24.142: INFO: Pod "pod-projected-configmaps-62774e7a-acdc-4154-b1aa-66e75adf58ef": Phase="Pending", Reason="", readiness=false. Elapsed: 159.481051ms May 30 00:21:26.146: INFO: Pod "pod-projected-configmaps-62774e7a-acdc-4154-b1aa-66e75adf58ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.163684559s May 30 00:21:28.151: INFO: Pod "pod-projected-configmaps-62774e7a-acdc-4154-b1aa-66e75adf58ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.168677214s STEP: Saw pod success May 30 00:21:28.151: INFO: Pod "pod-projected-configmaps-62774e7a-acdc-4154-b1aa-66e75adf58ef" satisfied condition "Succeeded or Failed" May 30 00:21:28.154: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-62774e7a-acdc-4154-b1aa-66e75adf58ef container projected-configmap-volume-test: STEP: delete the pod May 30 00:21:28.340: INFO: Waiting for pod pod-projected-configmaps-62774e7a-acdc-4154-b1aa-66e75adf58ef to disappear May 30 00:21:28.362: INFO: Pod pod-projected-configmaps-62774e7a-acdc-4154-b1aa-66e75adf58ef no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:21:28.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1447" for this suite. 
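------------------------------
The projected-configMap test above exercises the projected volume's defaultMode field, which sets the permission bits on every file the volume projects. A sketch of such a pod; the configMap name, mount path, mode, and image are illustrative, not taken from the test:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	mode := int32(0400) // every projected file gets mode -r--------

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example", Namespace: "default"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "ls -l /etc/projected && cat /etc/projected/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected",
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------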
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":121,"skipped":1908,"failed":0} SSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:21:28.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 30 00:21:28.469: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 30 00:21:33.482: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 30 00:21:33.482: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 30 00:21:33.562: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-5423 /apis/apps/v1/namespaces/deployment-5423/deployments/test-cleanup-deployment f122ba55-b5d2-4bbe-b3a1-feb7ad116d33 8741565 1 2020-05-30 00:21:33 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-05-30 00:21:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0039a34b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} May 30 00:21:33.652: INFO: New ReplicaSet "test-cleanup-deployment-6688745694" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-6688745694 deployment-5423 /apis/apps/v1/namespaces/deployment-5423/replicasets/test-cleanup-deployment-6688745694 74a82920-aa85-4990-bbdf-bbef90ccd5ce 8741573 1 2020-05-30 00:21:33 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment f122ba55-b5d2-4bbe-b3a1-feb7ad116d33 0xc0039a3977 0xc0039a3978}] [] [{kube-controller-manager Update apps/v1 2020-05-30 00:21:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f122ba55-b5d2-4bbe-b3a1-feb7ad116d33\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 6688745694,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0039a3a08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 30 00:21:33.652: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 30 00:21:33.652: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-5423 /apis/apps/v1/namespaces/deployment-5423/replicasets/test-cleanup-controller 
bdb65fe6-594b-4126-be0d-1186047815c8 8741566 1 2020-05-30 00:21:28 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment f122ba55-b5d2-4bbe-b3a1-feb7ad116d33 0xc0039a385f 0xc0039a3870}] [] [{e2e.test Update apps/v1 2020-05-30 00:21:28 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-30 00:21:33 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"f122ba55-b5d2-4bbe-b3a1-feb7ad116d33\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0039a3908 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 30 00:21:33.704: INFO: Pod "test-cleanup-controller-fvhhm" is available: &Pod{ObjectMeta:{test-cleanup-controller-fvhhm test-cleanup-controller- deployment-5423 /api/v1/namespaces/deployment-5423/pods/test-cleanup-controller-fvhhm 54c24512-8e3a-40b1-98ee-f321d330b34d 8741556 0 2020-05-30 00:21:28 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller bdb65fe6-594b-4126-be0d-1186047815c8 0xc0039a3ec7 0xc0039a3ec8}] [] [{kube-controller-manager Update v1 2020-05-30 00:21:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bdb65fe6-594b-4126-be0d-1186047815c8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-30 00:21:31 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.126\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7vn4k,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7vn4k,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7vn4k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:21:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:21:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:21:31 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:21:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.126,StartTime:2020-05-30 00:21:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-30 00:21:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4c51cb69c04ab43af7c84519f5e04d08e57fe618aa434e3a62f5f37e7a32e863,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.126,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 30 00:21:33.705: INFO: Pod "test-cleanup-deployment-6688745694-vxkm4" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-6688745694-vxkm4 test-cleanup-deployment-6688745694- deployment-5423 /api/v1/namespaces/deployment-5423/pods/test-cleanup-deployment-6688745694-vxkm4 c207146b-78a0-4a70-822c-ff31638ab924 8741572 0 2020-05-30 00:21:33 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-6688745694 74a82920-aa85-4990-bbdf-bbef90ccd5ce 0xc003682087 0xc003682088}] [] [{kube-controller-manager Update v1 2020-05-30 00:21:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74a82920-aa85-4990-bbdf-bbef90ccd5ce\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7vn4k,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7vn4k,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7vn4k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,All
owPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:21:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:21:33.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5423" for this suite. 
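------------------------------
Note on the run above: the Deployment dump shows RevisionHistoryLimit:*0, which is what drives the cleanup behavior under test; with a zero limit the controller garbage-collects superseded ReplicaSets (here test-cleanup-controller) as soon as the rollout to the new ReplicaSet completes. A minimal sketch of constructing such a Deployment with the v0.18-era k8s.io/api types this log prints; the object is only built in memory, not submitted to a cluster:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	historyLimit := int32(0) // 0 => superseded ReplicaSets are deleted immediately after rollout

	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas:             &replicas,
			RevisionHistoryLimit: &historyLimit,
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"name": "cleanup-pod"},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "cleanup-pod"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "agnhost",
						Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13",
					}},
				},
			},
		},
	}
	fmt.Println(dep.Name, "keeps", *dep.Spec.RevisionHistoryLimit, "old ReplicaSets")
}
------------------------------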
• [SLOW TEST:5.405 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":288,"completed":122,"skipped":1913,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:21:33.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-9e0eed09-4685-42bc-8b0c-12b9f928d2a3 STEP: Creating a pod to test consume configMaps May 30 00:21:33.931: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5749a48f-8fa6-424b-8ac2-beb9ca0418a4" in namespace "projected-1657" to be "Succeeded or Failed" May 30 00:21:33.967: INFO: Pod "pod-projected-configmaps-5749a48f-8fa6-424b-8ac2-beb9ca0418a4": Phase="Pending", Reason="", readiness=false. Elapsed: 36.094029ms May 30 00:21:35.981: INFO: Pod "pod-projected-configmaps-5749a48f-8fa6-424b-8ac2-beb9ca0418a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049460476s May 30 00:21:38.092: INFO: Pod "pod-projected-configmaps-5749a48f-8fa6-424b-8ac2-beb9ca0418a4": Phase="Running", Reason="", readiness=true. Elapsed: 4.160523657s May 30 00:21:40.096: INFO: Pod "pod-projected-configmaps-5749a48f-8fa6-424b-8ac2-beb9ca0418a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.165214249s STEP: Saw pod success May 30 00:21:40.096: INFO: Pod "pod-projected-configmaps-5749a48f-8fa6-424b-8ac2-beb9ca0418a4" satisfied condition "Succeeded or Failed" May 30 00:21:40.099: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-5749a48f-8fa6-424b-8ac2-beb9ca0418a4 container projected-configmap-volume-test: STEP: delete the pod May 30 00:21:40.196: INFO: Waiting for pod pod-projected-configmaps-5749a48f-8fa6-424b-8ac2-beb9ca0418a4 to disappear May 30 00:21:40.201: INFO: Pod pod-projected-configmaps-5749a48f-8fa6-424b-8ac2-beb9ca0418a4 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:21:40.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1657" for this suite. 
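------------------------------
The projected configMap test above mounts a configMap through a projected volume and remaps one key to a nested file path, then reads the file back from the pod to verify the mapping. A sketch of the pod shape being exercised, using the corev1 types from this log; the configMap name, key, and target path are illustrative, not the generated ones:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func projectedConfigMapPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
								// Remap the key to a custom path below the mount point.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "projected-configmap-volume-test",
				Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13",
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}

func main() { _ = projectedConfigMapPod() }
------------------------------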
• [SLOW TEST:6.434 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":123,"skipped":1929,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:21:40.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-03f3e67b-226d-43c6-95b9-02706797e5f0 in namespace container-probe-1436 May 30 00:21:44.331: INFO: Started pod liveness-03f3e67b-226d-43c6-95b9-02706797e5f0 in namespace container-probe-1436 STEP: checking the pod's current state and verifying that restartCount is present May 30 00:21:44.335: INFO: Initial restart count of pod liveness-03f3e67b-226d-43c6-95b9-02706797e5f0 is 0 May 30 00:22:06.412: INFO: Restart count of pod container-probe-1436/liveness-03f3e67b-226d-43c6-95b9-02706797e5f0 is now 1 (22.077851423s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:22:06.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1436" for this suite. 
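------------------------------
The probe test above starts a pod whose container answers /healthz, then waits for the kubelet to observe a failed check and restart the container (the "Restart count ... is now 1" line, roughly 22s after start). A sketch of the probed container, assuming agnhost's liveness mode (which serves /healthz and begins failing it after a delay) and the v1.18-era Probe.Handler field, renamed ProbeHandler in later releases:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func livenessPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13",
				Args:  []string{"liveness"}, // assumption: agnhost subcommand that starts failing /healthz
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // v1.18-era field; ProbeHandler in newer k8s.io/api
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/healthz",
							Port: intstr.FromInt(8080),
						},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1, // a single failed check triggers the restart
				},
			}},
		},
	}
}

func main() { _ = livenessPod() }
------------------------------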
• [SLOW TEST:26.286 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":124,"skipped":1971,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:22:06.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0530 00:22:08.074417 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 30 00:22:08.074: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:22:08.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5901" for this suite. 
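------------------------------
In the garbage-collector test above, "expected 0 rs, got 1 rs" and "expected 0 pods, got 2 pods" are intermediate polls while deletion propagates, not failures: deleting the Deployment without orphaning lets the garbage collector remove the dependent ReplicaSet and Pods shortly after. A sketch of requesting the same behavior through client-go v0.18; the kubeconfig path matches this run, while the namespace and deployment name are hypothetical:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Background is the non-orphaning case tested here: the owner goes first
	// and the garbage collector then deletes dependent ReplicaSets and Pods.
	// DeletePropagationOrphan would leave the dependents behind instead.
	policy := metav1.DeletePropagationBackground
	if err := client.AppsV1().Deployments("default").Delete(
		context.TODO(),
		"simple-deployment", // hypothetical name
		metav1.DeleteOptions{PropagationPolicy: &policy},
	); err != nil {
		panic(err)
	}
}
------------------------------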
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":288,"completed":125,"skipped":1991,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:22:08.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-24e471e7-d722-46ee-80e5-795971b5d9af STEP: Creating the pod STEP: Updating configmap configmap-test-upd-24e471e7-d722-46ee-80e5-795971b5d9af STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:23:24.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2539" for this suite. • [SLOW TEST:76.582 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":126,"skipped":2002,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:23:24.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-b5a254b5-d9a5-4027-8f5f-c1cb8f86b818 STEP: Creating a pod to test consume secrets May 30 00:23:24.788: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-50c493b7-6945-44a1-9a45-9aea0df49e6e" in namespace "projected-159" to be "Succeeded or Failed" May 30 00:23:24.809: INFO: Pod "pod-projected-secrets-50c493b7-6945-44a1-9a45-9aea0df49e6e": Phase="Pending", Reason="", readiness=false. Elapsed: 20.618007ms May 30 00:23:26.813: INFO: Pod "pod-projected-secrets-50c493b7-6945-44a1-9a45-9aea0df49e6e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.024591328s May 30 00:23:28.817: INFO: Pod "pod-projected-secrets-50c493b7-6945-44a1-9a45-9aea0df49e6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029088655s STEP: Saw pod success May 30 00:23:28.817: INFO: Pod "pod-projected-secrets-50c493b7-6945-44a1-9a45-9aea0df49e6e" satisfied condition "Succeeded or Failed" May 30 00:23:28.820: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-50c493b7-6945-44a1-9a45-9aea0df49e6e container projected-secret-volume-test: STEP: delete the pod May 30 00:23:28.872: INFO: Waiting for pod pod-projected-secrets-50c493b7-6945-44a1-9a45-9aea0df49e6e to disappear May 30 00:23:28.881: INFO: Pod pod-projected-secrets-50c493b7-6945-44a1-9a45-9aea0df49e6e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:23:28.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-159" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":127,"skipped":2028,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:23:28.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 30 00:23:29.401: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 30 00:23:32.052: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726395009, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726395009, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726395009, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726395009, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 30 00:23:35.082: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating 
webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply with the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply with the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply with the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:23:35.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6180" for this suite. STEP: Destroying namespace "webhook-6180-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.472 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":288,"completed":128,"skipped":2036,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:23:35.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD May 30 00:23:35.480: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:23:51.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3306" for this suite. 
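------------------------------
The OpenAPI-publishing test above sets up a multi-version CRD, flips one version's served flag to false (the "mark a version not served" step), and checks that only that version's definitions disappear from the published spec. A sketch of the CRD shape involved, using the apiextensions.k8s.io/v1 types; the group and kind are illustrative stand-ins for the randomly named CRD the test generates:

package main

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func multiVersionCRD() *apiextensionsv1.CustomResourceDefinition {
	schema := &apiextensionsv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
	}
	return &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural:   "foos",
				Singular: "foo",
				Kind:     "Foo",
				ListKind: "FooList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
				{Name: "v2", Served: true, Storage: true, Schema: schema},
				// Served:false is the state the test transitions one version
				// into; its definitions then drop out of the OpenAPI spec.
				{Name: "v3", Served: false, Storage: false, Schema: schema},
			},
		},
	}
}

func main() { _ = multiVersionCRD() }
------------------------------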
• [SLOW TEST:15.680 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":288,"completed":129,"skipped":2048,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:23:51.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 30 00:23:51.684: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 30 00:23:53.696: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726395031, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726395031, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726395031, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726395031, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 30 00:23:56.750: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:23:56.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9755" for this suite. 
STEP: Destroying namespace "webhook-9755-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.908 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":288,"completed":130,"skipped":2068,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:23:56.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-036f1e4d-f69f-4e87-9a4a-1fe52efa8c31 STEP: Creating a pod to test consume secrets May 30 00:23:57.048: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-46e2d0f4-7801-4bca-9082-8c4b2d16124e" in namespace "projected-5556" to be "Succeeded or Failed" May 30 00:23:57.079: INFO: Pod "pod-projected-secrets-46e2d0f4-7801-4bca-9082-8c4b2d16124e": Phase="Pending", Reason="", readiness=false. Elapsed: 31.160755ms May 30 00:23:59.090: INFO: Pod "pod-projected-secrets-46e2d0f4-7801-4bca-9082-8c4b2d16124e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042233845s May 30 00:24:01.095: INFO: Pod "pod-projected-secrets-46e2d0f4-7801-4bca-9082-8c4b2d16124e": Phase="Running", Reason="", readiness=true. Elapsed: 4.047078523s May 30 00:24:03.099: INFO: Pod "pod-projected-secrets-46e2d0f4-7801-4bca-9082-8c4b2d16124e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.051258406s STEP: Saw pod success May 30 00:24:03.100: INFO: Pod "pod-projected-secrets-46e2d0f4-7801-4bca-9082-8c4b2d16124e" satisfied condition "Succeeded or Failed" May 30 00:24:03.102: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-46e2d0f4-7801-4bca-9082-8c4b2d16124e container projected-secret-volume-test: STEP: delete the pod May 30 00:24:03.184: INFO: Waiting for pod pod-projected-secrets-46e2d0f4-7801-4bca-9082-8c4b2d16124e to disappear May 30 00:24:03.200: INFO: Pod pod-projected-secrets-46e2d0f4-7801-4bca-9082-8c4b2d16124e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:24:03.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5556" for this suite. 
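------------------------------
The projected secret test above combines three things: a projected secret volume, an explicit defaultMode on the projected files, and a pod-level security context running the container as a non-root UID with an fsGroup (which is why it is tagged [LinuxOnly]). A sketch of that combination with the corev1 types; the mode and IDs are illustrative, not the values the test asserts:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func nonRootProjectedSecretPod() *corev1.Pod {
	mode := int32(0440)    // defaultMode applied to the projected files
	uid := int64(1000)     // non-root user
	fsGroup := int64(1001) // group ownership applied to the volume on Linux
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &uid,
				FSGroup:   &fsGroup,
			},
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "projected-secret-volume-test",
				Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13",
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
					ReadOnly:  true,
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}

func main() { _ = nonRootProjectedSecretPod() }
------------------------------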
• [SLOW TEST:6.261 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":131,"skipped":2077,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:24:03.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange May 30 00:24:03.330: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values May 30 00:24:03.350: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 30 00:24:03.350: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange May 30 00:24:03.371: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 30 00:24:03.371: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange May 30 00:24:03.506: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual 
map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] May 30 00:24:03.507: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted May 30 00:24:11.481: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:24:11.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-9032" for this suite. • [SLOW TEST:8.325 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":288,"completed":132,"skipped":2155,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:24:11.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 30 00:24:11.634: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 30 00:24:11.658: INFO: Pod name sample-pod: Found 0 pods out of 1 May 30 00:24:16.795: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 30 00:24:16.795: INFO: Creating deployment "test-rolling-update-deployment" May 30 00:24:16.843: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 30 00:24:16.888: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 30 00:24:18.911: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 30 00:24:18.938: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726395057, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726395057, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726395058, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726395057, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-df7bb669b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 30 00:24:20.943: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726395057, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726395057, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726395058, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726395057, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-df7bb669b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 30 00:24:22.942: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 30 00:24:22.952: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-4311 /apis/apps/v1/namespaces/deployment-4311/deployments/test-rolling-update-deployment 54a08d18-1d59-40e9-b425-3a5295770725 8742537 1 2020-05-30 00:24:16 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-05-30 00:24:16 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-30 00:24:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0035de7d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-30 00:24:17 +0000 UTC,LastTransitionTime:2020-05-30 00:24:17 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-df7bb669b" has successfully progressed.,LastUpdateTime:2020-05-30 00:24:22 +0000 UTC,LastTransitionTime:2020-05-30 00:24:17 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 30 00:24:22.973: INFO: New ReplicaSet "test-rolling-update-deployment-df7bb669b" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-df7bb669b deployment-4311 /apis/apps/v1/namespaces/deployment-4311/replicasets/test-rolling-update-deployment-df7bb669b bbd7454d-aba7-4e33-a2ed-87cdf5d09ae9 8742525 1 2020-05-30 00:24:16 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 54a08d18-1d59-40e9-b425-3a5295770725 0xc0035df040 0xc0035df041}] [] [{kube-controller-manager Update apps/v1 2020-05-30 00:24:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"54a08d18-1d59-40e9-b425-3a5295770725\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: df7bb669b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0035df0c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 30 00:24:22.973: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 30 00:24:22.973: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-4311 /apis/apps/v1/namespaces/deployment-4311/replicasets/test-rolling-update-controller 38a57f08-cd73-442b-88e5-ae75fd7632b8 8742535 2 2020-05-30 00:24:11 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 54a08d18-1d59-40e9-b425-3a5295770725 0xc0035deea7 0xc0035deea8}] [] [{e2e.test Update apps/v1 2020-05-30 00:24:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager 
Update apps/v1 2020-05-30 00:24:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"54a08d18-1d59-40e9-b425-3a5295770725\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0035defb8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 30 00:24:22.976: INFO: Pod "test-rolling-update-deployment-df7bb669b-6bcp8" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-df7bb669b-6bcp8 test-rolling-update-deployment-df7bb669b- deployment-4311 /api/v1/namespaces/deployment-4311/pods/test-rolling-update-deployment-df7bb669b-6bcp8 412ac0df-0d24-49e4-b20d-ef55b7cd83bd 8742524 0 2020-05-30 00:24:17 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-df7bb669b bbd7454d-aba7-4e33-a2ed-87cdf5d09ae9 0xc0035df660 0xc0035df661}] [] [{kube-controller-manager Update v1 2020-05-30 00:24:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bbd7454d-aba7-4e33-a2ed-87cdf5d09ae9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-30 00:24:22 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.136\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cxfww,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cxfww,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cxfww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:24:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-30 00:24:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:24:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:24:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.136,StartTime:2020-05-30 00:24:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-30 00:24:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://2b162ddbd54d506493352d811a617894e3c988c90937ef2835810f10285bec3d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.136,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:24:22.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4311" for this suite. • [SLOW TEST:11.457 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":133,"skipped":2163,"failed":0} SSSSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:24:22.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath May 30 00:24:27.078: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-3335 PodName:var-expansion-fc40bef5-4239-4025-93af-eee95d9fed47 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 00:24:27.078: INFO: >>> kubeConfig: /root/.kube/config I0530 00:24:27.107878 7 log.go:172] (0xc00536e6e0) (0xc0016de320) Create stream I0530 00:24:27.107917 7 log.go:172] (0xc00536e6e0) (0xc0016de320) Stream added, broadcasting: 1 I0530 00:24:27.109948 7 log.go:172] (0xc00536e6e0) Reply frame received for 1 
I0530 00:24:27.109984 7 log.go:172] (0xc00536e6e0) (0xc001211d60) Create stream I0530 00:24:27.109997 7 log.go:172] (0xc00536e6e0) (0xc001211d60) Stream added, broadcasting: 3 I0530 00:24:27.110748 7 log.go:172] (0xc00536e6e0) Reply frame received for 3 I0530 00:24:27.110784 7 log.go:172] (0xc00536e6e0) (0xc0003ef220) Create stream I0530 00:24:27.110796 7 log.go:172] (0xc00536e6e0) (0xc0003ef220) Stream added, broadcasting: 5 I0530 00:24:27.111521 7 log.go:172] (0xc00536e6e0) Reply frame received for 5 I0530 00:24:27.197921 7 log.go:172] (0xc00536e6e0) Data frame received for 3 I0530 00:24:27.197950 7 log.go:172] (0xc001211d60) (3) Data frame handling I0530 00:24:27.197972 7 log.go:172] (0xc00536e6e0) Data frame received for 5 I0530 00:24:27.197978 7 log.go:172] (0xc0003ef220) (5) Data frame handling I0530 00:24:27.199406 7 log.go:172] (0xc00536e6e0) Data frame received for 1 I0530 00:24:27.199428 7 log.go:172] (0xc0016de320) (1) Data frame handling I0530 00:24:27.199439 7 log.go:172] (0xc0016de320) (1) Data frame sent I0530 00:24:27.199449 7 log.go:172] (0xc00536e6e0) (0xc0016de320) Stream removed, broadcasting: 1 I0530 00:24:27.199507 7 log.go:172] (0xc00536e6e0) Go away received I0530 00:24:27.199558 7 log.go:172] (0xc00536e6e0) (0xc0016de320) Stream removed, broadcasting: 1 I0530 00:24:27.199584 7 log.go:172] (0xc00536e6e0) (0xc001211d60) Stream removed, broadcasting: 3 I0530 00:24:27.199602 7 log.go:172] (0xc00536e6e0) (0xc0003ef220) Stream removed, broadcasting: 5 STEP: test for file in mounted path May 30 00:24:27.208: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-3335 PodName:var-expansion-fc40bef5-4239-4025-93af-eee95d9fed47 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 00:24:27.208: INFO: >>> kubeConfig: /root/.kube/config I0530 00:24:27.236440 7 log.go:172] (0xc00411aa50) (0xc000d00820) Create stream I0530 00:24:27.236468 7 log.go:172] (0xc00411aa50) (0xc000d00820) Stream added, broadcasting: 1 I0530 00:24:27.238835 7 log.go:172] (0xc00411aa50) Reply frame received for 1 I0530 00:24:27.238886 7 log.go:172] (0xc00411aa50) (0xc0016de460) Create stream I0530 00:24:27.238900 7 log.go:172] (0xc00411aa50) (0xc0016de460) Stream added, broadcasting: 3 I0530 00:24:27.239837 7 log.go:172] (0xc00411aa50) Reply frame received for 3 I0530 00:24:27.239874 7 log.go:172] (0xc00411aa50) (0xc0016de5a0) Create stream I0530 00:24:27.239884 7 log.go:172] (0xc00411aa50) (0xc0016de5a0) Stream added, broadcasting: 5 I0530 00:24:27.240798 7 log.go:172] (0xc00411aa50) Reply frame received for 5 I0530 00:24:27.291865 7 log.go:172] (0xc00411aa50) Data frame received for 3 I0530 00:24:27.291894 7 log.go:172] (0xc0016de460) (3) Data frame handling I0530 00:24:27.291934 7 log.go:172] (0xc00411aa50) Data frame received for 5 I0530 00:24:27.291965 7 log.go:172] (0xc0016de5a0) (5) Data frame handling I0530 00:24:27.293866 7 log.go:172] (0xc00411aa50) Data frame received for 1 I0530 00:24:27.293885 7 log.go:172] (0xc000d00820) (1) Data frame handling I0530 00:24:27.293896 7 log.go:172] (0xc000d00820) (1) Data frame sent I0530 00:24:27.293981 7 log.go:172] (0xc00411aa50) (0xc000d00820) Stream removed, broadcasting: 1 I0530 00:24:27.294039 7 log.go:172] (0xc00411aa50) Go away received I0530 00:24:27.294120 7 log.go:172] (0xc00411aa50) (0xc000d00820) Stream removed, broadcasting: 1 I0530 00:24:27.294138 7 log.go:172] (0xc00411aa50) (0xc0016de460) Stream removed, broadcasting: 3 I0530 00:24:27.294146 7 
log.go:172] (0xc00411aa50) (0xc0016de5a0) Stream removed, broadcasting: 5 STEP: updating the annotation value May 30 00:24:27.805: INFO: Successfully updated pod "var-expansion-fc40bef5-4239-4025-93af-eee95d9fed47" STEP: waiting for annotated pod running STEP: deleting the pod gracefully May 30 00:24:27.836: INFO: Deleting pod "var-expansion-fc40bef5-4239-4025-93af-eee95d9fed47" in namespace "var-expansion-3335" May 30 00:24:27.842: INFO: Wait up to 5m0s for pod "var-expansion-fc40bef5-4239-4025-93af-eee95d9fed47" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:25:05.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3335" for this suite. • [SLOW TEST:42.878 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":288,"completed":134,"skipped":2169,"failed":0} SSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:25:05.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-4de15518-4bb7-46c7-be16-ee61b3a6d424 in namespace container-probe-7053 May 30 00:25:09.998: INFO: Started pod busybox-4de15518-4bb7-46c7-be16-ee61b3a6d424 in namespace container-probe-7053 STEP: checking the pod's current state and verifying that restartCount is present May 30 00:25:10.001: INFO: Initial restart count of pod busybox-4de15518-4bb7-46c7-be16-ee61b3a6d424 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:29:10.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7053" for this suite. 
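
The probe test above passes because the exec action (`cat /tmp/health`) keeps exiting 0 for the pod's lifetime, so the kubelet never increments restartCount. A minimal client-go construction of that kind of pod is sketched below; the image, command, and probe timings are illustrative assumptions rather than values from this run, and note that the embedded probe field is named Handler in the 1.18/1.19 API vintage this log was produced with (renamed ProbeHandler in later releases, as used here).

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Sketch: image, command, and timings are illustrative. The container
	// creates /tmp/health and keeps it around, so the probe always succeeds.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-exec-probe"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.29",
				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					// Named Handler (not ProbeHandler) in the 1.18/1.19 tree.
					ProbeHandler: corev1.ProbeHandler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b)) // restartCount should stay 0 while /tmp/health exists
}
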
• [SLOW TEST:245.022 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":135,"skipped":2172,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:29:10.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-78200a21-7832-4e35-af85-73145a52aa21 STEP: Creating a pod to test consume configMaps May 30 00:29:10.979: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-754942f6-4d2e-4188-8a60-dfcc71ed0352" in namespace "projected-9052" to be "Succeeded or Failed" May 30 00:29:10.999: INFO: Pod "pod-projected-configmaps-754942f6-4d2e-4188-8a60-dfcc71ed0352": Phase="Pending", Reason="", readiness=false. Elapsed: 19.190747ms May 30 00:29:13.003: INFO: Pod "pod-projected-configmaps-754942f6-4d2e-4188-8a60-dfcc71ed0352": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023465184s May 30 00:29:15.007: INFO: Pod "pod-projected-configmaps-754942f6-4d2e-4188-8a60-dfcc71ed0352": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027785355s STEP: Saw pod success May 30 00:29:15.007: INFO: Pod "pod-projected-configmaps-754942f6-4d2e-4188-8a60-dfcc71ed0352" satisfied condition "Succeeded or Failed" May 30 00:29:15.010: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-754942f6-4d2e-4188-8a60-dfcc71ed0352 container projected-configmap-volume-test: STEP: delete the pod May 30 00:29:15.071: INFO: Waiting for pod pod-projected-configmaps-754942f6-4d2e-4188-8a60-dfcc71ed0352 to disappear May 30 00:29:15.187: INFO: Pod pod-projected-configmaps-754942f6-4d2e-4188-8a60-dfcc71ed0352 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:29:15.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9052" for this suite. 
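
The projected-ConfigMap test that just finished combines two pieces: a `projected` volume whose source is a ConfigMap, and a pod-level security context forcing a non-root UID. A sketch of that pod shape follows; the UID, image, mount path, and ConfigMap name are illustrative assumptions, not values from this log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000) // any non-root UID; illustrative
	nonRoot := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid, RunAsNonRoot: &nonRoot},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-configmap-volume-test",
				Image:        "busybox:1.29",
				Command:      []string{"/bin/sh", "-c", "cat /etc/projected-configmap-volume/*"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/projected-configmap-volume"}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b)) // the pod reads the projected keys as the non-root user
}
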
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":136,"skipped":2179,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:29:15.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0530 00:29:16.304656 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 30 00:29:16.304: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:29:16.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9454" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":288,"completed":137,"skipped":2188,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:29:16.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 30 00:29:26.696: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 30 00:29:26.705: INFO: Pod pod-with-prestop-http-hook still exists May 30 00:29:28.705: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 30 00:29:28.710: INFO: Pod pod-with-prestop-http-hook still exists May 30 00:29:30.706: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 30 00:29:30.710: INFO: Pod pod-with-prestop-http-hook still exists May 30 00:29:32.706: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 30 00:29:32.711: INFO: Pod pod-with-prestop-http-hook still exists May 30 00:29:34.706: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 30 00:29:34.710: INFO: Pod pod-with-prestop-http-hook still exists May 30 00:29:36.706: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 30 00:29:36.710: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:29:36.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7874" for this suite. 
• [SLOW TEST:20.414 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":288,"completed":138,"skipped":2206,"failed":0} S ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:29:36.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-4595 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-4595 I0530 00:29:36.931220 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-4595, replica count: 2 I0530 00:29:39.981698 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0530 00:29:42.981973 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 30 00:29:42.982: INFO: Creating new exec pod May 30 00:29:48.000: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4595 execpodvqhm5 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 30 00:29:51.189: INFO: stderr: "I0530 00:29:51.074720 1485 log.go:172] (0xc000d16840) (0xc000844e60) Create stream\nI0530 00:29:51.074763 1485 log.go:172] (0xc000d16840) (0xc000844e60) Stream added, broadcasting: 1\nI0530 00:29:51.077961 1485 log.go:172] (0xc000d16840) Reply frame received for 1\nI0530 00:29:51.078000 1485 log.go:172] (0xc000d16840) (0xc000871540) Create stream\nI0530 00:29:51.078015 1485 log.go:172] (0xc000d16840) (0xc000871540) Stream added, broadcasting: 3\nI0530 00:29:51.079022 1485 log.go:172] (0xc000d16840) Reply frame received for 3\nI0530 00:29:51.079056 1485 log.go:172] (0xc000d16840) (0xc0008715e0) Create stream\nI0530 00:29:51.079069 1485 log.go:172] (0xc000d16840) (0xc0008715e0) Stream added, broadcasting: 5\nI0530 00:29:51.079960 1485 log.go:172] (0xc000d16840) Reply frame received for 5\nI0530 00:29:51.155769 1485 log.go:172] (0xc000d16840) Data frame 
received for 5\nI0530 00:29:51.155800 1485 log.go:172] (0xc0008715e0) (5) Data frame handling\nI0530 00:29:51.155817 1485 log.go:172] (0xc0008715e0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0530 00:29:51.178534 1485 log.go:172] (0xc000d16840) Data frame received for 5\nI0530 00:29:51.178582 1485 log.go:172] (0xc0008715e0) (5) Data frame handling\nI0530 00:29:51.178751 1485 log.go:172] (0xc0008715e0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0530 00:29:51.179532 1485 log.go:172] (0xc000d16840) Data frame received for 3\nI0530 00:29:51.179567 1485 log.go:172] (0xc000871540) (3) Data frame handling\nI0530 00:29:51.179619 1485 log.go:172] (0xc000d16840) Data frame received for 5\nI0530 00:29:51.179719 1485 log.go:172] (0xc0008715e0) (5) Data frame handling\nI0530 00:29:51.182364 1485 log.go:172] (0xc000d16840) Data frame received for 1\nI0530 00:29:51.182401 1485 log.go:172] (0xc000844e60) (1) Data frame handling\nI0530 00:29:51.182424 1485 log.go:172] (0xc000844e60) (1) Data frame sent\nI0530 00:29:51.182450 1485 log.go:172] (0xc000d16840) (0xc000844e60) Stream removed, broadcasting: 1\nI0530 00:29:51.182503 1485 log.go:172] (0xc000d16840) Go away received\nI0530 00:29:51.183035 1485 log.go:172] (0xc000d16840) (0xc000844e60) Stream removed, broadcasting: 1\nI0530 00:29:51.183081 1485 log.go:172] (0xc000d16840) (0xc000871540) Stream removed, broadcasting: 3\nI0530 00:29:51.183107 1485 log.go:172] (0xc000d16840) (0xc0008715e0) Stream removed, broadcasting: 5\n" May 30 00:29:51.189: INFO: stdout: "" May 30 00:29:51.190: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4595 execpodvqhm5 -- /bin/sh -x -c nc -zv -t -w 2 10.96.69.47 80' May 30 00:29:51.420: INFO: stderr: "I0530 00:29:51.345704 1520 log.go:172] (0xc0009cc000) (0xc00067a5a0) Create stream\nI0530 00:29:51.345763 1520 log.go:172] (0xc0009cc000) (0xc00067a5a0) Stream added, broadcasting: 1\nI0530 00:29:51.347920 1520 log.go:172] (0xc0009cc000) Reply frame received for 1\nI0530 00:29:51.347957 1520 log.go:172] (0xc0009cc000) (0xc000550280) Create stream\nI0530 00:29:51.347968 1520 log.go:172] (0xc0009cc000) (0xc000550280) Stream added, broadcasting: 3\nI0530 00:29:51.349004 1520 log.go:172] (0xc0009cc000) Reply frame received for 3\nI0530 00:29:51.349030 1520 log.go:172] (0xc0009cc000) (0xc00067ae60) Create stream\nI0530 00:29:51.349037 1520 log.go:172] (0xc0009cc000) (0xc00067ae60) Stream added, broadcasting: 5\nI0530 00:29:51.349954 1520 log.go:172] (0xc0009cc000) Reply frame received for 5\nI0530 00:29:51.412647 1520 log.go:172] (0xc0009cc000) Data frame received for 5\nI0530 00:29:51.412684 1520 log.go:172] (0xc00067ae60) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.69.47 80\nConnection to 10.96.69.47 80 port [tcp/http] succeeded!\nI0530 00:29:51.412704 1520 log.go:172] (0xc0009cc000) Data frame received for 3\nI0530 00:29:51.412727 1520 log.go:172] (0xc000550280) (3) Data frame handling\nI0530 00:29:51.412743 1520 log.go:172] (0xc00067ae60) (5) Data frame sent\nI0530 00:29:51.412756 1520 log.go:172] (0xc0009cc000) Data frame received for 5\nI0530 00:29:51.412770 1520 log.go:172] (0xc00067ae60) (5) Data frame handling\nI0530 00:29:51.414410 1520 log.go:172] (0xc0009cc000) Data frame received for 1\nI0530 00:29:51.414444 1520 log.go:172] (0xc00067a5a0) (1) Data frame handling\nI0530 00:29:51.414489 1520 log.go:172] (0xc00067a5a0) (1) Data frame sent\nI0530 00:29:51.414543 1520 
log.go:172] (0xc0009cc000) (0xc00067a5a0) Stream removed, broadcasting: 1\nI0530 00:29:51.414579 1520 log.go:172] (0xc0009cc000) Go away received\nI0530 00:29:51.414951 1520 log.go:172] (0xc0009cc000) (0xc00067a5a0) Stream removed, broadcasting: 1\nI0530 00:29:51.414972 1520 log.go:172] (0xc0009cc000) (0xc000550280) Stream removed, broadcasting: 3\nI0530 00:29:51.414984 1520 log.go:172] (0xc0009cc000) (0xc00067ae60) Stream removed, broadcasting: 5\n" May 30 00:29:51.420: INFO: stdout: "" May 30 00:29:51.420: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4595 execpodvqhm5 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30678' May 30 00:29:51.624: INFO: stderr: "I0530 00:29:51.544805 1541 log.go:172] (0xc000616000) (0xc0003ce780) Create stream\nI0530 00:29:51.544874 1541 log.go:172] (0xc000616000) (0xc0003ce780) Stream added, broadcasting: 1\nI0530 00:29:51.547416 1541 log.go:172] (0xc000616000) Reply frame received for 1\nI0530 00:29:51.547452 1541 log.go:172] (0xc000616000) (0xc00035e6e0) Create stream\nI0530 00:29:51.547462 1541 log.go:172] (0xc000616000) (0xc00035e6e0) Stream added, broadcasting: 3\nI0530 00:29:51.548383 1541 log.go:172] (0xc000616000) Reply frame received for 3\nI0530 00:29:51.548422 1541 log.go:172] (0xc000616000) (0xc0002f32c0) Create stream\nI0530 00:29:51.548437 1541 log.go:172] (0xc000616000) (0xc0002f32c0) Stream added, broadcasting: 5\nI0530 00:29:51.549596 1541 log.go:172] (0xc000616000) Reply frame received for 5\nI0530 00:29:51.616311 1541 log.go:172] (0xc000616000) Data frame received for 5\nI0530 00:29:51.616368 1541 log.go:172] (0xc0002f32c0) (5) Data frame handling\nI0530 00:29:51.616402 1541 log.go:172] (0xc0002f32c0) (5) Data frame sent\nI0530 00:29:51.616428 1541 log.go:172] (0xc000616000) Data frame received for 5\nI0530 00:29:51.616445 1541 log.go:172] (0xc0002f32c0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30678\nConnection to 172.17.0.13 30678 port [tcp/30678] succeeded!\nI0530 00:29:51.616668 1541 log.go:172] (0xc000616000) Data frame received for 3\nI0530 00:29:51.616696 1541 log.go:172] (0xc00035e6e0) (3) Data frame handling\nI0530 00:29:51.618517 1541 log.go:172] (0xc000616000) Data frame received for 1\nI0530 00:29:51.618543 1541 log.go:172] (0xc0003ce780) (1) Data frame handling\nI0530 00:29:51.618574 1541 log.go:172] (0xc0003ce780) (1) Data frame sent\nI0530 00:29:51.618793 1541 log.go:172] (0xc000616000) (0xc0003ce780) Stream removed, broadcasting: 1\nI0530 00:29:51.618957 1541 log.go:172] (0xc000616000) Go away received\nI0530 00:29:51.619328 1541 log.go:172] (0xc000616000) (0xc0003ce780) Stream removed, broadcasting: 1\nI0530 00:29:51.619355 1541 log.go:172] (0xc000616000) (0xc00035e6e0) Stream removed, broadcasting: 3\nI0530 00:29:51.619378 1541 log.go:172] (0xc000616000) (0xc0002f32c0) Stream removed, broadcasting: 5\n" May 30 00:29:51.624: INFO: stdout: "" May 30 00:29:51.624: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4595 execpodvqhm5 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30678' May 30 00:29:51.843: INFO: stderr: "I0530 00:29:51.769616 1561 log.go:172] (0xc000a8d080) (0xc000706e60) Create stream\nI0530 00:29:51.769662 1561 log.go:172] (0xc000a8d080) (0xc000706e60) Stream added, broadcasting: 1\nI0530 00:29:51.774890 1561 log.go:172] (0xc000a8d080) Reply frame received for 1\nI0530 00:29:51.774929 1561 log.go:172] (0xc000a8d080) (0xc0006e5b80) 
Create stream\nI0530 00:29:51.774940 1561 log.go:172] (0xc000a8d080) (0xc0006e5b80) Stream added, broadcasting: 3\nI0530 00:29:51.776065 1561 log.go:172] (0xc000a8d080) Reply frame received for 3\nI0530 00:29:51.776137 1561 log.go:172] (0xc000a8d080) (0xc0006b0460) Create stream\nI0530 00:29:51.776163 1561 log.go:172] (0xc000a8d080) (0xc0006b0460) Stream added, broadcasting: 5\nI0530 00:29:51.777452 1561 log.go:172] (0xc000a8d080) Reply frame received for 5\nI0530 00:29:51.837065 1561 log.go:172] (0xc000a8d080) Data frame received for 5\nI0530 00:29:51.837095 1561 log.go:172] (0xc0006b0460) (5) Data frame handling\nI0530 00:29:51.837103 1561 log.go:172] (0xc0006b0460) (5) Data frame sent\nI0530 00:29:51.837108 1561 log.go:172] (0xc000a8d080) Data frame received for 5\nI0530 00:29:51.837248 1561 log.go:172] (0xc0006b0460) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30678\nConnection to 172.17.0.12 30678 port [tcp/30678] succeeded!\nI0530 00:29:51.837265 1561 log.go:172] (0xc000a8d080) Data frame received for 3\nI0530 00:29:51.837270 1561 log.go:172] (0xc0006e5b80) (3) Data frame handling\nI0530 00:29:51.838795 1561 log.go:172] (0xc000a8d080) Data frame received for 1\nI0530 00:29:51.838818 1561 log.go:172] (0xc000706e60) (1) Data frame handling\nI0530 00:29:51.838829 1561 log.go:172] (0xc000706e60) (1) Data frame sent\nI0530 00:29:51.838841 1561 log.go:172] (0xc000a8d080) (0xc000706e60) Stream removed, broadcasting: 1\nI0530 00:29:51.838894 1561 log.go:172] (0xc000a8d080) Go away received\nI0530 00:29:51.839064 1561 log.go:172] (0xc000a8d080) (0xc000706e60) Stream removed, broadcasting: 1\nI0530 00:29:51.839079 1561 log.go:172] (0xc000a8d080) (0xc0006e5b80) Stream removed, broadcasting: 3\nI0530 00:29:51.839086 1561 log.go:172] (0xc000a8d080) (0xc0006b0460) Stream removed, broadcasting: 5\n" May 30 00:29:51.843: INFO: stdout: "" May 30 00:29:51.843: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:29:51.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4595" for this suite. 
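
The type flip verified above (and then probed with the `nc -zv` checks against the cluster IP and both nodes' allocated port) is a plain service update. A client-go sketch follows; the kubeconfig path, namespace, selector, and port are illustrative. ExternalName must be cleared when leaving type=ExternalName, and a selector/ports are needed so endpoints exist for kube-proxy to program.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	svc, err := cs.CoreV1().Services("default").Get(ctx, "externalname-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	svc.Spec.Type = corev1.ServiceTypeNodePort
	svc.Spec.ExternalName = "" // must be empty once the type is no longer ExternalName
	svc.Spec.Selector = map[string]string{"name": "externalname-service"} // illustrative
	svc.Spec.Ports = []corev1.ServicePort{{Name: "http", Port: 80}}

	updated, err := cs.CoreV1().Services("default").Update(ctx, svc, metav1.UpdateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("allocated node port:", updated.Spec.Ports[0].NodePort)
}
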
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:15.262 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":288,"completed":139,"skipped":2207,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:29:51.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC May 30 00:29:52.086: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6101' May 30 00:29:54.445: INFO: stderr: "" May 30 00:29:54.445: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 30 00:29:55.451: INFO: Selector matched 1 pods for map[app:agnhost] May 30 00:29:55.451: INFO: Found 0 / 1 May 30 00:29:56.449: INFO: Selector matched 1 pods for map[app:agnhost] May 30 00:29:56.449: INFO: Found 0 / 1 May 30 00:29:57.516: INFO: Selector matched 1 pods for map[app:agnhost] May 30 00:29:57.516: INFO: Found 0 / 1 May 30 00:29:58.451: INFO: Selector matched 1 pods for map[app:agnhost] May 30 00:29:58.451: INFO: Found 1 / 1 May 30 00:29:58.451: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 30 00:29:58.454: INFO: Selector matched 1 pods for map[app:agnhost] May 30 00:29:58.454: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 30 00:29:58.454: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config patch pod agnhost-master-sqb7b --namespace=kubectl-6101 -p {"metadata":{"annotations":{"x":"y"}}}' May 30 00:29:58.551: INFO: stderr: "" May 30 00:29:58.551: INFO: stdout: "pod/agnhost-master-sqb7b patched\n" STEP: checking annotations May 30 00:29:58.582: INFO: Selector matched 1 pods for map[app:agnhost] May 30 00:29:58.582: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:29:58.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6101" for this suite. 
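
The `kubectl patch` invocation logged above is a strategic merge patch; the same annotation can be applied programmatically with client-go, as sketched below. The pod name and namespace are the ones from this particular run and will differ on yours.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same payload the test passed to `kubectl patch`.
	patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
	pod, err := cs.CoreV1().Pods("kubectl-6101").Patch(context.TODO(), "agnhost-master-sqb7b",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("annotations after patch:", pod.Annotations)
}
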
• [SLOW TEST:6.600 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1468 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":288,"completed":140,"skipped":2212,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:29:58.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 30 00:29:58.670: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 30 00:29:58.681: INFO: Waiting for terminating namespaces to be deleted... May 30 00:29:58.683: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 30 00:29:58.688: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 30 00:29:58.688: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 30 00:29:58.688: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 30 00:29:58.688: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 30 00:29:58.688: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 30 00:29:58.688: INFO: Container kindnet-cni ready: true, restart count 2 May 30 00:29:58.688: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 30 00:29:58.688: INFO: Container kube-proxy ready: true, restart count 0 May 30 00:29:58.688: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 30 00:29:58.693: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 30 00:29:58.693: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 30 00:29:58.693: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) May 30 00:29:58.693: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 30 00:29:58.693: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 30 00:29:58.693: INFO: Container kindnet-cni ready: true, restart count 2 May 30 00:29:58.693: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 30 00:29:58.693: INFO: 
Container kube-proxy ready: true, restart count 0 May 30 00:29:58.693: INFO: agnhost-master-sqb7b from kubectl-6101 started at 2020-05-30 00:29:54 +0000 UTC (1 container statuses recorded) May 30 00:29:58.693: INFO: Container agnhost-master ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-f0a75062-54a2-45d5-9019-a5262f5d5ffb 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-f0a75062-54a2-45d5-9019-a5262f5d5ffb off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-f0a75062-54a2-45d5-9019-a5262f5d5ffb [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:30:07.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5251" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.986 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":288,"completed":141,"skipped":2216,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:30:07.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-e9c97a82-1cc2-4839-b46b-6a5e87dedcf8 STEP: Creating a pod to test consume configMaps May 30 00:30:07.720: INFO: Waiting up to 5m0s for pod "pod-configmaps-7e313df9-4bb1-414c-8c67-c53c40d43d38" in namespace "configmap-3109" to be "Succeeded or Failed" May 30 00:30:07.756: INFO: Pod "pod-configmaps-7e313df9-4bb1-414c-8c67-c53c40d43d38": Phase="Pending", Reason="", readiness=false. Elapsed: 35.319992ms May 30 00:30:09.774: INFO: Pod "pod-configmaps-7e313df9-4bb1-414c-8c67-c53c40d43d38": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.053645176s May 30 00:30:11.778: INFO: Pod "pod-configmaps-7e313df9-4bb1-414c-8c67-c53c40d43d38": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057770705s STEP: Saw pod success May 30 00:30:11.778: INFO: Pod "pod-configmaps-7e313df9-4bb1-414c-8c67-c53c40d43d38" satisfied condition "Succeeded or Failed" May 30 00:30:11.781: INFO: Trying to get logs from node latest-worker pod pod-configmaps-7e313df9-4bb1-414c-8c67-c53c40d43d38 container configmap-volume-test: STEP: delete the pod May 30 00:30:11.819: INFO: Waiting for pod pod-configmaps-7e313df9-4bb1-414c-8c67-c53c40d43d38 to disappear May 30 00:30:11.835: INFO: Pod pod-configmaps-7e313df9-4bb1-414c-8c67-c53c40d43d38 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:30:11.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3109" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":142,"skipped":2246,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:30:11.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 30 00:30:11.914: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:30:19.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4125" for this suite. 
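
The init-container test above starts a RestartAlways pod whose init containers must each run to completion, in order, before the app container starts. A sketch of that pod shape follows; the images and commands are illustrative, consistent with the test's pattern but not copied from this log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			// Run sequentially; each must exit 0 before the next starts.
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox:1.29", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.2"},
			},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b)) // Initialized flips to True only after both inits succeed
}
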
• [SLOW TEST:8.210 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":288,"completed":143,"skipped":2256,"failed":0} SSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:30:20.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 30 00:30:26.628: INFO: Successfully updated pod "adopt-release-bgkf7" STEP: Checking that the Job readopts the Pod May 30 00:30:26.628: INFO: Waiting up to 15m0s for pod "adopt-release-bgkf7" in namespace "job-3314" to be "adopted" May 30 00:30:26.651: INFO: Pod "adopt-release-bgkf7": Phase="Running", Reason="", readiness=true. Elapsed: 23.508083ms May 30 00:30:28.655: INFO: Pod "adopt-release-bgkf7": Phase="Running", Reason="", readiness=true. Elapsed: 2.02696168s May 30 00:30:28.655: INFO: Pod "adopt-release-bgkf7" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 30 00:30:29.166: INFO: Successfully updated pod "adopt-release-bgkf7" STEP: Checking that the Job releases the Pod May 30 00:30:29.166: INFO: Waiting up to 15m0s for pod "adopt-release-bgkf7" in namespace "job-3314" to be "released" May 30 00:30:29.186: INFO: Pod "adopt-release-bgkf7": Phase="Running", Reason="", readiness=true. Elapsed: 19.347364ms May 30 00:30:31.190: INFO: Pod "adopt-release-bgkf7": Phase="Running", Reason="", readiness=true. Elapsed: 2.023962158s May 30 00:30:31.190: INFO: Pod "adopt-release-bgkf7" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:30:31.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3314" for this suite. 
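
Adoption and release in the Job test above are driven entirely by pod metadata: clear the ownerReferences and the controller re-adopts the pod because its labels still match the Job's selector; delete the selector label and the controller releases it instead. A rough sketch of both mutations is below, using JSON merge patches; the pod name and namespace are from this run, and the label key is an assumption based on the e2e job fixture's selector.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	pods := cs.CoreV1().Pods("job-3314") // names from this run; adjust for yours

	// Orphan: clear ownerReferences; the Job controller re-adopts the pod
	// because its labels still match the Job's selector.
	if _, err := pods.Patch(ctx, "adopt-release-bgkf7", types.MergePatchType,
		[]byte(`{"metadata":{"ownerReferences":[]}}`), metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// Release: null out the selector label (key assumed); the controller then
	// removes its ownerReference from the pod instead of deleting it.
	if _, err := pods.Patch(ctx, "adopt-release-bgkf7", types.MergePatchType,
		[]byte(`{"metadata":{"labels":{"job":null}}}`), metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("patched; inspect metadata.ownerReferences to watch adopt/release")
}
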
• [SLOW TEST:11.142 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":288,"completed":144,"skipped":2259,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:30:31.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-09c04792-8200-4b6a-9f9d-1e1d84c1bf96 STEP: Creating a pod to test consume configMaps May 30 00:30:31.834: INFO: Waiting up to 5m0s for pod "pod-configmaps-93eea947-bc8c-4e1c-84ba-2a464bbcc9cb" in namespace "configmap-5848" to be "Succeeded or Failed" May 30 00:30:31.843: INFO: Pod "pod-configmaps-93eea947-bc8c-4e1c-84ba-2a464bbcc9cb": Phase="Pending", Reason="", readiness=false. Elapsed: 9.006097ms May 30 00:30:33.847: INFO: Pod "pod-configmaps-93eea947-bc8c-4e1c-84ba-2a464bbcc9cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013004805s May 30 00:30:35.851: INFO: Pod "pod-configmaps-93eea947-bc8c-4e1c-84ba-2a464bbcc9cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01707809s STEP: Saw pod success May 30 00:30:35.851: INFO: Pod "pod-configmaps-93eea947-bc8c-4e1c-84ba-2a464bbcc9cb" satisfied condition "Succeeded or Failed" May 30 00:30:35.854: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-93eea947-bc8c-4e1c-84ba-2a464bbcc9cb container configmap-volume-test: STEP: delete the pod May 30 00:30:35.902: INFO: Waiting for pod pod-configmaps-93eea947-bc8c-4e1c-84ba-2a464bbcc9cb to disappear May 30 00:30:35.919: INFO: Pod pod-configmaps-93eea947-bc8c-4e1c-84ba-2a464bbcc9cb no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:30:35.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5848" for this suite. 
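
Unlike the projected variant shown earlier, the ConfigMap volume test above uses the plain `configMap` volume source, where DefaultMode controls the permission bits of the projected files. A minimal sketch of the ConfigMap and matching volume follows; the names, key, and mode are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	mode := int32(0644) // permission bits for the projected files; illustrative
	vol := corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
				DefaultMode:          &mode,
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b)) // mount the volume; the container reads <mountPath>/data-1
}
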
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":145,"skipped":2329,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:30:35.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-b5d4bb5e-2ca3-4924-91b4-61522bfaec01 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:30:40.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9408" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":146,"skipped":2337,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:30:40.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-812a5824-657a-449b-8589-558a96f4650a STEP: Creating a pod to test consume secrets May 30 00:30:40.336: INFO: Waiting up to 5m0s for pod "pod-secrets-e9dea6da-990e-405d-ba6d-c1eea358b402" in namespace "secrets-7614" to be "Succeeded or Failed" May 30 00:30:40.339: INFO: Pod "pod-secrets-e9dea6da-990e-405d-ba6d-c1eea358b402": Phase="Pending", Reason="", readiness=false. Elapsed: 2.914426ms May 30 00:30:42.398: INFO: Pod "pod-secrets-e9dea6da-990e-405d-ba6d-c1eea358b402": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061997849s May 30 00:30:44.403: INFO: Pod "pod-secrets-e9dea6da-990e-405d-ba6d-c1eea358b402": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.067068338s STEP: Saw pod success May 30 00:30:44.403: INFO: Pod "pod-secrets-e9dea6da-990e-405d-ba6d-c1eea358b402" satisfied condition "Succeeded or Failed" May 30 00:30:44.406: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-e9dea6da-990e-405d-ba6d-c1eea358b402 container secret-env-test: STEP: delete the pod May 30 00:30:44.434: INFO: Waiting for pod pod-secrets-e9dea6da-990e-405d-ba6d-c1eea358b402 to disappear May 30 00:30:44.454: INFO: Pod pod-secrets-e9dea6da-990e-405d-ba6d-c1eea358b402 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:30:44.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7614" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":288,"completed":147,"skipped":2347,"failed":0} ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:30:44.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 30 00:30:44.555: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:30:55.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9510" for this suite. 
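
The "submitted and removed" test above sets up its watch before creating the pod, which is why every lifecycle event (creation, graceful deletion) is observed without a gap. A client-go sketch of that watch loop follows; the namespace and label selector are illustrative.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Start watching before creating/deleting the pod so no event is missed.
	w, err := cs.CoreV1().Pods("default").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "name=foo", // illustrative; match the pod's unique label
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Println("observed:", ev.Type) // ADDED on submit, DELETED on removal
		if ev.Type == watch.Deleted {
			return
		}
	}
}
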
• [SLOW TEST:10.907 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":288,"completed":148,"skipped":2347,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:30:55.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium May 30 00:30:55.479: INFO: Waiting up to 5m0s for pod "pod-06bffff8-9aec-4048-9a4b-12624c513401" in namespace "emptydir-8447" to be "Succeeded or Failed" May 30 00:30:55.504: INFO: Pod "pod-06bffff8-9aec-4048-9a4b-12624c513401": Phase="Pending", Reason="", readiness=false. Elapsed: 24.409017ms May 30 00:30:57.507: INFO: Pod "pod-06bffff8-9aec-4048-9a4b-12624c513401": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027634916s May 30 00:30:59.510: INFO: Pod "pod-06bffff8-9aec-4048-9a4b-12624c513401": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031284934s STEP: Saw pod success May 30 00:30:59.511: INFO: Pod "pod-06bffff8-9aec-4048-9a4b-12624c513401" satisfied condition "Succeeded or Failed" May 30 00:30:59.514: INFO: Trying to get logs from node latest-worker2 pod pod-06bffff8-9aec-4048-9a4b-12624c513401 container test-container: STEP: delete the pod May 30 00:30:59.650: INFO: Waiting for pod pod-06bffff8-9aec-4048-9a4b-12624c513401 to disappear May 30 00:30:59.654: INFO: Pod pod-06bffff8-9aec-4048-9a4b-12624c513401 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:30:59.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8447" for this suite. 
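
The emptyDir case above, "(root,0777,default)", means: write as root, expect 0777 file permissions, on the default (node-disk) medium. A sketch of such a pod follows; the image and shell command are illustrative stand-ins for the e2e mount-test container.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-mode"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// Empty EmptyDirVolumeSource selects the default medium (node disk).
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox:1.29",
				// Write a 0777 file as root, then print its mode for verification.
				Command:      []string{"/bin/sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b)) // expect -rwxrwxrwx in the container log
}
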
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":149,"skipped":2352,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:30:59.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-7301 May 30 00:31:03.782: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7301 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 30 00:31:04.126: INFO: stderr: "I0530 00:31:03.913321 1627 log.go:172] (0xc000b5b6b0) (0xc000c72280) Create stream\nI0530 00:31:03.913389 1627 log.go:172] (0xc000b5b6b0) (0xc000c72280) Stream added, broadcasting: 1\nI0530 00:31:03.917636 1627 log.go:172] (0xc000b5b6b0) Reply frame received for 1\nI0530 00:31:03.917679 1627 log.go:172] (0xc000b5b6b0) (0xc000736aa0) Create stream\nI0530 00:31:03.917692 1627 log.go:172] (0xc000b5b6b0) (0xc000736aa0) Stream added, broadcasting: 3\nI0530 00:31:03.918697 1627 log.go:172] (0xc000b5b6b0) Reply frame received for 3\nI0530 00:31:03.918731 1627 log.go:172] (0xc000b5b6b0) (0xc00071a5a0) Create stream\nI0530 00:31:03.918743 1627 log.go:172] (0xc000b5b6b0) (0xc00071a5a0) Stream added, broadcasting: 5\nI0530 00:31:03.919812 1627 log.go:172] (0xc000b5b6b0) Reply frame received for 5\nI0530 00:31:04.012874 1627 log.go:172] (0xc000b5b6b0) Data frame received for 5\nI0530 00:31:04.012905 1627 log.go:172] (0xc00071a5a0) (5) Data frame handling\nI0530 00:31:04.012927 1627 log.go:172] (0xc00071a5a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0530 00:31:04.116421 1627 log.go:172] (0xc000b5b6b0) Data frame received for 5\nI0530 00:31:04.116456 1627 log.go:172] (0xc00071a5a0) (5) Data frame handling\nI0530 00:31:04.116489 1627 log.go:172] (0xc000b5b6b0) Data frame received for 3\nI0530 00:31:04.116505 1627 log.go:172] (0xc000736aa0) (3) Data frame handling\nI0530 00:31:04.116520 1627 log.go:172] (0xc000736aa0) (3) Data frame sent\nI0530 00:31:04.116951 1627 log.go:172] (0xc000b5b6b0) Data frame received for 3\nI0530 00:31:04.117028 1627 log.go:172] (0xc000736aa0) (3) Data frame handling\nI0530 00:31:04.119542 1627 log.go:172] (0xc000b5b6b0) Data frame received for 1\nI0530 00:31:04.119665 1627 log.go:172] (0xc000c72280) (1) Data frame handling\nI0530 00:31:04.119761 1627 log.go:172] (0xc000c72280) (1) Data frame sent\nI0530 00:31:04.119805 1627 log.go:172] (0xc000b5b6b0) (0xc000c72280) Stream removed, broadcasting: 1\nI0530 00:31:04.119849 1627 log.go:172] (0xc000b5b6b0) Go away 
received\nI0530 00:31:04.120141 1627 log.go:172] (0xc000b5b6b0) (0xc000c72280) Stream removed, broadcasting: 1\nI0530 00:31:04.120158 1627 log.go:172] (0xc000b5b6b0) (0xc000736aa0) Stream removed, broadcasting: 3\nI0530 00:31:04.120167 1627 log.go:172] (0xc000b5b6b0) (0xc00071a5a0) Stream removed, broadcasting: 5\n" May 30 00:31:04.126: INFO: stdout: "iptables" May 30 00:31:04.126: INFO: proxyMode: iptables May 30 00:31:04.131: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 30 00:31:04.157: INFO: Pod kube-proxy-mode-detector still exists May 30 00:31:06.158: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 30 00:31:06.161: INFO: Pod kube-proxy-mode-detector still exists May 30 00:31:08.158: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 30 00:31:08.163: INFO: Pod kube-proxy-mode-detector still exists May 30 00:31:10.157: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 30 00:31:10.160: INFO: Pod kube-proxy-mode-detector still exists May 30 00:31:12.157: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 30 00:31:12.161: INFO: Pod kube-proxy-mode-detector still exists May 30 00:31:14.157: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 30 00:31:14.162: INFO: Pod kube-proxy-mode-detector still exists May 30 00:31:16.157: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 30 00:31:16.162: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-7301 STEP: creating replication controller affinity-nodeport-timeout in namespace services-7301 I0530 00:31:16.230413 7 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-7301, replica count: 3 I0530 00:31:19.280834 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0530 00:31:22.281067 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 30 00:31:22.315: INFO: Creating new exec pod May 30 00:31:27.337: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7301 execpod-affinitylh8gd -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' May 30 00:31:27.605: INFO: stderr: "I0530 00:31:27.487119 1647 log.go:172] (0xc000a0d1e0) (0xc000720fa0) Create stream\nI0530 00:31:27.487174 1647 log.go:172] (0xc000a0d1e0) (0xc000720fa0) Stream added, broadcasting: 1\nI0530 00:31:27.491695 1647 log.go:172] (0xc000a0d1e0) Reply frame received for 1\nI0530 00:31:27.491743 1647 log.go:172] (0xc000a0d1e0) (0xc0007195e0) Create stream\nI0530 00:31:27.491760 1647 log.go:172] (0xc000a0d1e0) (0xc0007195e0) Stream added, broadcasting: 3\nI0530 00:31:27.492681 1647 log.go:172] (0xc000a0d1e0) Reply frame received for 3\nI0530 00:31:27.492706 1647 log.go:172] (0xc000a0d1e0) (0xc000706b40) Create stream\nI0530 00:31:27.492713 1647 log.go:172] (0xc000a0d1e0) (0xc000706b40) Stream added, broadcasting: 5\nI0530 00:31:27.493834 1647 log.go:172] (0xc000a0d1e0) Reply frame received for 5\nI0530 00:31:27.583471 1647 log.go:172] (0xc000a0d1e0) Data frame received for 5\nI0530 00:31:27.583501 1647 log.go:172] (0xc000706b40) (5) Data frame handling\nI0530 00:31:27.583520 1647 log.go:172] (0xc000706b40) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-timeout 
80\nI0530 00:31:27.597280 1647 log.go:172] (0xc000a0d1e0) Data frame received for 5\nI0530 00:31:27.597319 1647 log.go:172] (0xc000706b40) (5) Data frame handling\nI0530 00:31:27.597337 1647 log.go:172] (0xc000706b40) (5) Data frame sent\nI0530 00:31:27.597352 1647 log.go:172] (0xc000a0d1e0) Data frame received for 5\nI0530 00:31:27.597364 1647 log.go:172] (0xc000706b40) (5) Data frame handling\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0530 00:31:27.597386 1647 log.go:172] (0xc000a0d1e0) Data frame received for 3\nI0530 00:31:27.597423 1647 log.go:172] (0xc0007195e0) (3) Data frame handling\nI0530 00:31:27.599505 1647 log.go:172] (0xc000a0d1e0) Data frame received for 1\nI0530 00:31:27.599527 1647 log.go:172] (0xc000720fa0) (1) Data frame handling\nI0530 00:31:27.599539 1647 log.go:172] (0xc000720fa0) (1) Data frame sent\nI0530 00:31:27.599555 1647 log.go:172] (0xc000a0d1e0) (0xc000720fa0) Stream removed, broadcasting: 1\nI0530 00:31:27.599577 1647 log.go:172] (0xc000a0d1e0) Go away received\nI0530 00:31:27.600023 1647 log.go:172] (0xc000a0d1e0) (0xc000720fa0) Stream removed, broadcasting: 1\nI0530 00:31:27.600040 1647 log.go:172] (0xc000a0d1e0) (0xc0007195e0) Stream removed, broadcasting: 3\nI0530 00:31:27.600051 1647 log.go:172] (0xc000a0d1e0) (0xc000706b40) Stream removed, broadcasting: 5\n" May 30 00:31:27.605: INFO: stdout: "" May 30 00:31:27.606: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7301 execpod-affinitylh8gd -- /bin/sh -x -c nc -zv -t -w 2 10.101.242.41 80' May 30 00:31:27.814: INFO: stderr: "I0530 00:31:27.726087 1670 log.go:172] (0xc000b2c000) (0xc0001395e0) Create stream\nI0530 00:31:27.726157 1670 log.go:172] (0xc000b2c000) (0xc0001395e0) Stream added, broadcasting: 1\nI0530 00:31:27.728148 1670 log.go:172] (0xc000b2c000) Reply frame received for 1\nI0530 00:31:27.728177 1670 log.go:172] (0xc000b2c000) (0xc000139b80) Create stream\nI0530 00:31:27.728186 1670 log.go:172] (0xc000b2c000) (0xc000139b80) Stream added, broadcasting: 3\nI0530 00:31:27.729289 1670 log.go:172] (0xc000b2c000) Reply frame received for 3\nI0530 00:31:27.729311 1670 log.go:172] (0xc000b2c000) (0xc0003ce820) Create stream\nI0530 00:31:27.729323 1670 log.go:172] (0xc000b2c000) (0xc0003ce820) Stream added, broadcasting: 5\nI0530 00:31:27.730292 1670 log.go:172] (0xc000b2c000) Reply frame received for 5\nI0530 00:31:27.805680 1670 log.go:172] (0xc000b2c000) Data frame received for 3\nI0530 00:31:27.805729 1670 log.go:172] (0xc000139b80) (3) Data frame handling\nI0530 00:31:27.805752 1670 log.go:172] (0xc000b2c000) Data frame received for 5\nI0530 00:31:27.805760 1670 log.go:172] (0xc0003ce820) (5) Data frame handling\nI0530 00:31:27.805771 1670 log.go:172] (0xc0003ce820) (5) Data frame sent\nI0530 00:31:27.805778 1670 log.go:172] (0xc000b2c000) Data frame received for 5\nI0530 00:31:27.805784 1670 log.go:172] (0xc0003ce820) (5) Data frame handling\n+ nc -zv -t -w 2 10.101.242.41 80\nConnection to 10.101.242.41 80 port [tcp/http] succeeded!\nI0530 00:31:27.807912 1670 log.go:172] (0xc000b2c000) Data frame received for 1\nI0530 00:31:27.808006 1670 log.go:172] (0xc0001395e0) (1) Data frame handling\nI0530 00:31:27.808037 1670 log.go:172] (0xc0001395e0) (1) Data frame sent\nI0530 00:31:27.808062 1670 log.go:172] (0xc000b2c000) (0xc0001395e0) Stream removed, broadcasting: 1\nI0530 00:31:27.808089 1670 log.go:172] (0xc000b2c000) Go away received\nI0530 00:31:27.808731 1670 log.go:172] 
(0xc000b2c000) (0xc0001395e0) Stream removed, broadcasting: 1\nI0530 00:31:27.808769 1670 log.go:172] (0xc000b2c000) (0xc000139b80) Stream removed, broadcasting: 3\nI0530 00:31:27.808797 1670 log.go:172] (0xc000b2c000) (0xc0003ce820) Stream removed, broadcasting: 5\n" May 30 00:31:27.814: INFO: stdout: "" May 30 00:31:27.814: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7301 execpod-affinitylh8gd -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 32444' May 30 00:31:28.001: INFO: stderr: "I0530 00:31:27.936269 1690 log.go:172] (0xc0009a69a0) (0xc000aec280) Create stream\nI0530 00:31:27.936324 1690 log.go:172] (0xc0009a69a0) (0xc000aec280) Stream added, broadcasting: 1\nI0530 00:31:27.940592 1690 log.go:172] (0xc0009a69a0) Reply frame received for 1\nI0530 00:31:27.941248 1690 log.go:172] (0xc0009a69a0) (0xc00058a5a0) Create stream\nI0530 00:31:27.941273 1690 log.go:172] (0xc0009a69a0) (0xc00058a5a0) Stream added, broadcasting: 3\nI0530 00:31:27.942229 1690 log.go:172] (0xc0009a69a0) Reply frame received for 3\nI0530 00:31:27.942276 1690 log.go:172] (0xc0009a69a0) (0xc00058b9a0) Create stream\nI0530 00:31:27.942299 1690 log.go:172] (0xc0009a69a0) (0xc00058b9a0) Stream added, broadcasting: 5\nI0530 00:31:27.943124 1690 log.go:172] (0xc0009a69a0) Reply frame received for 5\nI0530 00:31:27.992679 1690 log.go:172] (0xc0009a69a0) Data frame received for 5\nI0530 00:31:27.992714 1690 log.go:172] (0xc00058b9a0) (5) Data frame handling\nI0530 00:31:27.992729 1690 log.go:172] (0xc00058b9a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 32444\nConnection to 172.17.0.13 32444 port [tcp/32444] succeeded!\nI0530 00:31:27.992902 1690 log.go:172] (0xc0009a69a0) Data frame received for 3\nI0530 00:31:27.992934 1690 log.go:172] (0xc00058a5a0) (3) Data frame handling\nI0530 00:31:27.993053 1690 log.go:172] (0xc0009a69a0) Data frame received for 5\nI0530 00:31:27.993074 1690 log.go:172] (0xc00058b9a0) (5) Data frame handling\nI0530 00:31:27.995088 1690 log.go:172] (0xc0009a69a0) Data frame received for 1\nI0530 00:31:27.995162 1690 log.go:172] (0xc000aec280) (1) Data frame handling\nI0530 00:31:27.995201 1690 log.go:172] (0xc000aec280) (1) Data frame sent\nI0530 00:31:27.995223 1690 log.go:172] (0xc0009a69a0) (0xc000aec280) Stream removed, broadcasting: 1\nI0530 00:31:27.995236 1690 log.go:172] (0xc0009a69a0) Go away received\nI0530 00:31:27.995677 1690 log.go:172] (0xc0009a69a0) (0xc000aec280) Stream removed, broadcasting: 1\nI0530 00:31:27.995715 1690 log.go:172] (0xc0009a69a0) (0xc00058a5a0) Stream removed, broadcasting: 3\nI0530 00:31:27.995740 1690 log.go:172] (0xc0009a69a0) (0xc00058b9a0) Stream removed, broadcasting: 5\n" May 30 00:31:28.001: INFO: stdout: "" May 30 00:31:28.002: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7301 execpod-affinitylh8gd -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 32444' May 30 00:31:28.197: INFO: stderr: "I0530 00:31:28.122560 1711 log.go:172] (0xc000b3b1e0) (0xc00054edc0) Create stream\nI0530 00:31:28.122625 1711 log.go:172] (0xc000b3b1e0) (0xc00054edc0) Stream added, broadcasting: 1\nI0530 00:31:28.127613 1711 log.go:172] (0xc000b3b1e0) Reply frame received for 1\nI0530 00:31:28.127659 1711 log.go:172] (0xc000b3b1e0) (0xc000546640) Create stream\nI0530 00:31:28.127672 1711 log.go:172] (0xc000b3b1e0) (0xc000546640) Stream added, broadcasting: 3\nI0530 00:31:28.128508 1711 log.go:172] (0xc000b3b1e0) 
Reply frame received for 3\nI0530 00:31:28.128542 1711 log.go:172] (0xc000b3b1e0) (0xc000432e60) Create stream\nI0530 00:31:28.128558 1711 log.go:172] (0xc000b3b1e0) (0xc000432e60) Stream added, broadcasting: 5\nI0530 00:31:28.129759 1711 log.go:172] (0xc000b3b1e0) Reply frame received for 5\nI0530 00:31:28.191223 1711 log.go:172] (0xc000b3b1e0) Data frame received for 3\nI0530 00:31:28.191269 1711 log.go:172] (0xc000546640) (3) Data frame handling\nI0530 00:31:28.191303 1711 log.go:172] (0xc000b3b1e0) Data frame received for 5\nI0530 00:31:28.191325 1711 log.go:172] (0xc000432e60) (5) Data frame handling\nI0530 00:31:28.191346 1711 log.go:172] (0xc000432e60) (5) Data frame sent\nI0530 00:31:28.191361 1711 log.go:172] (0xc000b3b1e0) Data frame received for 5\nI0530 00:31:28.191370 1711 log.go:172] (0xc000432e60) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 32444\nConnection to 172.17.0.12 32444 port [tcp/32444] succeeded!\nI0530 00:31:28.192387 1711 log.go:172] (0xc000b3b1e0) Data frame received for 1\nI0530 00:31:28.192429 1711 log.go:172] (0xc00054edc0) (1) Data frame handling\nI0530 00:31:28.192452 1711 log.go:172] (0xc00054edc0) (1) Data frame sent\nI0530 00:31:28.192481 1711 log.go:172] (0xc000b3b1e0) (0xc00054edc0) Stream removed, broadcasting: 1\nI0530 00:31:28.192506 1711 log.go:172] (0xc000b3b1e0) Go away received\nI0530 00:31:28.192964 1711 log.go:172] (0xc000b3b1e0) (0xc00054edc0) Stream removed, broadcasting: 1\nI0530 00:31:28.192993 1711 log.go:172] (0xc000b3b1e0) (0xc000546640) Stream removed, broadcasting: 3\nI0530 00:31:28.193004 1711 log.go:172] (0xc000b3b1e0) (0xc000432e60) Stream removed, broadcasting: 5\n" May 30 00:31:28.198: INFO: stdout: "" May 30 00:31:28.198: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7301 execpod-affinitylh8gd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:32444/ ; done' May 30 00:31:28.579: INFO: stderr: "I0530 00:31:28.338768 1734 log.go:172] (0xc00003a2c0) (0xc0003d0960) Create stream\nI0530 00:31:28.338844 1734 log.go:172] (0xc00003a2c0) (0xc0003d0960) Stream added, broadcasting: 1\nI0530 00:31:28.340511 1734 log.go:172] (0xc00003a2c0) Reply frame received for 1\nI0530 00:31:28.340556 1734 log.go:172] (0xc00003a2c0) (0xc00035a1e0) Create stream\nI0530 00:31:28.340572 1734 log.go:172] (0xc00003a2c0) (0xc00035a1e0) Stream added, broadcasting: 3\nI0530 00:31:28.341769 1734 log.go:172] (0xc00003a2c0) Reply frame received for 3\nI0530 00:31:28.341800 1734 log.go:172] (0xc00003a2c0) (0xc0007520a0) Create stream\nI0530 00:31:28.341812 1734 log.go:172] (0xc00003a2c0) (0xc0007520a0) Stream added, broadcasting: 5\nI0530 00:31:28.342630 1734 log.go:172] (0xc00003a2c0) Reply frame received for 5\nI0530 00:31:28.405004 1734 log.go:172] (0xc00003a2c0) Data frame received for 5\nI0530 00:31:28.405025 1734 log.go:172] (0xc0007520a0) (5) Data frame handling\nI0530 00:31:28.405037 1734 log.go:172] (0xc0007520a0) (5) Data frame sent\n+ seq 0 15\nI0530 00:31:28.426890 1734 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0530 00:31:28.426909 1734 log.go:172] (0xc00035a1e0) (3) Data frame handling\nI0530 00:31:28.426919 1734 log.go:172] (0xc00035a1e0) (3) Data frame sent\nI0530 00:31:28.426930 1734 log.go:172] (0xc00003a2c0) Data frame received for 5\nI0530 00:31:28.426943 1734 log.go:172] (0xc0007520a0) (5) Data frame handling\nI0530 00:31:28.426956 1734 log.go:172] (0xc0007520a0) (5) Data frame sent\nI0530 
00:31:28.426964 1734 log.go:172] (0xc00003a2c0) Data frame received for 5\nI0530 00:31:28.426968 1734 log.go:172] (0xc0007520a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32444/\nI0530 00:31:28.426979 1734 log.go:172] (0xc0007520a0) (5) Data frame sent\nI0530 00:31:28.491640 1734 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0530 00:31:28.491681 1734 log.go:172] (0xc00035a1e0) (3) Data frame handling\nI0530 00:31:28.491705 1734 log.go:172] (0xc00035a1e0) (3) Data frame sent\nI0530 00:31:28.492324 1734 log.go:172] (0xc00003a2c0) Data frame received for 5\nI0530 00:31:28.492340 1734 log.go:172] (0xc0007520a0) (5) Data frame handling\nI0530 00:31:28.492349 1734 log.go:172] (0xc0007520a0) (5) Data frame sent\n+ I0530 00:31:28.492412 1734 log.go:172] (0xc00003a2c0) Data frame received for 5\nI0530 00:31:28.492450 1734 log.go:172] (0xc0007520a0) (5) Data frame handling\nI0530 00:31:28.492492 1734 log.go:172] (0xc0007520a0) (5) Data frame sent\necho\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32444/\nI0530 00:31:28.492515 1734 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0530 00:31:28.492528 1734 log.go:172] (0xc00035a1e0) (3) Data frame handling\nI0530 00:31:28.492547 1734 log.go:172] (0xc00035a1e0) (3) Data frame sent\nI0530 00:31:28.501433 1734 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0530 00:31:28.501454 1734 log.go:172] (0xc00035a1e0) (3) Data frame handling\nI0530 00:31:28.501475 1734 log.go:172] (0xc00035a1e0) (3) Data frame sent\nI0530 00:31:28.502440 1734 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0530 00:31:28.502458 1734 log.go:172] (0xc00035a1e0) (3) Data frame handling\nI0530 00:31:28.502469 1734 log.go:172] (0xc00035a1e0) (3) Data frame sent\nI0530 00:31:28.502493 1734 log.go:172] (0xc00003a2c0) Data frame received for 5\nI0530 00:31:28.502507 1734 log.go:172] (0xc0007520a0) (5) Data frame handling\nI0530 00:31:28.502516 1734 log.go:172] (0xc0007520a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32444/\nI0530 00:31:28.510309 1734 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0530 00:31:28.510331 1734 log.go:172] (0xc00035a1e0) (3) Data frame handling\nI0530 00:31:28.510351 1734 log.go:172] (0xc00035a1e0) (3) Data frame sent\nI0530 00:31:28.511303 1734 log.go:172] (0xc00003a2c0) Data frame received for 5\nI0530 00:31:28.511318 1734 log.go:172] (0xc0007520a0) (5) Data frame handling\nI0530 00:31:28.511341 1734 log.go:172] (0xc0007520a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32444/\nI0530 00:31:28.511542 1734 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0530 00:31:28.511557 1734 log.go:172] (0xc00035a1e0) (3) Data frame handling\nI0530 00:31:28.511570 1734 log.go:172] (0xc00035a1e0) (3) Data frame sent\nI0530 00:31:28.515348 1734 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0530 00:31:28.515363 1734 log.go:172] (0xc00035a1e0) (3) Data frame handling\nI0530 00:31:28.515378 1734 log.go:172] (0xc00035a1e0) (3) Data frame sent\nI0530 00:31:28.516048 1734 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0530 00:31:28.516084 1734 log.go:172] (0xc00035a1e0) (3) Data frame handling\nI0530 00:31:28.516101 1734 log.go:172] (0xc00035a1e0) (3) Data frame sent\nI0530 00:31:28.516119 1734 log.go:172] (0xc00003a2c0) Data frame received for 5\nI0530 00:31:28.516129 1734 log.go:172] (0xc0007520a0) (5) Data frame handling\nI0530 00:31:28.516145 1734 log.go:172] (0xc0007520a0) (5) Data frame 
sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32444/\nI0530 00:31:28.519646 1734 log.go:172] (0xc00003a2c0) Data frame received for 5\nI0530 00:31:28.519679 1734 log.go:172] (0xc0007520a0) (5) Data frame handling\nI0530 00:31:28.519698 1734 log.go:172] (0xc0007520a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeoutI0530 00:31:28.519722 1734 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0530 00:31:28.519738 1734 log.go:172] (0xc00035a1e0) (3) Data frame handling\nI0530 00:31:28.519762 1734 log.go:172] (0xc00035a1e0) (3) Data frame sent\nI0530 00:31:28.519776 1734 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0530 00:31:28.519793 1734 log.go:172] (0xc00035a1e0) (3) Data frame handling\nI0530 00:31:28.519808 1734 log.go:172] (0xc00003a2c0) Data frame received for 5\nI0530 00:31:28.519821 1734 log.go:172] (0xc0007520a0) (5) Data frame handling\nI0530 00:31:28.519835 1734 log.go:172] (0xc0007520a0) (5) Data frame sent\n 2 http://172.17.0.13:32444/\nI0530 00:31:28.519865 1734 log.go:172] (0xc00035a1e0) (3) Data frame sent\nI0530 00:31:28.524572 1734 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0530 00:31:28.524599 1734 log.go:172] (0xc00035a1e0) (3) Data frame handling\nI0530 00:31:28.524609 1734 log.go:172] (0xc00035a1e0) (3) Data frame sent\nI0530 00:31:28.525016 1734 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0530 00:31:28.525034 1734 log.go:172] (0xc00035a1e0) (3) Data frame handling\nI0530 00:31:28.525043 1734 log.go:172] (0xc00035a1e0) (3) Data frame sent\nI0530 00:31:28.525055 1734 log.go:172] (0xc00003a2c0) Data frame received for 5\nI0530 00:31:28.525067 1734 log.go:172] (0xc0007520a0) (5) Data frame handling\nI0530 00:31:28.525074 1734 log.go:172] (0xc0007520a0) (5) Data frame sent\nI0530 00:31:28.525083 1734 log.go:172] (0xc00003a2c0) Data frame received for 5\nI0530 00:31:28.525089 1734 log.go:172] (0xc0007520a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32444/\nI0530 00:31:28.525285 1734 log.go:172] (0xc0007520a0) (5) Data frame sent\nI0530 00:31:28.528757 1734 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0530 00:31:28.528780 1734 log.go:172] (0xc00035a1e0) (3) Data frame handling\nI0530 00:31:28.528797 1734 log.go:172] (0xc00035a1e0) (3) Data frame sent\nI0530 00:31:28.529647 1734 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0530 00:31:28.529671 1734 log.go:172] (0xc00035a1e0) (3) Data frame handling\nI0530 00:31:28.529686 1734 log.go:172] (0xc00035a1e0) (3) Data frame sent\nI0530 00:31:28.529714 1734 log.go:172] (0xc00003a2c0) Data frame received for 5\nI0530 00:31:28.529738 1734 log.go:172] (0xc0007520a0) (5) Data frame handling\nI0530 00:31:28.529752 1734 log.go:172] (0xc0007520a0) (5) Data frame sent\nI0530 00:31:28.529761 1734 log.go:172] (0xc00003a2c0) Data frame received for 5\nI0530 00:31:28.529768 1734 log.go:172] (0xc0007520a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32444/\nI0530 00:31:28.529780 1734 log.go:172] (0xc0007520a0) (5) Data frame sent\nI0530 00:31:28.532826 1734 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0530 00:31:28.532843 1734 log.go:172] (0xc00035a1e0) (3) Data frame handling\nI0530 00:31:28.532858 1734 log.go:172] (0xc00035a1e0) (3) Data frame sent\nI0530 00:31:28.533612 1734 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0530 00:31:28.533635 1734 log.go:172] (0xc00035a1e0) (3) Data frame handling\nI0530 00:31:28.533649 1734 log.go:172] (0xc00035a1e0) (3) 
Data frame sent\nI0530 00:31:28.533666 1734 log.go:172] (0xc00003a2c0) Data frame received for 5\nI0530 00:31:28.533673 1734 log.go:172] (0xc0007520a0) (5) Data frame handling\nI0530 00:31:28.533682 1734 log.go:172] (0xc0007520a0) (5) Data frame sent\nI0530 00:31:28.533688 1734 log.go:172] (0xc00003a2c0) Data frame received for 5\nI0530 00:31:28.533700 1734 log.go:172] (0xc0007520a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32444/\nI0530 00:31:28.533712 1734 log.go:172] (0xc0007520a0) (5) Data frame sent\nI0530 00:31:28.536774 1734 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0530 00:31:28.536793 1734 log.go:172] (0xc00035a1e0) (3) Data frame handling\nI0530 00:31:28.536812 1734 log.go:172] (0xc00035a1e0) (3) Data frame sent\nI0530 00:31:28.537623 1734 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0530 00:31:28.537636 1734 log.go:172] (0xc00035a1e0) (3) Data frame handling\nI0530 00:31:28.537648 1734 log.go:172] (0xc00035a1e0) (3) Data frame sent\nI0530 00:31:28.537662 1734 log.go:172] (0xc00003a2c0) Data frame received for 5\nI0530 00:31:28.537668 1734 log.go:172] (0xc0007520a0) (5) Data frame handling\nI0530 00:31:28.537674 1734 log.go:172] (0xc0007520a0) (5) Data frame sent\nI0530 00:31:28.537682 1734 log.go:172] (0xc00003a2c0) Data frame received for 5\nI0530 00:31:28.537689 1734 log.go:172] (0xc0007520a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32444/\nI0530 00:31:28.537700 1734 log.go:172] (0xc0007520a0) (5) Data frame sent\nI0530 00:31:28.541321 1734 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0530 00:31:28.541333 1734 log.go:172] (0xc00035a1e0) (3) Data frame handling\nI0530 00:31:28.541340 1734 log.go:172] (0xc00035a1e0) (3) Data frame sent\nI0530 00:31:28.541993 1734 log.go:172] (0xc00003a2c0) Data frame received for 5\nI0530 00:31:28.542007 1734 log.go:172] (0xc0007520a0) (5) Data frame handling\nI0530 00:31:28.542015 1734 log.go:172] (0xc0007520a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32444/\nI0530 00:31:28.542031 1734 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0530 00:31:28.542042 1734 log.go:172] (0xc00035a1e0) (3) Data frame handling\nI0530 00:31:28.542054 1734 log.go:172] (0xc00035a1e0) (3) Data frame sent\nI0530 00:31:28.545787 1734 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0530 00:31:28.545813 1734 log.go:172] (0xc00035a1e0) (3) Data frame handling\nI0530 00:31:28.545834 1734 log.go:172] (0xc00035a1e0) (3) Data frame sent\nI0530 00:31:28.546175 1734 log.go:172] (0xc00003a2c0) Data frame received for 5\nI0530 00:31:28.546204 1734 log.go:172] (0xc0007520a0) (5) Data frame handling\nI0530 00:31:28.546228 1734 log.go:172] (0xc0007520a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32444/\nI0530 00:31:28.546288 1734 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0530 00:31:28.546306 1734 log.go:172] (0xc00035a1e0) (3) Data frame handling\nI0530 00:31:28.546323 1734 log.go:172] (0xc00035a1e0) (3) Data frame sent\nI0530 00:31:28.551598 1734 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0530 00:31:28.551609 1734 log.go:172] (0xc00035a1e0) (3) Data frame handling\nI0530 00:31:28.551615 1734 log.go:172] (0xc00035a1e0) (3) Data frame sent\nI0530 00:31:28.552054 1734 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0530 00:31:28.552070 1734 log.go:172] (0xc00035a1e0) (3) Data frame handling\nI0530 00:31:28.552077 1734 log.go:172] 
(0xc00035a1e0) (3) Data frame sent\nI0530 00:31:28.552086 1734 log.go:172] (0xc00003a2c0) Data frame received for 5\nI0530 00:31:28.552090 1734 log.go:172] (0xc0007520a0) (5) Data frame handling\nI0530 00:31:28.552095 1734 log.go:172] (0xc0007520a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32444/\nI0530 00:31:28.555891 1734 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0530 00:31:28.555902 1734 log.go:172] (0xc00035a1e0) (3) Data frame handling\nI0530 00:31:28.555907 1734 log.go:172] (0xc00035a1e0) (3) Data frame sent\nI0530 00:31:28.556385 1734 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0530 00:31:28.556437 1734 log.go:172] (0xc00035a1e0) (3) Data frame handling\nI0530 00:31:28.556455 1734 log.go:172] (0xc00035a1e0) (3) Data frame sent\nI0530 00:31:28.556487 1734 log.go:172] (0xc00003a2c0) Data frame received for 5\nI0530 00:31:28.556501 1734 log.go:172] (0xc0007520a0) (5) Data frame handling\nI0530 00:31:28.556514 1734 log.go:172] (0xc0007520a0) (5) Data frame sent\nI0530 00:31:28.556525 1734 log.go:172] (0xc00003a2c0) Data frame received for 5\nI0530 00:31:28.556534 1734 log.go:172] (0xc0007520a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32444/\nI0530 00:31:28.556584 1734 log.go:172] (0xc0007520a0) (5) Data frame sent\nI0530 00:31:28.560162 1734 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0530 00:31:28.560182 1734 log.go:172] (0xc00035a1e0) (3) Data frame handling\nI0530 00:31:28.560201 1734 log.go:172] (0xc00035a1e0) (3) Data frame sent\nI0530 00:31:28.560602 1734 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0530 00:31:28.560622 1734 log.go:172] (0xc00003a2c0) Data frame received for 5\nI0530 00:31:28.560646 1734 log.go:172] (0xc0007520a0) (5) Data frame handling\nI0530 00:31:28.560669 1734 log.go:172] (0xc0007520a0) (5) Data frame sent\nI0530 00:31:28.560680 1734 log.go:172] (0xc00003a2c0) Data frame received for 5\nI0530 00:31:28.560690 1734 log.go:172] (0xc0007520a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32444/\nI0530 00:31:28.560710 1734 log.go:172] (0xc0007520a0) (5) Data frame sent\nI0530 00:31:28.560734 1734 log.go:172] (0xc00035a1e0) (3) Data frame handling\nI0530 00:31:28.560760 1734 log.go:172] (0xc00035a1e0) (3) Data frame sent\nI0530 00:31:28.565345 1734 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0530 00:31:28.565364 1734 log.go:172] (0xc00035a1e0) (3) Data frame handling\nI0530 00:31:28.565385 1734 log.go:172] (0xc00035a1e0) (3) Data frame sent\nI0530 00:31:28.565822 1734 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0530 00:31:28.565847 1734 log.go:172] (0xc00035a1e0) (3) Data frame handling\nI0530 00:31:28.565863 1734 log.go:172] (0xc00035a1e0) (3) Data frame sent\nI0530 00:31:28.565880 1734 log.go:172] (0xc00003a2c0) Data frame received for 5\nI0530 00:31:28.565907 1734 log.go:172] (0xc0007520a0) (5) Data frame handling\nI0530 00:31:28.565923 1734 log.go:172] (0xc0007520a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32444/\nI0530 00:31:28.569980 1734 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0530 00:31:28.570003 1734 log.go:172] (0xc00035a1e0) (3) Data frame handling\nI0530 00:31:28.570021 1734 log.go:172] (0xc00035a1e0) (3) Data frame sent\nI0530 00:31:28.570659 1734 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0530 00:31:28.570674 1734 log.go:172] (0xc00035a1e0) (3) Data frame handling\nI0530 00:31:28.570768 1734 
log.go:172] (0xc00003a2c0) Data frame received for 5\nI0530 00:31:28.570787 1734 log.go:172] (0xc0007520a0) (5) Data frame handling\nI0530 00:31:28.572244 1734 log.go:172] (0xc00003a2c0) Data frame received for 1\nI0530 00:31:28.572259 1734 log.go:172] (0xc0003d0960) (1) Data frame handling\nI0530 00:31:28.572287 1734 log.go:172] (0xc0003d0960) (1) Data frame sent\nI0530 00:31:28.572303 1734 log.go:172] (0xc00003a2c0) (0xc0003d0960) Stream removed, broadcasting: 1\nI0530 00:31:28.572318 1734 log.go:172] (0xc00003a2c0) Go away received\nI0530 00:31:28.572792 1734 log.go:172] (0xc00003a2c0) (0xc0003d0960) Stream removed, broadcasting: 1\nI0530 00:31:28.572813 1734 log.go:172] (0xc00003a2c0) (0xc00035a1e0) Stream removed, broadcasting: 3\nI0530 00:31:28.572823 1734 log.go:172] (0xc00003a2c0) (0xc0007520a0) Stream removed, broadcasting: 5\n" May 30 00:31:28.580: INFO: stdout: "\naffinity-nodeport-timeout-tqvgv\naffinity-nodeport-timeout-tqvgv\naffinity-nodeport-timeout-tqvgv\naffinity-nodeport-timeout-tqvgv\naffinity-nodeport-timeout-tqvgv\naffinity-nodeport-timeout-tqvgv\naffinity-nodeport-timeout-tqvgv\naffinity-nodeport-timeout-tqvgv\naffinity-nodeport-timeout-tqvgv\naffinity-nodeport-timeout-tqvgv\naffinity-nodeport-timeout-tqvgv\naffinity-nodeport-timeout-tqvgv\naffinity-nodeport-timeout-tqvgv\naffinity-nodeport-timeout-tqvgv\naffinity-nodeport-timeout-tqvgv\naffinity-nodeport-timeout-tqvgv" May 30 00:31:28.580: INFO: Received response from host: May 30 00:31:28.580: INFO: Received response from host: affinity-nodeport-timeout-tqvgv May 30 00:31:28.580: INFO: Received response from host: affinity-nodeport-timeout-tqvgv May 30 00:31:28.580: INFO: Received response from host: affinity-nodeport-timeout-tqvgv May 30 00:31:28.580: INFO: Received response from host: affinity-nodeport-timeout-tqvgv May 30 00:31:28.580: INFO: Received response from host: affinity-nodeport-timeout-tqvgv May 30 00:31:28.580: INFO: Received response from host: affinity-nodeport-timeout-tqvgv May 30 00:31:28.580: INFO: Received response from host: affinity-nodeport-timeout-tqvgv May 30 00:31:28.580: INFO: Received response from host: affinity-nodeport-timeout-tqvgv May 30 00:31:28.580: INFO: Received response from host: affinity-nodeport-timeout-tqvgv May 30 00:31:28.580: INFO: Received response from host: affinity-nodeport-timeout-tqvgv May 30 00:31:28.580: INFO: Received response from host: affinity-nodeport-timeout-tqvgv May 30 00:31:28.580: INFO: Received response from host: affinity-nodeport-timeout-tqvgv May 30 00:31:28.580: INFO: Received response from host: affinity-nodeport-timeout-tqvgv May 30 00:31:28.580: INFO: Received response from host: affinity-nodeport-timeout-tqvgv May 30 00:31:28.580: INFO: Received response from host: affinity-nodeport-timeout-tqvgv May 30 00:31:28.580: INFO: Received response from host: affinity-nodeport-timeout-tqvgv May 30 00:31:28.580: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7301 execpod-affinitylh8gd -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:32444/' May 30 00:31:28.790: INFO: stderr: "I0530 00:31:28.701578 1756 log.go:172] (0xc000453d90) (0xc000b08460) Create stream\nI0530 00:31:28.701662 1756 log.go:172] (0xc000453d90) (0xc000b08460) Stream added, broadcasting: 1\nI0530 00:31:28.706167 1756 log.go:172] (0xc000453d90) Reply frame received for 1\nI0530 00:31:28.706216 1756 log.go:172] (0xc000453d90) (0xc0005f6780) Create stream\nI0530 00:31:28.706229 1756 
log.go:172] (0xc000453d90) (0xc0005f6780) Stream added, broadcasting: 3\nI0530 00:31:28.707169 1756 log.go:172] (0xc000453d90) Reply frame received for 3\nI0530 00:31:28.707212 1756 log.go:172] (0xc000453d90) (0xc0005f70e0) Create stream\nI0530 00:31:28.707222 1756 log.go:172] (0xc000453d90) (0xc0005f70e0) Stream added, broadcasting: 5\nI0530 00:31:28.708080 1756 log.go:172] (0xc000453d90) Reply frame received for 5\nI0530 00:31:28.779730 1756 log.go:172] (0xc000453d90) Data frame received for 5\nI0530 00:31:28.779763 1756 log.go:172] (0xc0005f70e0) (5) Data frame handling\nI0530 00:31:28.779884 1756 log.go:172] (0xc0005f70e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32444/\nI0530 00:31:28.781928 1756 log.go:172] (0xc000453d90) Data frame received for 3\nI0530 00:31:28.781951 1756 log.go:172] (0xc0005f6780) (3) Data frame handling\nI0530 00:31:28.781978 1756 log.go:172] (0xc0005f6780) (3) Data frame sent\nI0530 00:31:28.782514 1756 log.go:172] (0xc000453d90) Data frame received for 3\nI0530 00:31:28.782540 1756 log.go:172] (0xc0005f6780) (3) Data frame handling\nI0530 00:31:28.782725 1756 log.go:172] (0xc000453d90) Data frame received for 5\nI0530 00:31:28.782754 1756 log.go:172] (0xc0005f70e0) (5) Data frame handling\nI0530 00:31:28.784465 1756 log.go:172] (0xc000453d90) Data frame received for 1\nI0530 00:31:28.784500 1756 log.go:172] (0xc000b08460) (1) Data frame handling\nI0530 00:31:28.784522 1756 log.go:172] (0xc000b08460) (1) Data frame sent\nI0530 00:31:28.784545 1756 log.go:172] (0xc000453d90) (0xc000b08460) Stream removed, broadcasting: 1\nI0530 00:31:28.784580 1756 log.go:172] (0xc000453d90) Go away received\nI0530 00:31:28.784907 1756 log.go:172] (0xc000453d90) (0xc000b08460) Stream removed, broadcasting: 1\nI0530 00:31:28.784926 1756 log.go:172] (0xc000453d90) (0xc0005f6780) Stream removed, broadcasting: 3\nI0530 00:31:28.784933 1756 log.go:172] (0xc000453d90) (0xc0005f70e0) Stream removed, broadcasting: 5\n" May 30 00:31:28.790: INFO: stdout: "affinity-nodeport-timeout-tqvgv" May 30 00:31:43.790: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7301 execpod-affinitylh8gd -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:32444/' May 30 00:31:44.023: INFO: stderr: "I0530 00:31:43.930196 1776 log.go:172] (0xc00003a790) (0xc000414d20) Create stream\nI0530 00:31:43.930253 1776 log.go:172] (0xc00003a790) (0xc000414d20) Stream added, broadcasting: 1\nI0530 00:31:43.933705 1776 log.go:172] (0xc00003a790) Reply frame received for 1\nI0530 00:31:43.933751 1776 log.go:172] (0xc00003a790) (0xc00015df40) Create stream\nI0530 00:31:43.933771 1776 log.go:172] (0xc00003a790) (0xc00015df40) Stream added, broadcasting: 3\nI0530 00:31:43.934755 1776 log.go:172] (0xc00003a790) Reply frame received for 3\nI0530 00:31:43.934784 1776 log.go:172] (0xc00003a790) (0xc000489040) Create stream\nI0530 00:31:43.934797 1776 log.go:172] (0xc00003a790) (0xc000489040) Stream added, broadcasting: 5\nI0530 00:31:43.935885 1776 log.go:172] (0xc00003a790) Reply frame received for 5\nI0530 00:31:44.008128 1776 log.go:172] (0xc00003a790) Data frame received for 5\nI0530 00:31:44.008154 1776 log.go:172] (0xc000489040) (5) Data frame handling\nI0530 00:31:44.008171 1776 log.go:172] (0xc000489040) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32444/\nI0530 00:31:44.014431 1776 log.go:172] (0xc00003a790) Data frame received for 3\nI0530 00:31:44.014452 1776 
log.go:172] (0xc00015df40) (3) Data frame handling\nI0530 00:31:44.014588 1776 log.go:172] (0xc00015df40) (3) Data frame sent\nI0530 00:31:44.015322 1776 log.go:172] (0xc00003a790) Data frame received for 5\nI0530 00:31:44.015355 1776 log.go:172] (0xc000489040) (5) Data frame handling\nI0530 00:31:44.015394 1776 log.go:172] (0xc00003a790) Data frame received for 3\nI0530 00:31:44.015435 1776 log.go:172] (0xc00015df40) (3) Data frame handling\nI0530 00:31:44.017338 1776 log.go:172] (0xc00003a790) Data frame received for 1\nI0530 00:31:44.017369 1776 log.go:172] (0xc000414d20) (1) Data frame handling\nI0530 00:31:44.017391 1776 log.go:172] (0xc000414d20) (1) Data frame sent\nI0530 00:31:44.017406 1776 log.go:172] (0xc00003a790) (0xc000414d20) Stream removed, broadcasting: 1\nI0530 00:31:44.017559 1776 log.go:172] (0xc00003a790) Go away received\nI0530 00:31:44.017684 1776 log.go:172] (0xc00003a790) (0xc000414d20) Stream removed, broadcasting: 1\nI0530 00:31:44.017702 1776 log.go:172] (0xc00003a790) (0xc00015df40) Stream removed, broadcasting: 3\nI0530 00:31:44.017714 1776 log.go:172] (0xc00003a790) (0xc000489040) Stream removed, broadcasting: 5\n" May 30 00:31:44.023: INFO: stdout: "affinity-nodeport-timeout-rvrtl" May 30 00:31:44.023: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-7301, will wait for the garbage collector to delete the pods May 30 00:31:44.163: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 5.439473ms May 30 00:31:44.664: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 500.276072ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:31:49.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7301" for this suite. 
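To unpack the long exchange above: the suite first probes the kube-proxy mode (stdout: "iptables"), since timeout-based session affinity is only implemented by the iptables and ipvs proxiers; it then sends 16 requests that all land on one backend (affinity-nodeport-timeout-tqvgv), waits about 15 seconds, and observes a different backend (affinity-nodeport-timeout-rvrtl), confirming the affinity expired. That behaviour is configured entirely on the Service. A sketch follows; the selector, ports, and 10-second timeout are illustrative, as none of them appear in the log.

    // affinity_service.go: NodePort Service with ClientIP session
    // affinity and an affinity timeout, as exercised by the test above.
    // Illustrative sketch only.
    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func affinityService() *corev1.Service {
        timeout := int32(10) // hypothetical; valid range is 1..86400 seconds
        return &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport-timeout"},
            Spec: corev1.ServiceSpec{
                Type:     corev1.ServiceTypeNodePort,
                Selector: map[string]string{"name": "affinity-nodeport-timeout"}, // hypothetical labels
                Ports: []corev1.ServicePort{{
                    Port:       80,
                    TargetPort: intstr.FromInt(9376), // hypothetical backend port
                }},
                // Pin each client IP to one backend until the timeout elapses.
                SessionAffinity: corev1.ServiceAffinityClientIP,
                SessionAffinityConfig: &corev1.SessionAffinityConfig{
                    ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeout},
                },
            },
        }
    }

    func main() { _ = affinityService() }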
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
• [SLOW TEST:50.184 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":150,"skipped":2383,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 30 00:31:49.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 30 00:31:49.993: INFO: Waiting up to 5m0s for pod "downwardapi-volume-267d5991-93be-4131-8576-2bc5fde4d3a1" in namespace "downward-api-412" to be "Succeeded or Failed"
May 30 00:31:50.007: INFO: Pod "downwardapi-volume-267d5991-93be-4131-8576-2bc5fde4d3a1": Phase="Pending", Reason="", readiness=false. Elapsed: 13.673003ms
May 30 00:31:52.011: INFO: Pod "downwardapi-volume-267d5991-93be-4131-8576-2bc5fde4d3a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01831185s
May 30 00:31:54.015: INFO: Pod "downwardapi-volume-267d5991-93be-4131-8576-2bc5fde4d3a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022469309s
STEP: Saw pod success
May 30 00:31:54.015: INFO: Pod "downwardapi-volume-267d5991-93be-4131-8576-2bc5fde4d3a1" satisfied condition "Succeeded or Failed"
May 30 00:31:54.019: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-267d5991-93be-4131-8576-2bc5fde4d3a1 container client-container: 
STEP: delete the pod
May 30 00:31:54.094: INFO: Waiting for pod downwardapi-volume-267d5991-93be-4131-8576-2bc5fde4d3a1 to disappear
May 30 00:31:54.100: INFO: Pod downwardapi-volume-267d5991-93be-4131-8576-2bc5fde4d3a1 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 30 00:31:54.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-412" for this suite.
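The downward API volume test above mounts a file whose contents are the container's own memory request; the pod's only job is to print that file and exit Succeeded. A sketch of the spec follows, assuming a hypothetical 32Mi request and /etc/podinfo mount path (the log shows neither).

    // downward_memory.go: downward API volume exposing the container's
    // memory request as a file, as in the test above. Illustrative sketch.
    package main

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func downwardAPIPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
                    Resources: corev1.ResourceRequirements{
                        Requests: corev1.ResourceList{
                            corev1.ResourceMemory: resource.MustParse("32Mi"), // hypothetical
                        },
                    },
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "memory_request",
                                // resourceFieldRef resolves the container's own request.
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    ContainerName: "client-container",
                                    Resource:      "requests.memory",
                                },
                            }},
                        },
                    },
                }},
            },
        }
    }

    func main() { _ = downwardAPIPod() }

With the default divisor of "1", the mounted file holds the request in plain bytes (33554432 for 32Mi).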
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":151,"skipped":2387,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 30 00:31:54.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 30 00:31:58.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2260" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":152,"skipped":2400,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 30 00:31:58.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 30 00:31:58.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4205" for this suite.
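Of the two quick tests above, the Kubelet one hinges on a single securityContext field: with a read-only root filesystem, the container can only write into explicitly mounted volumes, so any write attempt against / must fail. A sketch follows; the names, image, and command are illustrative, since the log omits the pod spec.

    // readonly_rootfs.go: container with a read-only root filesystem,
    // as exercised by the Kubelet test above. Illustrative sketch.
    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func readOnlyPod() *corev1.Pod {
        readOnly := true
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:  "busybox",
                    Image: "busybox",
                    // Writing to / should fail; only mounted volumes are writable.
                    Command: []string{"sh", "-c", "touch /file && echo writable || echo read-only"},
                    SecurityContext: &corev1.SecurityContext{
                        ReadOnlyRootFilesystem: &readOnly,
                    },
                }},
            },
        }
    }

    func main() { _ = readOnlyPod() }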
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
•{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":288,"completed":153,"skipped":2428,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 30 00:31:58.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
May 30 00:31:58.461: INFO: Waiting up to 5m0s for pod "downward-api-65246b98-0ebc-4ca3-b8fd-c5de837998b2" in namespace "downward-api-839" to be "Succeeded or Failed"
May 30 00:31:58.476: INFO: Pod "downward-api-65246b98-0ebc-4ca3-b8fd-c5de837998b2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.606504ms
May 30 00:32:00.482: INFO: Pod "downward-api-65246b98-0ebc-4ca3-b8fd-c5de837998b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02022745s
May 30 00:32:02.486: INFO: Pod "downward-api-65246b98-0ebc-4ca3-b8fd-c5de837998b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024877762s
STEP: Saw pod success
May 30 00:32:02.486: INFO: Pod "downward-api-65246b98-0ebc-4ca3-b8fd-c5de837998b2" satisfied condition "Succeeded or Failed"
May 30 00:32:02.489: INFO: Trying to get logs from node latest-worker2 pod downward-api-65246b98-0ebc-4ca3-b8fd-c5de837998b2 container dapi-container: 
STEP: delete the pod
May 30 00:32:02.596: INFO: Waiting for pod downward-api-65246b98-0ebc-4ca3-b8fd-c5de837998b2 to disappear
May 30 00:32:02.624: INFO: Pod downward-api-65246b98-0ebc-4ca3-b8fd-c5de837998b2 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 30 00:32:02.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-839" for this suite.
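The pod-UID test above uses the downward API in its env-var form rather than as a volume: a fieldRef on metadata.uid is resolved by the kubelet when the container starts. A sketch follows, with the container name, image, and command again illustrative.

    // downward_uid.go: expose the pod's own UID as an env var via a
    // downward API fieldRef, as in the test above. Illustrative sketch.
    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func podUIDPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "dapi-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "echo POD_UID=$POD_UID"},
                    Env: []corev1.EnvVar{{
                        Name: "POD_UID",
                        ValueFrom: &corev1.EnvVarSource{
                            // metadata.uid is one of the supported fieldRef paths.
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
                        },
                    }},
                }},
            },
        }
    }

    func main() { _ = podUIDPod() }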
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":288,"completed":154,"skipped":2459,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:32:02.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-9964 STEP: creating service affinity-clusterip in namespace services-9964 STEP: creating replication controller affinity-clusterip in namespace services-9964 I0530 00:32:02.876320 7 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-9964, replica count: 3 I0530 00:32:05.926851 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0530 00:32:08.927077 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 30 00:32:08.933: INFO: Creating new exec pod May 30 00:32:13.959: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9964 execpod-affinitytvbbn -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' May 30 00:32:14.202: INFO: stderr: "I0530 00:32:14.092862 1799 log.go:172] (0xc0009c5340) (0xc00066c5a0) Create stream\nI0530 00:32:14.092923 1799 log.go:172] (0xc0009c5340) (0xc00066c5a0) Stream added, broadcasting: 1\nI0530 00:32:14.097624 1799 log.go:172] (0xc0009c5340) Reply frame received for 1\nI0530 00:32:14.097657 1799 log.go:172] (0xc0009c5340) (0xc0006214a0) Create stream\nI0530 00:32:14.097666 1799 log.go:172] (0xc0009c5340) (0xc0006214a0) Stream added, broadcasting: 3\nI0530 00:32:14.098660 1799 log.go:172] (0xc0009c5340) Reply frame received for 3\nI0530 00:32:14.098703 1799 log.go:172] (0xc0009c5340) (0xc0005721e0) Create stream\nI0530 00:32:14.098721 1799 log.go:172] (0xc0009c5340) (0xc0005721e0) Stream added, broadcasting: 5\nI0530 00:32:14.099579 1799 log.go:172] (0xc0009c5340) Reply frame received for 5\nI0530 00:32:14.194683 1799 log.go:172] (0xc0009c5340) Data frame received for 5\nI0530 00:32:14.194712 1799 log.go:172] (0xc0005721e0) (5) Data frame handling\nI0530 00:32:14.194730 1799 log.go:172] (0xc0005721e0) (5) Data frame sent\nI0530 00:32:14.194738 1799 log.go:172] (0xc0009c5340) Data frame received for 5\nI0530 00:32:14.194745 1799 log.go:172] (0xc0005721e0) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0530 00:32:14.194773 1799 log.go:172] (0xc0005721e0) (5) Data frame sent\nI0530 00:32:14.194868 1799 
log.go:172] (0xc0009c5340) Data frame received for 5\nI0530 00:32:14.194961 1799 log.go:172] (0xc0005721e0) (5) Data frame handling\nI0530 00:32:14.195035 1799 log.go:172] (0xc0009c5340) Data frame received for 3\nI0530 00:32:14.195140 1799 log.go:172] (0xc0006214a0) (3) Data frame handling\nI0530 00:32:14.196593 1799 log.go:172] (0xc0009c5340) Data frame received for 1\nI0530 00:32:14.196615 1799 log.go:172] (0xc00066c5a0) (1) Data frame handling\nI0530 00:32:14.196635 1799 log.go:172] (0xc00066c5a0) (1) Data frame sent\nI0530 00:32:14.196648 1799 log.go:172] (0xc0009c5340) (0xc00066c5a0) Stream removed, broadcasting: 1\nI0530 00:32:14.196779 1799 log.go:172] (0xc0009c5340) Go away received\nI0530 00:32:14.196959 1799 log.go:172] (0xc0009c5340) (0xc00066c5a0) Stream removed, broadcasting: 1\nI0530 00:32:14.196974 1799 log.go:172] (0xc0009c5340) (0xc0006214a0) Stream removed, broadcasting: 3\nI0530 00:32:14.196984 1799 log.go:172] (0xc0009c5340) (0xc0005721e0) Stream removed, broadcasting: 5\n" May 30 00:32:14.202: INFO: stdout: "" May 30 00:32:14.202: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9964 execpod-affinitytvbbn -- /bin/sh -x -c nc -zv -t -w 2 10.97.38.59 80' May 30 00:32:14.414: INFO: stderr: "I0530 00:32:14.338599 1819 log.go:172] (0xc0005b6fd0) (0xc000c086e0) Create stream\nI0530 00:32:14.338656 1819 log.go:172] (0xc0005b6fd0) (0xc000c086e0) Stream added, broadcasting: 1\nI0530 00:32:14.342902 1819 log.go:172] (0xc0005b6fd0) Reply frame received for 1\nI0530 00:32:14.342953 1819 log.go:172] (0xc0005b6fd0) (0xc0005ccd20) Create stream\nI0530 00:32:14.342981 1819 log.go:172] (0xc0005b6fd0) (0xc0005ccd20) Stream added, broadcasting: 3\nI0530 00:32:14.344080 1819 log.go:172] (0xc0005b6fd0) Reply frame received for 3\nI0530 00:32:14.344135 1819 log.go:172] (0xc0005b6fd0) (0xc000380f00) Create stream\nI0530 00:32:14.344163 1819 log.go:172] (0xc0005b6fd0) (0xc000380f00) Stream added, broadcasting: 5\nI0530 00:32:14.345014 1819 log.go:172] (0xc0005b6fd0) Reply frame received for 5\nI0530 00:32:14.408040 1819 log.go:172] (0xc0005b6fd0) Data frame received for 5\nI0530 00:32:14.408077 1819 log.go:172] (0xc000380f00) (5) Data frame handling\nI0530 00:32:14.408091 1819 log.go:172] (0xc000380f00) (5) Data frame sent\nI0530 00:32:14.408106 1819 log.go:172] (0xc0005b6fd0) Data frame received for 5\nI0530 00:32:14.408128 1819 log.go:172] (0xc000380f00) (5) Data frame handling\n+ nc -zv -t -w 2 10.97.38.59 80\nConnection to 10.97.38.59 80 port [tcp/http] succeeded!\nI0530 00:32:14.408172 1819 log.go:172] (0xc0005b6fd0) Data frame received for 3\nI0530 00:32:14.408187 1819 log.go:172] (0xc0005ccd20) (3) Data frame handling\nI0530 00:32:14.409905 1819 log.go:172] (0xc0005b6fd0) Data frame received for 1\nI0530 00:32:14.409938 1819 log.go:172] (0xc000c086e0) (1) Data frame handling\nI0530 00:32:14.409961 1819 log.go:172] (0xc000c086e0) (1) Data frame sent\nI0530 00:32:14.409984 1819 log.go:172] (0xc0005b6fd0) (0xc000c086e0) Stream removed, broadcasting: 1\nI0530 00:32:14.410025 1819 log.go:172] (0xc0005b6fd0) Go away received\nI0530 00:32:14.410390 1819 log.go:172] (0xc0005b6fd0) (0xc000c086e0) Stream removed, broadcasting: 1\nI0530 00:32:14.410405 1819 log.go:172] (0xc0005b6fd0) (0xc0005ccd20) Stream removed, broadcasting: 3\nI0530 00:32:14.410413 1819 log.go:172] (0xc0005b6fd0) (0xc000380f00) Stream removed, broadcasting: 5\n" May 30 00:32:14.414: INFO: stdout: "" May 30 00:32:14.414: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9964 execpod-affinitytvbbn -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.97.38.59:80/ ; done' May 30 00:32:14.747: INFO: stderr: "I0530 00:32:14.562754 1841 log.go:172] (0xc0009b9340) (0xc00090e5a0) Create stream\nI0530 00:32:14.562837 1841 log.go:172] (0xc0009b9340) (0xc00090e5a0) Stream added, broadcasting: 1\nI0530 00:32:14.568740 1841 log.go:172] (0xc0009b9340) Reply frame received for 1\nI0530 00:32:14.568778 1841 log.go:172] (0xc0009b9340) (0xc000516dc0) Create stream\nI0530 00:32:14.568789 1841 log.go:172] (0xc0009b9340) (0xc000516dc0) Stream added, broadcasting: 3\nI0530 00:32:14.570046 1841 log.go:172] (0xc0009b9340) Reply frame received for 3\nI0530 00:32:14.570090 1841 log.go:172] (0xc0009b9340) (0xc00024c280) Create stream\nI0530 00:32:14.570101 1841 log.go:172] (0xc0009b9340) (0xc00024c280) Stream added, broadcasting: 5\nI0530 00:32:14.570771 1841 log.go:172] (0xc0009b9340) Reply frame received for 5\nI0530 00:32:14.636032 1841 log.go:172] (0xc0009b9340) Data frame received for 5\nI0530 00:32:14.636089 1841 log.go:172] (0xc00024c280) (5) Data frame handling\nI0530 00:32:14.636140 1841 log.go:172] (0xc00024c280) (5) Data frame sent\n+ seq 0 15\nI0530 00:32:14.645900 1841 log.go:172] (0xc0009b9340) Data frame received for 5\nI0530 00:32:14.645933 1841 log.go:172] (0xc00024c280) (5) Data frame handling\nI0530 00:32:14.645944 1841 log.go:172] (0xc00024c280) (5) Data frame sent\nI0530 00:32:14.645951 1841 log.go:172] (0xc0009b9340) Data frame received for 5\nI0530 00:32:14.645960 1841 log.go:172] (0xc00024c280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.38.59:80/\nI0530 00:32:14.645979 1841 log.go:172] (0xc00024c280) (5) Data frame sent\nI0530 00:32:14.645991 1841 log.go:172] (0xc0009b9340) Data frame received for 3\nI0530 00:32:14.645999 1841 log.go:172] (0xc000516dc0) (3) Data frame handling\nI0530 00:32:14.646013 1841 log.go:172] (0xc000516dc0) (3) Data frame sent\nI0530 00:32:14.651307 1841 log.go:172] (0xc0009b9340) Data frame received for 3\nI0530 00:32:14.651341 1841 log.go:172] (0xc000516dc0) (3) Data frame handling\nI0530 00:32:14.651367 1841 log.go:172] (0xc000516dc0) (3) Data frame sent\nI0530 00:32:14.651921 1841 log.go:172] (0xc0009b9340) Data frame received for 3\nI0530 00:32:14.651943 1841 log.go:172] (0xc000516dc0) (3) Data frame handling\nI0530 00:32:14.651955 1841 log.go:172] (0xc000516dc0) (3) Data frame sent\nI0530 00:32:14.651973 1841 log.go:172] (0xc0009b9340) Data frame received for 5\nI0530 00:32:14.651983 1841 log.go:172] (0xc00024c280) (5) Data frame handling\nI0530 00:32:14.651993 1841 log.go:172] (0xc00024c280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.38.59:80/\nI0530 00:32:14.658915 1841 log.go:172] (0xc0009b9340) Data frame received for 3\nI0530 00:32:14.658931 1841 log.go:172] (0xc000516dc0) (3) Data frame handling\nI0530 00:32:14.658947 1841 log.go:172] (0xc000516dc0) (3) Data frame sent\nI0530 00:32:14.659520 1841 log.go:172] (0xc0009b9340) Data frame received for 3\nI0530 00:32:14.659532 1841 log.go:172] (0xc000516dc0) (3) Data frame handling\nI0530 00:32:14.659539 1841 log.go:172] (0xc000516dc0) (3) Data frame sent\nI0530 00:32:14.659567 1841 log.go:172] (0xc0009b9340) Data frame received for 5\nI0530 00:32:14.659599 1841 log.go:172] (0xc00024c280) (5) Data frame handling\nI0530 00:32:14.659619 1841 log.go:172] 
(0xc00024c280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.38.59:80/\nI0530 00:32:14.664499 1841 log.go:172] (0xc0009b9340) Data frame received for 3\nI0530 00:32:14.664512 1841 log.go:172] (0xc000516dc0) (3) Data frame handling\nI0530 00:32:14.664519 1841 log.go:172] (0xc000516dc0) (3) Data frame sent\nI0530 00:32:14.665061 1841 log.go:172] (0xc0009b9340) Data frame received for 5\nI0530 00:32:14.665106 1841 log.go:172] (0xc00024c280) (5) Data frame handling\nI0530 00:32:14.665317 1841 log.go:172] (0xc00024c280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.38.59:80/\nI0530 00:32:14.665334 1841 log.go:172] (0xc0009b9340) Data frame received for 3\nI0530 00:32:14.665344 1841 log.go:172] (0xc000516dc0) (3) Data frame handling\nI0530 00:32:14.665352 1841 log.go:172] (0xc000516dc0) (3) Data frame sent\nI0530 00:32:14.672076 1841 log.go:172] (0xc0009b9340) Data frame received for 3\nI0530 00:32:14.672098 1841 log.go:172] (0xc000516dc0) (3) Data frame handling\nI0530 00:32:14.672116 1841 log.go:172] (0xc000516dc0) (3) Data frame sent\nI0530 00:32:14.672596 1841 log.go:172] (0xc0009b9340) Data frame received for 5\nI0530 00:32:14.672619 1841 log.go:172] (0xc00024c280) (5) Data frame handling\nI0530 00:32:14.672642 1841 log.go:172] (0xc00024c280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.38.59:80/\nI0530 00:32:14.672663 1841 log.go:172] (0xc0009b9340) Data frame received for 3\nI0530 00:32:14.672683 1841 log.go:172] (0xc000516dc0) (3) Data frame handling\nI0530 00:32:14.672710 1841 log.go:172] (0xc000516dc0) (3) Data frame sent\nI0530 00:32:14.677962 1841 log.go:172] (0xc0009b9340) Data frame received for 3\nI0530 00:32:14.677998 1841 log.go:172] (0xc000516dc0) (3) Data frame handling\nI0530 00:32:14.678036 1841 log.go:172] (0xc000516dc0) (3) Data frame sent\nI0530 00:32:14.678487 1841 log.go:172] (0xc0009b9340) Data frame received for 5\nI0530 00:32:14.678560 1841 log.go:172] (0xc00024c280) (5) Data frame handling\nI0530 00:32:14.678580 1841 log.go:172] (0xc00024c280) (5) Data frame sent\nI0530 00:32:14.678591 1841 log.go:172] (0xc0009b9340) Data frame received for 5\nI0530 00:32:14.678599 1841 log.go:172] (0xc00024c280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.38.59:80/\nI0530 00:32:14.678631 1841 log.go:172] (0xc0009b9340) Data frame received for 3\nI0530 00:32:14.678739 1841 log.go:172] (0xc00024c280) (5) Data frame sent\nI0530 00:32:14.678787 1841 log.go:172] (0xc000516dc0) (3) Data frame handling\nI0530 00:32:14.678822 1841 log.go:172] (0xc000516dc0) (3) Data frame sent\nI0530 00:32:14.682643 1841 log.go:172] (0xc0009b9340) Data frame received for 3\nI0530 00:32:14.682657 1841 log.go:172] (0xc000516dc0) (3) Data frame handling\nI0530 00:32:14.682664 1841 log.go:172] (0xc000516dc0) (3) Data frame sent\nI0530 00:32:14.687247 1841 log.go:172] (0xc0009b9340) Data frame received for 3\nI0530 00:32:14.687270 1841 log.go:172] (0xc000516dc0) (3) Data frame handling\nI0530 00:32:14.687280 1841 log.go:172] (0xc000516dc0) (3) Data frame sent\nI0530 00:32:14.687299 1841 log.go:172] (0xc0009b9340) Data frame received for 5\nI0530 00:32:14.687307 1841 log.go:172] (0xc00024c280) (5) Data frame handling\nI0530 00:32:14.687316 1841 log.go:172] (0xc00024c280) (5) Data frame sent\nI0530 00:32:14.687324 1841 log.go:172] (0xc0009b9340) Data frame received for 5\nI0530 00:32:14.687331 1841 log.go:172] (0xc00024c280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.97.38.59:80/\nI0530 00:32:14.687356 1841 log.go:172] (0xc00024c280) (5) Data frame sent\nI0530 00:32:14.692841 1841 log.go:172] (0xc0009b9340) Data frame received for 3\nI0530 00:32:14.692855 1841 log.go:172] (0xc000516dc0) (3) Data frame handling\nI0530 00:32:14.692870 1841 log.go:172] (0xc000516dc0) (3) Data frame sent\nI0530 00:32:14.693626 1841 log.go:172] (0xc0009b9340) Data frame received for 5\nI0530 00:32:14.693645 1841 log.go:172] (0xc00024c280) (5) Data frame handling\nI0530 00:32:14.693655 1841 log.go:172] (0xc00024c280) (5) Data frame sent\nI0530 00:32:14.693665 1841 log.go:172] (0xc0009b9340) Data frame received for 5\nI0530 00:32:14.693669 1841 log.go:172] (0xc00024c280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.38.59:80/\nI0530 00:32:14.693682 1841 log.go:172] (0xc00024c280) (5) Data frame sent\nI0530 00:32:14.693696 1841 log.go:172] (0xc0009b9340) Data frame received for 3\nI0530 00:32:14.693718 1841 log.go:172] (0xc000516dc0) (3) Data frame handling\nI0530 00:32:14.693739 1841 log.go:172] (0xc000516dc0) (3) Data frame sent\nI0530 00:32:14.697539 1841 log.go:172] (0xc0009b9340) Data frame received for 3\nI0530 00:32:14.697549 1841 log.go:172] (0xc000516dc0) (3) Data frame handling\nI0530 00:32:14.697559 1841 log.go:172] (0xc000516dc0) (3) Data frame sent\nI0530 00:32:14.698277 1841 log.go:172] (0xc0009b9340) Data frame received for 5\nI0530 00:32:14.698295 1841 log.go:172] (0xc00024c280) (5) Data frame handling\nI0530 00:32:14.698302 1841 log.go:172] (0xc00024c280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.38.59:80/\nI0530 00:32:14.698310 1841 log.go:172] (0xc0009b9340) Data frame received for 3\nI0530 00:32:14.698319 1841 log.go:172] (0xc000516dc0) (3) Data frame handling\nI0530 00:32:14.698324 1841 log.go:172] (0xc000516dc0) (3) Data frame sent\nI0530 00:32:14.702115 1841 log.go:172] (0xc0009b9340) Data frame received for 3\nI0530 00:32:14.702128 1841 log.go:172] (0xc000516dc0) (3) Data frame handling\nI0530 00:32:14.702134 1841 log.go:172] (0xc000516dc0) (3) Data frame sent\nI0530 00:32:14.702645 1841 log.go:172] (0xc0009b9340) Data frame received for 3\nI0530 00:32:14.702655 1841 log.go:172] (0xc000516dc0) (3) Data frame handling\nI0530 00:32:14.702662 1841 log.go:172] (0xc000516dc0) (3) Data frame sent\nI0530 00:32:14.702673 1841 log.go:172] (0xc0009b9340) Data frame received for 5\nI0530 00:32:14.702680 1841 log.go:172] (0xc00024c280) (5) Data frame handling\nI0530 00:32:14.702685 1841 log.go:172] (0xc00024c280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.38.59:80/\nI0530 00:32:14.706721 1841 log.go:172] (0xc0009b9340) Data frame received for 3\nI0530 00:32:14.706737 1841 log.go:172] (0xc000516dc0) (3) Data frame handling\nI0530 00:32:14.706747 1841 log.go:172] (0xc000516dc0) (3) Data frame sent\nI0530 00:32:14.707226 1841 log.go:172] (0xc0009b9340) Data frame received for 3\nI0530 00:32:14.707246 1841 log.go:172] (0xc0009b9340) Data frame received for 5\nI0530 00:32:14.707262 1841 log.go:172] (0xc00024c280) (5) Data frame handling\nI0530 00:32:14.707271 1841 log.go:172] (0xc00024c280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.38.59:80/\nI0530 00:32:14.707280 1841 log.go:172] (0xc000516dc0) (3) Data frame handling\nI0530 00:32:14.707289 1841 log.go:172] (0xc000516dc0) (3) Data frame sent\nI0530 00:32:14.711408 1841 log.go:172] (0xc0009b9340) Data frame received for 3\nI0530 00:32:14.711426 1841 log.go:172] (0xc000516dc0) (3) 
Data frame handling\nI0530 00:32:14.711445 1841 log.go:172] (0xc000516dc0) (3) Data frame sent\nI0530 00:32:14.711780 1841 log.go:172] (0xc0009b9340) Data frame received for 3\nI0530 00:32:14.711809 1841 log.go:172] (0xc000516dc0) (3) Data frame handling\nI0530 00:32:14.711825 1841 log.go:172] (0xc000516dc0) (3) Data frame sent\nI0530 00:32:14.711846 1841 log.go:172] (0xc0009b9340) Data frame received for 5\nI0530 00:32:14.711860 1841 log.go:172] (0xc00024c280) (5) Data frame handling\nI0530 00:32:14.711881 1841 log.go:172] (0xc00024c280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.38.59:80/\nI0530 00:32:14.716088 1841 log.go:172] (0xc0009b9340) Data frame received for 3\nI0530 00:32:14.716106 1841 log.go:172] (0xc000516dc0) (3) Data frame handling\nI0530 00:32:14.716122 1841 log.go:172] (0xc000516dc0) (3) Data frame sent\nI0530 00:32:14.716716 1841 log.go:172] (0xc0009b9340) Data frame received for 3\nI0530 00:32:14.716730 1841 log.go:172] (0xc000516dc0) (3) Data frame handling\nI0530 00:32:14.716740 1841 log.go:172] (0xc000516dc0) (3) Data frame sent\nI0530 00:32:14.716758 1841 log.go:172] (0xc0009b9340) Data frame received for 5\nI0530 00:32:14.716780 1841 log.go:172] (0xc00024c280) (5) Data frame handling\nI0530 00:32:14.716793 1841 log.go:172] (0xc00024c280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.38.59:80/\nI0530 00:32:14.722125 1841 log.go:172] (0xc0009b9340) Data frame received for 3\nI0530 00:32:14.722140 1841 log.go:172] (0xc000516dc0) (3) Data frame handling\nI0530 00:32:14.722153 1841 log.go:172] (0xc000516dc0) (3) Data frame sent\nI0530 00:32:14.722636 1841 log.go:172] (0xc0009b9340) Data frame received for 5\nI0530 00:32:14.722664 1841 log.go:172] (0xc00024c280) (5) Data frame handling\nI0530 00:32:14.722677 1841 log.go:172] (0xc00024c280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.38.59:80/\nI0530 00:32:14.722697 1841 log.go:172] (0xc0009b9340) Data frame received for 3\nI0530 00:32:14.722711 1841 log.go:172] (0xc000516dc0) (3) Data frame handling\nI0530 00:32:14.722732 1841 log.go:172] (0xc000516dc0) (3) Data frame sent\nI0530 00:32:14.726935 1841 log.go:172] (0xc0009b9340) Data frame received for 3\nI0530 00:32:14.726956 1841 log.go:172] (0xc000516dc0) (3) Data frame handling\nI0530 00:32:14.726986 1841 log.go:172] (0xc000516dc0) (3) Data frame sent\nI0530 00:32:14.727360 1841 log.go:172] (0xc0009b9340) Data frame received for 5\nI0530 00:32:14.727376 1841 log.go:172] (0xc00024c280) (5) Data frame handling\nI0530 00:32:14.727396 1841 log.go:172] (0xc00024c280) (5) Data frame sent\nI0530 00:32:14.727421 1841 log.go:172] (0xc0009b9340) Data frame received for 3\nI0530 00:32:14.727435 1841 log.go:172] (0xc000516dc0) (3) Data frame handling\nI0530 00:32:14.727450 1841 log.go:172] (0xc000516dc0) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.38.59:80/\nI0530 00:32:14.733765 1841 log.go:172] (0xc0009b9340) Data frame received for 3\nI0530 00:32:14.733784 1841 log.go:172] (0xc000516dc0) (3) Data frame handling\nI0530 00:32:14.733796 1841 log.go:172] (0xc000516dc0) (3) Data frame sent\nI0530 00:32:14.734330 1841 log.go:172] (0xc0009b9340) Data frame received for 5\nI0530 00:32:14.734362 1841 log.go:172] (0xc00024c280) (5) Data frame handling\nI0530 00:32:14.734378 1841 log.go:172] (0xc00024c280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.38.59:80/\nI0530 00:32:14.734391 1841 log.go:172] (0xc0009b9340) Data frame received for 
3\nI0530 00:32:14.734406 1841 log.go:172] (0xc000516dc0) (3) Data frame handling\nI0530 00:32:14.734420 1841 log.go:172] (0xc000516dc0) (3) Data frame sent\nI0530 00:32:14.738785 1841 log.go:172] (0xc0009b9340) Data frame received for 3\nI0530 00:32:14.738812 1841 log.go:172] (0xc000516dc0) (3) Data frame handling\nI0530 00:32:14.738833 1841 log.go:172] (0xc000516dc0) (3) Data frame sent\nI0530 00:32:14.739448 1841 log.go:172] (0xc0009b9340) Data frame received for 5\nI0530 00:32:14.739506 1841 log.go:172] (0xc00024c280) (5) Data frame handling\nI0530 00:32:14.739539 1841 log.go:172] (0xc0009b9340) Data frame received for 3\nI0530 00:32:14.739558 1841 log.go:172] (0xc000516dc0) (3) Data frame handling\nI0530 00:32:14.741533 1841 log.go:172] (0xc0009b9340) Data frame received for 1\nI0530 00:32:14.741554 1841 log.go:172] (0xc00090e5a0) (1) Data frame handling\nI0530 00:32:14.741563 1841 log.go:172] (0xc00090e5a0) (1) Data frame sent\nI0530 00:32:14.741575 1841 log.go:172] (0xc0009b9340) (0xc00090e5a0) Stream removed, broadcasting: 1\nI0530 00:32:14.741734 1841 log.go:172] (0xc0009b9340) Go away received\nI0530 00:32:14.741892 1841 log.go:172] (0xc0009b9340) (0xc00090e5a0) Stream removed, broadcasting: 1\nI0530 00:32:14.741906 1841 log.go:172] (0xc0009b9340) (0xc000516dc0) Stream removed, broadcasting: 3\nI0530 00:32:14.741912 1841 log.go:172] (0xc0009b9340) (0xc00024c280) Stream removed, broadcasting: 5\n" May 30 00:32:14.748: INFO: stdout: "\naffinity-clusterip-szktc\naffinity-clusterip-szktc\naffinity-clusterip-szktc\naffinity-clusterip-szktc\naffinity-clusterip-szktc\naffinity-clusterip-szktc\naffinity-clusterip-szktc\naffinity-clusterip-szktc\naffinity-clusterip-szktc\naffinity-clusterip-szktc\naffinity-clusterip-szktc\naffinity-clusterip-szktc\naffinity-clusterip-szktc\naffinity-clusterip-szktc\naffinity-clusterip-szktc\naffinity-clusterip-szktc" May 30 00:32:14.748: INFO: Received response from host: May 30 00:32:14.748: INFO: Received response from host: affinity-clusterip-szktc May 30 00:32:14.748: INFO: Received response from host: affinity-clusterip-szktc May 30 00:32:14.748: INFO: Received response from host: affinity-clusterip-szktc May 30 00:32:14.748: INFO: Received response from host: affinity-clusterip-szktc May 30 00:32:14.748: INFO: Received response from host: affinity-clusterip-szktc May 30 00:32:14.748: INFO: Received response from host: affinity-clusterip-szktc May 30 00:32:14.748: INFO: Received response from host: affinity-clusterip-szktc May 30 00:32:14.748: INFO: Received response from host: affinity-clusterip-szktc May 30 00:32:14.748: INFO: Received response from host: affinity-clusterip-szktc May 30 00:32:14.748: INFO: Received response from host: affinity-clusterip-szktc May 30 00:32:14.748: INFO: Received response from host: affinity-clusterip-szktc May 30 00:32:14.748: INFO: Received response from host: affinity-clusterip-szktc May 30 00:32:14.748: INFO: Received response from host: affinity-clusterip-szktc May 30 00:32:14.748: INFO: Received response from host: affinity-clusterip-szktc May 30 00:32:14.748: INFO: Received response from host: affinity-clusterip-szktc May 30 00:32:14.748: INFO: Received response from host: affinity-clusterip-szktc May 30 00:32:14.748: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-9964, will wait for the garbage collector to delete the pods May 30 00:32:14.882: INFO: Deleting ReplicationController affinity-clusterip took: 6.145212ms May 30 00:32:15.383: INFO: 
Terminating ReplicationController affinity-clusterip pods took: 500.310093ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:32:25.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9964" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:22.726 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":155,"skipped":2494,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:32:25.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:32:41.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4417" for this suite. • [SLOW TEST:16.154 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":288,"completed":156,"skipped":2507,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:32:41.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:32:45.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-400" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":288,"completed":157,"skipped":2518,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:32:45.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-40d9191f-a6a0-4b5f-9cf6-0772100eea4e in namespace container-probe-3938 May 30 00:32:49.780: INFO: Started pod liveness-40d9191f-a6a0-4b5f-9cf6-0772100eea4e in namespace container-probe-3938 STEP: checking the pod's current state and verifying that restartCount is present May 30 00:32:49.783: INFO: Initial restart count of pod liveness-40d9191f-a6a0-4b5f-9cf6-0772100eea4e is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:36:50.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3938" for this suite. 
• [SLOW TEST:245.123 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":288,"completed":158,"skipped":2530,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:36:50.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions May 30 00:36:51.297: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config api-versions' May 30 00:36:51.682: INFO: stderr: "" May 30 00:36:51.682: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:36:51.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3420" for this suite. 
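The api-versions validation above boils down to a single kubectl call; a minimal equivalent check, assuming a configured kubeconfig:

# Succeeds only if the core "v1" group/version appears as a whole line in the
# served API list, mirroring the validation step in this spec.
kubectl api-versions | grep -x v1 && echo "v1 available"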
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":288,"completed":159,"skipped":2535,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:36:51.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 30 00:36:51.834: INFO: Waiting up to 5m0s for pod "downwardapi-volume-13e8d1b4-d8ad-4135-8006-0eae0e206d32" in namespace "projected-8583" to be "Succeeded or Failed" May 30 00:36:51.851: INFO: Pod "downwardapi-volume-13e8d1b4-d8ad-4135-8006-0eae0e206d32": Phase="Pending", Reason="", readiness=false. Elapsed: 16.780145ms May 30 00:36:53.982: INFO: Pod "downwardapi-volume-13e8d1b4-d8ad-4135-8006-0eae0e206d32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.147621716s May 30 00:36:55.993: INFO: Pod "downwardapi-volume-13e8d1b4-d8ad-4135-8006-0eae0e206d32": Phase="Running", Reason="", readiness=true. Elapsed: 4.158797747s May 30 00:36:57.998: INFO: Pod "downwardapi-volume-13e8d1b4-d8ad-4135-8006-0eae0e206d32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.163686266s STEP: Saw pod success May 30 00:36:57.998: INFO: Pod "downwardapi-volume-13e8d1b4-d8ad-4135-8006-0eae0e206d32" satisfied condition "Succeeded or Failed" May 30 00:36:58.001: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-13e8d1b4-d8ad-4135-8006-0eae0e206d32 container client-container: STEP: delete the pod May 30 00:36:58.072: INFO: Waiting for pod downwardapi-volume-13e8d1b4-d8ad-4135-8006-0eae0e206d32 to disappear May 30 00:36:58.077: INFO: Pod downwardapi-volume-13e8d1b4-d8ad-4135-8006-0eae0e206d32 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:36:58.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8583" for this suite. 
• [SLOW TEST:6.396 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":160,"skipped":2545,"failed":0} SS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:36:58.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 30 00:37:02.218: INFO: &Pod{ObjectMeta:{send-events-faf5fa50-4ab8-4cf6-b987-6d4b4c5360ae events-6727 /api/v1/namespaces/events-6727/pods/send-events-faf5fa50-4ab8-4cf6-b987-6d4b4c5360ae 71c85e2a-aea7-4c9e-8d5b-ae879efec469 8745792 0 2020-05-30 00:36:58 +0000 UTC map[name:foo time:146062067] map[] [] [] [{e2e.test Update v1 2020-05-30 00:36:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-30 00:37:00 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.154\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pjdll,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pjdll,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pjdll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:36:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:37:00 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:37:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:36:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.154,StartTime:2020-05-30 00:36:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-30 00:37:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://be655e347b68d15f1fbccede7c9140af0a6244687d0e30030e95f41132ee5b06,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.154,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 30 00:37:04.232: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 30 00:37:06.238: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:37:06.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6727" for this suite. • [SLOW TEST:8.201 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":288,"completed":161,"skipped":2547,"failed":0} SS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:37:06.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7408.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-7408.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7408.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7408.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-7408.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7408.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 30 00:37:12.570: INFO: DNS probes using dns-7408/dns-test-34b66f8c-8690-44a9-8db5-ba389ea9de61 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:37:12.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7408" for this suite. • [SLOW TEST:6.434 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":288,"completed":162,"skipped":2549,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:37:12.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 30 00:37:23.181: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 30 00:37:23.239: INFO: Pod pod-with-poststart-exec-hook still exists May 30 00:37:25.240: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 30 00:37:25.245: INFO: Pod pod-with-poststart-exec-hook still exists May 30 00:37:27.240: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 30 00:37:27.244: INFO: Pod pod-with-poststart-exec-hook still exists May 30 00:37:29.240: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 30 00:37:29.244: INFO: Pod pod-with-poststart-exec-hook still exists May 30 00:37:31.240: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 30 00:37:31.263: INFO: Pod pod-with-poststart-exec-hook still exists May 30 00:37:33.239: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 30 00:37:33.244: INFO: Pod pod-with-poststart-exec-hook still exists May 30 00:37:35.240: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 30 00:37:35.257: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:37:35.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5270" for this suite. • [SLOW TEST:22.542 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":288,"completed":163,"skipped":2556,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:37:35.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: 
Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:38:09.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9561" for this suite. • [SLOW TEST:34.388 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":288,"completed":164,"skipped":2564,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:38:09.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 30 00:40:09.786: INFO: Deleting pod "var-expansion-7da2aec5-f004-4ff7-ab0f-406fd3af6521" in namespace "var-expansion-1197" May 30 00:40:09.791: INFO: Wait up to 5m0s for pod "var-expansion-7da2aec5-f004-4ff7-ab0f-406fd3af6521" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:40:11.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1197" for this suite. 
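The variable-expansion test above is negative by design: only $(VAR) references are substituted in subPathExpr, and the spec asserts that a value carrying backticks prevents the pod from ever reaching Running, hence the roughly two-minute wait followed by deletion. A rough sketch of the shape of such a pod (the env value and all names are illustrative; the exact value the e2e test uses may differ):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-backtick-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    env:
    - name: POD_NAME
      value: "`hostname`"          # backticks are NOT command-substituted by Kubernetes
    volumeMounts:
    - name: work
      mountPath: /data
      subPathExpr: $(POD_NAME)     # expands env references only; the test expects this pod to fail rather than run
  volumes:
  - name: work
    emptyDir: {}
EOF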
• [SLOW TEST:122.202 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":288,"completed":165,"skipped":2603,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:40:11.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-87351ad2-409c-4039-a044-54dea5e4b60c STEP: Creating a pod to test consume secrets May 30 00:40:11.952: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4ced9b29-5df8-4017-bd41-18b1d1912924" in namespace "projected-1877" to be "Succeeded or Failed" May 30 00:40:11.974: INFO: Pod "pod-projected-secrets-4ced9b29-5df8-4017-bd41-18b1d1912924": Phase="Pending", Reason="", readiness=false. Elapsed: 22.409315ms May 30 00:40:13.978: INFO: Pod "pod-projected-secrets-4ced9b29-5df8-4017-bd41-18b1d1912924": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026587389s May 30 00:40:15.982: INFO: Pod "pod-projected-secrets-4ced9b29-5df8-4017-bd41-18b1d1912924": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030827095s STEP: Saw pod success May 30 00:40:15.982: INFO: Pod "pod-projected-secrets-4ced9b29-5df8-4017-bd41-18b1d1912924" satisfied condition "Succeeded or Failed" May 30 00:40:15.985: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-4ced9b29-5df8-4017-bd41-18b1d1912924 container projected-secret-volume-test: STEP: delete the pod May 30 00:40:16.051: INFO: Waiting for pod pod-projected-secrets-4ced9b29-5df8-4017-bd41-18b1d1912924 to disappear May 30 00:40:16.064: INFO: Pod pod-projected-secrets-4ced9b29-5df8-4017-bd41-18b1d1912924 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:40:16.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1877" for this suite. 
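The projected-secret test above mounts a Secret through a projected volume and has the test container cat the key back. A minimal sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-demo
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-demo
EOF

The pod runs to Succeeded and the spec compares the container log against the secret value, matching the get-logs and "Saw pod success" steps above.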
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":166,"skipped":2614,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:40:16.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 30 00:40:16.196: INFO: Waiting up to 5m0s for pod "downwardapi-volume-493468de-2e8c-4de6-834f-ef20d35a1dcd" in namespace "projected-1191" to be "Succeeded or Failed" May 30 00:40:16.253: INFO: Pod "downwardapi-volume-493468de-2e8c-4de6-834f-ef20d35a1dcd": Phase="Pending", Reason="", readiness=false. Elapsed: 56.498642ms May 30 00:40:18.307: INFO: Pod "downwardapi-volume-493468de-2e8c-4de6-834f-ef20d35a1dcd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11066261s May 30 00:40:20.331: INFO: Pod "downwardapi-volume-493468de-2e8c-4de6-834f-ef20d35a1dcd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.134835825s STEP: Saw pod success May 30 00:40:20.331: INFO: Pod "downwardapi-volume-493468de-2e8c-4de6-834f-ef20d35a1dcd" satisfied condition "Succeeded or Failed" May 30 00:40:20.335: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-493468de-2e8c-4de6-834f-ef20d35a1dcd container client-container: STEP: delete the pod May 30 00:40:20.371: INFO: Waiting for pod downwardapi-volume-493468de-2e8c-4de6-834f-ef20d35a1dcd to disappear May 30 00:40:20.387: INFO: Pod downwardapi-volume-493468de-2e8c-4de6-834f-ef20d35a1dcd no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:40:20.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1191" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":167,"skipped":2634,"failed":0} ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:40:20.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 30 00:40:21.109: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 30 00:40:23.162: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726396021, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726396021, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726396021, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726396021, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 30 00:40:26.325: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 30 00:40:26.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:40:27.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-7831" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.366 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":288,"completed":168,"skipped":2634,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:40:27.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 30 00:40:27.859: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
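A RollingUpdate DaemonSet of the shape this spec creates and then mutates; the two images match the ones in the log below, other names are illustrative, and maxUnavailable: 1 is the default, spelled out here only for clarity:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set-demo
spec:
  selector:
    matchLabels:
      app: daemon-set-demo
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1      # at most one node's pod is replaced at a time
  template:
    metadata:
      labels:
        app: daemon-set-demo
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine
EOF

# The "Update daemon pods image" step that follows amounts to patching the
# pod template image, e.g.:
kubectl set image daemonset/daemon-set-demo app=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13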
May 30 00:40:27.881: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 00:40:27.897: INFO: Number of nodes with available pods: 0 May 30 00:40:27.897: INFO: Node latest-worker is running more than one daemon pod May 30 00:40:28.902: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 00:40:28.905: INFO: Number of nodes with available pods: 0 May 30 00:40:28.905: INFO: Node latest-worker is running more than one daemon pod May 30 00:40:29.903: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 00:40:29.907: INFO: Number of nodes with available pods: 0 May 30 00:40:29.907: INFO: Node latest-worker is running more than one daemon pod May 30 00:40:30.903: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 00:40:30.907: INFO: Number of nodes with available pods: 0 May 30 00:40:30.907: INFO: Node latest-worker is running more than one daemon pod May 30 00:40:31.903: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 00:40:31.907: INFO: Number of nodes with available pods: 1 May 30 00:40:31.907: INFO: Node latest-worker2 is running more than one daemon pod May 30 00:40:32.936: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 00:40:32.949: INFO: Number of nodes with available pods: 2 May 30 00:40:32.949: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 30 00:40:33.000: INFO: Wrong image for pod: daemon-set-p6br9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 30 00:40:33.000: INFO: Wrong image for pod: daemon-set-vnrzh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 30 00:40:33.056: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 00:40:34.061: INFO: Wrong image for pod: daemon-set-p6br9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 30 00:40:34.061: INFO: Wrong image for pod: daemon-set-vnrzh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 30 00:40:34.065: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 00:40:35.061: INFO: Wrong image for pod: daemon-set-p6br9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 30 00:40:35.061: INFO: Wrong image for pod: daemon-set-vnrzh. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 30 00:40:35.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 00:40:36.062: INFO: Wrong image for pod: daemon-set-p6br9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 30 00:40:36.062: INFO: Wrong image for pod: daemon-set-vnrzh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 30 00:40:36.062: INFO: Pod daemon-set-vnrzh is not available May 30 00:40:36.067: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 00:40:37.062: INFO: Wrong image for pod: daemon-set-p6br9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 30 00:40:37.062: INFO: Wrong image for pod: daemon-set-vnrzh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 30 00:40:37.062: INFO: Pod daemon-set-vnrzh is not available May 30 00:40:37.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 00:40:38.061: INFO: Wrong image for pod: daemon-set-p6br9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 30 00:40:38.061: INFO: Wrong image for pod: daemon-set-vnrzh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 30 00:40:38.061: INFO: Pod daemon-set-vnrzh is not available May 30 00:40:38.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 00:40:39.061: INFO: Wrong image for pod: daemon-set-p6br9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 30 00:40:39.061: INFO: Wrong image for pod: daemon-set-vnrzh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 30 00:40:39.061: INFO: Pod daemon-set-vnrzh is not available May 30 00:40:39.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 00:40:40.062: INFO: Wrong image for pod: daemon-set-p6br9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 30 00:40:40.062: INFO: Wrong image for pod: daemon-set-vnrzh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 30 00:40:40.062: INFO: Pod daemon-set-vnrzh is not available May 30 00:40:40.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 00:40:41.061: INFO: Wrong image for pod: daemon-set-p6br9. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 30 00:40:41.061: INFO: Wrong image for pod: daemon-set-vnrzh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 30 00:40:41.061: INFO: Pod daemon-set-vnrzh is not available May 30 00:40:41.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 00:40:42.061: INFO: Wrong image for pod: daemon-set-p6br9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 30 00:40:42.062: INFO: Wrong image for pod: daemon-set-vnrzh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 30 00:40:42.062: INFO: Pod daemon-set-vnrzh is not available May 30 00:40:42.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 00:40:43.061: INFO: Wrong image for pod: daemon-set-p6br9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 30 00:40:43.062: INFO: Wrong image for pod: daemon-set-vnrzh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 30 00:40:43.062: INFO: Pod daemon-set-vnrzh is not available May 30 00:40:43.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 00:40:44.062: INFO: Wrong image for pod: daemon-set-p6br9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 30 00:40:44.062: INFO: Wrong image for pod: daemon-set-vnrzh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 30 00:40:44.062: INFO: Pod daemon-set-vnrzh is not available May 30 00:40:44.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 00:40:45.060: INFO: Wrong image for pod: daemon-set-p6br9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 30 00:40:45.060: INFO: Pod daemon-set-t5gzm is not available May 30 00:40:45.080: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 00:40:46.061: INFO: Wrong image for pod: daemon-set-p6br9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 30 00:40:46.061: INFO: Pod daemon-set-t5gzm is not available May 30 00:40:46.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 00:40:47.061: INFO: Wrong image for pod: daemon-set-p6br9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
May 30 00:40:47.061: INFO: Pod daemon-set-t5gzm is not available May 30 00:40:47.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 00:40:48.060: INFO: Wrong image for pod: daemon-set-p6br9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 30 00:40:48.065: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 00:40:49.061: INFO: Wrong image for pod: daemon-set-p6br9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 30 00:40:49.065: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 00:40:50.060: INFO: Wrong image for pod: daemon-set-p6br9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 30 00:40:50.060: INFO: Pod daemon-set-p6br9 is not available May 30 00:40:50.064: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 00:40:51.061: INFO: Wrong image for pod: daemon-set-p6br9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 30 00:40:51.061: INFO: Pod daemon-set-p6br9 is not available May 30 00:40:51.065: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 00:40:52.091: INFO: Wrong image for pod: daemon-set-p6br9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 30 00:40:52.091: INFO: Pod daemon-set-p6br9 is not available May 30 00:40:52.097: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 00:40:53.060: INFO: Wrong image for pod: daemon-set-p6br9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 30 00:40:53.060: INFO: Pod daemon-set-p6br9 is not available May 30 00:40:53.064: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 00:40:54.061: INFO: Wrong image for pod: daemon-set-p6br9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 30 00:40:54.061: INFO: Pod daemon-set-p6br9 is not available May 30 00:40:54.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 00:40:55.061: INFO: Wrong image for pod: daemon-set-p6br9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
May 30 00:40:55.061: INFO: Pod daemon-set-p6br9 is not available May 30 00:40:55.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 00:40:56.062: INFO: Pod daemon-set-w8cxj is not available May 30 00:40:56.067: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. May 30 00:40:56.071: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 00:40:56.074: INFO: Number of nodes with available pods: 1 May 30 00:40:56.074: INFO: Node latest-worker2 is running more than one daemon pod May 30 00:40:57.080: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 00:40:57.083: INFO: Number of nodes with available pods: 1 May 30 00:40:57.083: INFO: Node latest-worker2 is running more than one daemon pod May 30 00:40:58.080: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 00:40:58.083: INFO: Number of nodes with available pods: 1 May 30 00:40:58.083: INFO: Node latest-worker2 is running more than one daemon pod May 30 00:40:59.080: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 00:40:59.085: INFO: Number of nodes with available pods: 2 May 30 00:40:59.085: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1631, will wait for the garbage collector to delete the pods May 30 00:40:59.160: INFO: Deleting DaemonSet.extensions daemon-set took: 6.727964ms May 30 00:40:59.560: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.290179ms May 30 00:41:05.312: INFO: Number of nodes with available pods: 0 May 30 00:41:05.312: INFO: Number of running nodes: 0, number of available pods: 0 May 30 00:41:05.315: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1631/daemonsets","resourceVersion":"8746921"},"items":null} May 30 00:41:05.317: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1631/pods","resourceVersion":"8746921"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:41:05.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1631" for this suite. 
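The run above creates DaemonSet "daemon-set" with an httpd:2.4.38-alpine pod on every schedulable node, switches the template image, and polls until the RollingUpdate strategy has replaced both pods; the taint messages merely record that the control-plane node is skipped. A minimal sketch of the object shape being exercised, built with the k8s.io/api Go types (the label key and the standalone main are illustrative, not the e2e framework's own code):

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"} // label key is illustrative
	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set", Namespace: "daemonsets-1631"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate is what the test asserts: pods are replaced
			// node by node once the pod template changes.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
	// Updating ds.Spec.Template.Spec.Containers[0].Image to
	// "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13" and
	// re-applying is what produces the "Wrong image for pod" polling above.
	fmt.Println(ds.Name, ds.Spec.UpdateStrategy.Type)
}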
• [SLOW TEST:37.552 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":288,"completed":169,"skipped":2653,"failed":0} S ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:41:05.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 30 00:41:05.466: INFO: Waiting up to 5m0s for pod "downward-api-4acd35fa-a200-43dc-b033-cb0efcae258a" in namespace "downward-api-1293" to be "Succeeded or Failed" May 30 00:41:05.486: INFO: Pod "downward-api-4acd35fa-a200-43dc-b033-cb0efcae258a": Phase="Pending", Reason="", readiness=false. Elapsed: 20.771723ms May 30 00:41:07.490: INFO: Pod "downward-api-4acd35fa-a200-43dc-b033-cb0efcae258a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024564907s May 30 00:41:09.495: INFO: Pod "downward-api-4acd35fa-a200-43dc-b033-cb0efcae258a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029193328s STEP: Saw pod success May 30 00:41:09.495: INFO: Pod "downward-api-4acd35fa-a200-43dc-b033-cb0efcae258a" satisfied condition "Succeeded or Failed" May 30 00:41:09.498: INFO: Trying to get logs from node latest-worker pod downward-api-4acd35fa-a200-43dc-b033-cb0efcae258a container dapi-container: STEP: delete the pod May 30 00:41:09.549: INFO: Waiting for pod downward-api-4acd35fa-a200-43dc-b033-cb0efcae258a to disappear May 30 00:41:09.557: INFO: Pod downward-api-4acd35fa-a200-43dc-b033-cb0efcae258a no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:41:09.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1293" for this suite. 
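Test 170 verifies that a container with resourceFieldRef env vars but no resource limits of its own sees the node's allocatable CPU and memory reported as its default limits. A sketch of such a pod, assuming an illustrative busybox image and env var names:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				// No resources.limits are set, so limits.cpu and
				// limits.memory fall back to node allocatable --
				// the behaviour the test checks in the pod's logs.
				Env: []corev1.EnvVar{
					{
						Name: "CPU_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
						},
					},
					{
						Name: "MEMORY_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
						},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}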
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":288,"completed":170,"skipped":2654,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:41:09.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-d6611c73-2e53-44a4-81e8-13e7fe00179d STEP: Creating a pod to test consume configMaps May 30 00:41:09.637: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-38f0cbc9-1a25-44a1-bfe9-fc24cad88d78" in namespace "projected-4343" to be "Succeeded or Failed" May 30 00:41:09.653: INFO: Pod "pod-projected-configmaps-38f0cbc9-1a25-44a1-bfe9-fc24cad88d78": Phase="Pending", Reason="", readiness=false. Elapsed: 15.174957ms May 30 00:41:11.658: INFO: Pod "pod-projected-configmaps-38f0cbc9-1a25-44a1-bfe9-fc24cad88d78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02016906s May 30 00:41:13.663: INFO: Pod "pod-projected-configmaps-38f0cbc9-1a25-44a1-bfe9-fc24cad88d78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025160175s STEP: Saw pod success May 30 00:41:13.663: INFO: Pod "pod-projected-configmaps-38f0cbc9-1a25-44a1-bfe9-fc24cad88d78" satisfied condition "Succeeded or Failed" May 30 00:41:13.666: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-38f0cbc9-1a25-44a1-bfe9-fc24cad88d78 container projected-configmap-volume-test: STEP: delete the pod May 30 00:41:13.714: INFO: Waiting for pod pod-projected-configmaps-38f0cbc9-1a25-44a1-bfe9-fc24cad88d78 to disappear May 30 00:41:13.716: INFO: Pod pod-projected-configmaps-38f0cbc9-1a25-44a1-bfe9-fc24cad88d78 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:41:13.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4343" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":171,"skipped":2668,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:41:13.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 30 00:41:13.807: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. May 30 00:41:14.493: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 30 00:41:16.816: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726396074, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726396074, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726396074, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726396074, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 30 00:41:18.871: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726396074, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726396074, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726396074, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726396074, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 30 00:41:21.758: INFO: Waited 929.415945ms for the sample-apiserver to be ready to handle requests. 
[AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:41:22.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-3837" for this suite. • [SLOW TEST:8.593 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":288,"completed":172,"skipped":2701,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:41:22.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-sb7n STEP: Creating a pod to test atomic-volume-subpath May 30 00:41:22.806: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-sb7n" in namespace "subpath-2010" to be "Succeeded or Failed" May 30 00:41:23.065: INFO: Pod "pod-subpath-test-configmap-sb7n": Phase="Pending", Reason="", readiness=false. Elapsed: 258.475626ms May 30 00:41:25.092: INFO: Pod "pod-subpath-test-configmap-sb7n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.285421829s May 30 00:41:27.096: INFO: Pod "pod-subpath-test-configmap-sb7n": Phase="Running", Reason="", readiness=true. Elapsed: 4.290024129s May 30 00:41:29.101: INFO: Pod "pod-subpath-test-configmap-sb7n": Phase="Running", Reason="", readiness=true. Elapsed: 6.294958875s May 30 00:41:31.106: INFO: Pod "pod-subpath-test-configmap-sb7n": Phase="Running", Reason="", readiness=true. Elapsed: 8.299436393s May 30 00:41:33.110: INFO: Pod "pod-subpath-test-configmap-sb7n": Phase="Running", Reason="", readiness=true. Elapsed: 10.303788807s May 30 00:41:35.114: INFO: Pod "pod-subpath-test-configmap-sb7n": Phase="Running", Reason="", readiness=true. Elapsed: 12.307672569s May 30 00:41:37.118: INFO: Pod "pod-subpath-test-configmap-sb7n": Phase="Running", Reason="", readiness=true. Elapsed: 14.311296106s May 30 00:41:39.121: INFO: Pod "pod-subpath-test-configmap-sb7n": Phase="Running", Reason="", readiness=true. Elapsed: 16.314806873s May 30 00:41:41.125: INFO: Pod "pod-subpath-test-configmap-sb7n": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.318818489s May 30 00:41:43.130: INFO: Pod "pod-subpath-test-configmap-sb7n": Phase="Running", Reason="", readiness=true. Elapsed: 20.323473754s May 30 00:41:45.134: INFO: Pod "pod-subpath-test-configmap-sb7n": Phase="Running", Reason="", readiness=true. Elapsed: 22.327146127s May 30 00:41:47.151: INFO: Pod "pod-subpath-test-configmap-sb7n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.345018332s STEP: Saw pod success May 30 00:41:47.151: INFO: Pod "pod-subpath-test-configmap-sb7n" satisfied condition "Succeeded or Failed" May 30 00:41:47.155: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-sb7n container test-container-subpath-configmap-sb7n: STEP: delete the pod May 30 00:41:47.191: INFO: Waiting for pod pod-subpath-test-configmap-sb7n to disappear May 30 00:41:47.206: INFO: Pod pod-subpath-test-configmap-sb7n no longer exists STEP: Deleting pod pod-subpath-test-configmap-sb7n May 30 00:41:47.206: INFO: Deleting pod "pod-subpath-test-configmap-sb7n" in namespace "subpath-2010" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:41:47.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2010" for this suite. • [SLOW TEST:24.920 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":288,"completed":173,"skipped":2756,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:41:47.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD May 30 00:41:47.311: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:42:03.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5998" for this suite. 
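Test 174 publishes a CRD with two versions, renames one, and checks that the served OpenAPI spec picks up the new name and drops the old one while leaving the other version untouched. A sketch of such a multi-version CRD with the apiextensions.k8s.io/v1 Go types (group and names are illustrative):

package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	version := func(name string, storage bool) apiextensionsv1.CustomResourceDefinitionVersion {
		return apiextensionsv1.CustomResourceDefinitionVersion{
			Name:    name,
			Served:  true,
			Storage: storage,
			Schema: &apiextensionsv1.CustomResourceValidation{
				OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
			},
		}
	}
	crd := apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-foos.example.com"}, // illustrative
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "e2e-test-foos", Singular: "e2e-test-foo",
				Kind: "E2eTestFoo", ListKind: "E2eTestFooList",
			},
			// Two versions are published; renaming one (say v3 -> v4) must
			// update the served OpenAPI spec, which is what the test polls for.
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
				version("v2", true),
				version("v3", false),
			},
		},
	}
	fmt.Println(crd.Name, len(crd.Spec.Versions))
}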
• [SLOW TEST:16.050 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":288,"completed":174,"skipped":2764,"failed":0} S ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:42:03.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 30 00:42:04.059: INFO: Pod name wrapped-volume-race-d868cef5-ddda-4752-834c-677b8ea7c6d6: Found 0 pods out of 5 May 30 00:42:09.069: INFO: Pod name wrapped-volume-race-d868cef5-ddda-4752-834c-677b8ea7c6d6: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-d868cef5-ddda-4752-834c-677b8ea7c6d6 in namespace emptydir-wrapper-7125, will wait for the garbage collector to delete the pods May 30 00:42:21.239: INFO: Deleting ReplicationController wrapped-volume-race-d868cef5-ddda-4752-834c-677b8ea7c6d6 took: 7.194722ms May 30 00:42:21.739: INFO: Terminating ReplicationController wrapped-volume-race-d868cef5-ddda-4752-834c-677b8ea7c6d6 pods took: 500.275236ms STEP: Creating RC which spawns configmap-volume pods May 30 00:42:35.626: INFO: Pod name wrapped-volume-race-610851a0-2d94-4500-9791-fe039baf582b: Found 0 pods out of 5 May 30 00:42:40.635: INFO: Pod name wrapped-volume-race-610851a0-2d94-4500-9791-fe039baf582b: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-610851a0-2d94-4500-9791-fe039baf582b in namespace emptydir-wrapper-7125, will wait for the garbage collector to delete the pods May 30 00:42:54.733: INFO: Deleting ReplicationController wrapped-volume-race-610851a0-2d94-4500-9791-fe039baf582b took: 8.548535ms May 30 00:42:55.034: INFO: Terminating ReplicationController wrapped-volume-race-610851a0-2d94-4500-9791-fe039baf582b pods took: 300.307399ms STEP: Creating RC which spawns configmap-volume pods May 30 00:43:05.164: INFO: Pod name wrapped-volume-race-ab229a21-30fd-4f2f-a476-80416e48f4ee: Found 0 pods out of 5 May 30 00:43:10.173: INFO: Pod name wrapped-volume-race-ab229a21-30fd-4f2f-a476-80416e48f4ee: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-ab229a21-30fd-4f2f-a476-80416e48f4ee in namespace emptydir-wrapper-7125, will wait for the garbage collector to delete the pods May 30 00:43:24.330: 
INFO: Deleting ReplicationController wrapped-volume-race-ab229a21-30fd-4f2f-a476-80416e48f4ee took: 13.832125ms May 30 00:43:24.730: INFO: Terminating ReplicationController wrapped-volume-race-ab229a21-30fd-4f2f-a476-80416e48f4ee pods took: 400.283472ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:43:35.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7125" for this suite. • [SLOW TEST:92.411 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":288,"completed":175,"skipped":2765,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:43:35.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 30 00:43:35.744: INFO: Waiting up to 5m0s for pod "downwardapi-volume-70a52957-4b28-41e2-90cf-78cbadc4edcf" in namespace "downward-api-8162" to be "Succeeded or Failed" May 30 00:43:35.764: INFO: Pod "downwardapi-volume-70a52957-4b28-41e2-90cf-78cbadc4edcf": Phase="Pending", Reason="", readiness=false. Elapsed: 19.954895ms May 30 00:43:37.768: INFO: Pod "downwardapi-volume-70a52957-4b28-41e2-90cf-78cbadc4edcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024703674s May 30 00:43:39.773: INFO: Pod "downwardapi-volume-70a52957-4b28-41e2-90cf-78cbadc4edcf": Phase="Running", Reason="", readiness=true. Elapsed: 4.029544032s May 30 00:43:41.961: INFO: Pod "downwardapi-volume-70a52957-4b28-41e2-90cf-78cbadc4edcf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.217164995s STEP: Saw pod success May 30 00:43:41.961: INFO: Pod "downwardapi-volume-70a52957-4b28-41e2-90cf-78cbadc4edcf" satisfied condition "Succeeded or Failed" May 30 00:43:41.994: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-70a52957-4b28-41e2-90cf-78cbadc4edcf container client-container: STEP: delete the pod May 30 00:43:42.203: INFO: Waiting for pod downwardapi-volume-70a52957-4b28-41e2-90cf-78cbadc4edcf to disappear May 30 00:43:42.208: INFO: Pod downwardapi-volume-70a52957-4b28-41e2-90cf-78cbadc4edcf no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:43:42.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8162" for this suite. • [SLOW TEST:6.598 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":176,"skipped":2777,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:43:42.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 30 00:43:42.365: INFO: Waiting up to 5m0s for pod "downwardapi-volume-02874bbc-8f83-41e5-b13d-963e254f20c4" in namespace "downward-api-4779" to be "Succeeded or Failed" May 30 00:43:42.419: INFO: Pod "downwardapi-volume-02874bbc-8f83-41e5-b13d-963e254f20c4": Phase="Pending", Reason="", readiness=false. Elapsed: 53.665467ms May 30 00:43:44.428: INFO: Pod "downwardapi-volume-02874bbc-8f83-41e5-b13d-963e254f20c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06278635s May 30 00:43:46.432: INFO: Pod "downwardapi-volume-02874bbc-8f83-41e5-b13d-963e254f20c4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.0671331s STEP: Saw pod success May 30 00:43:46.432: INFO: Pod "downwardapi-volume-02874bbc-8f83-41e5-b13d-963e254f20c4" satisfied condition "Succeeded or Failed" May 30 00:43:46.436: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-02874bbc-8f83-41e5-b13d-963e254f20c4 container client-container: STEP: delete the pod May 30 00:43:46.526: INFO: Waiting for pod downwardapi-volume-02874bbc-8f83-41e5-b13d-963e254f20c4 to disappear May 30 00:43:46.613: INFO: Pod downwardapi-volume-02874bbc-8f83-41e5-b13d-963e254f20c4 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:43:46.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4779" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":177,"skipped":2800,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:43:46.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 30 00:43:46.824: INFO: Creating deployment "test-recreate-deployment" May 30 00:43:46.839: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 30 00:43:46.902: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 30 00:43:48.908: INFO: Waiting deployment "test-recreate-deployment" to complete May 30 00:43:48.931: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726396226, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726396226, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726396226, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726396226, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6d65b9f6d8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 30 00:43:50.934: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 30 00:43:50.942: INFO: Updating deployment test-recreate-deployment May 30 00:43:50.942: INFO: Watching deployment "test-recreate-deployment" to verify that new pods 
will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 30 00:43:51.566: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-8653 /apis/apps/v1/namespaces/deployment-8653/deployments/test-recreate-deployment 9b2908d5-0171-47ec-bcf9-d4bb61d67131 8748527 2 2020-05-30 00:43:46 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-30 00:43:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-30 00:43:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00518be08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-30 00:43:51 +0000 UTC,LastTransitionTime:2020-05-30 00:43:51 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-05-30 00:43:51 +0000 UTC,LastTransitionTime:2020-05-30 00:43:46 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 30 00:43:51.570: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" 
of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7 deployment-8653 /apis/apps/v1/namespaces/deployment-8653/replicasets/test-recreate-deployment-d5667d9c7 107623d5-165e-4f5b-9222-dc27565d6ca6 8748524 1 2020-05-30 00:43:51 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 9b2908d5-0171-47ec-bcf9-d4bb61d67131 0xc00502a450 0xc00502a451}] [] [{kube-controller-manager Update apps/v1 2020-05-30 00:43:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b2908d5-0171-47ec-bcf9-d4bb61d67131\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00502a4c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 30 00:43:51.570: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 30 00:43:51.570: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6d65b9f6d8 deployment-8653 /apis/apps/v1/namespaces/deployment-8653/replicasets/test-recreate-deployment-6d65b9f6d8 2177f241-5d4e-4d82-82c0-c7daba31560a 8748513 2 2020-05-30 00:43:46 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 9b2908d5-0171-47ec-bcf9-d4bb61d67131 0xc00502a357 0xc00502a358}] [] [{kube-controller-manager Update apps/v1 2020-05-30 00:43:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b2908d5-0171-47ec-bcf9-d4bb61d67131\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6d65b9f6d8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00502a3e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 30 00:43:51.600: INFO: Pod "test-recreate-deployment-d5667d9c7-pqxxd" is not available: &Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-pqxxd test-recreate-deployment-d5667d9c7- deployment-8653 /api/v1/namespaces/deployment-8653/pods/test-recreate-deployment-d5667d9c7-pqxxd 53f92091-ee65-484f-8ad0-ede71bf17d91 8748528 0 2020-05-30 00:43:51 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 107623d5-165e-4f5b-9222-dc27565d6ca6 0xc00510bce0 0xc00510bce1}] [] [{kube-controller-manager Update v1 2020-05-30 00:43:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"107623d5-165e-4f5b-9222-dc27565d6ca6\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-30 00:43:51 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b78p9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b78p9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b78p9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:43:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:43:51 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:43:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:43:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-30 00:43:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:43:51.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8653" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":178,"skipped":2821,"failed":0} SSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:43:51.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 30 00:43:58.967: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:44:00.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1769" for this suite. 
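The adopt-and-release sequence above is driven entirely by label selection: a ReplicaSet adopts a running, controller-less pod whose labels match its selector, and releases the pod (clearing its ownerReference and creating a replacement) the moment the labels stop matching. A rough reproduction with kubectl follows; the pod name, label, and image are illustrative assumptions, not values from this run.

# Start an orphan pod carrying the label the ReplicaSet will select on.
kubectl run pod-adoption-release --image=httpd:2.4.38-alpine --restart=Never --labels=name=pod-adoption-release

# A ReplicaSet with a matching selector adopts the orphan instead of creating a new pod.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: httpd
        image: httpd:2.4.38-alpine
EOF

# Relabeling the pod releases it: its ownerReferences are cleared and the
# ReplicaSet creates a replacement to keep replicas=1 satisfied.
kubectl label pod pod-adoption-release name=released --overwrite
kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences}'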
• [SLOW TEST:8.477 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":288,"completed":179,"skipped":2825,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:44:00.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 30 00:44:00.222: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 30 00:44:02.407: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:44:03.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-143" for this suite. 
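The failure condition surfaced above comes from quota admission: with a ResourceQuota capping the namespace at two pods, a ReplicationController asking for three records a ReplicaFailure condition in its status, and scaling back within quota lets the controller clear it. A hedged kubectl equivalent, with illustrative names, image, and replica counts:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3
  selector:
    name: condition-test
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: httpd
        image: httpd:2.4.38-alpine
EOF

# The third pod is rejected by the quota, so the RC surfaces a ReplicaFailure condition.
kubectl get rc condition-test -o jsonpath='{.status.conditions}'

# Scaling down to the quota ceiling allows the controller to clear the condition.
kubectl scale rc condition-test --replicas=2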
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":288,"completed":180,"skipped":2835,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:44:03.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs May 30 00:44:04.596: INFO: Waiting up to 5m0s for pod "pod-9011e87a-6cb1-4bb8-b019-ff05073228d9" in namespace "emptydir-9419" to be "Succeeded or Failed" May 30 00:44:04.758: INFO: Pod "pod-9011e87a-6cb1-4bb8-b019-ff05073228d9": Phase="Pending", Reason="", readiness=false. Elapsed: 161.740418ms May 30 00:44:06.823: INFO: Pod "pod-9011e87a-6cb1-4bb8-b019-ff05073228d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.227128208s May 30 00:44:08.835: INFO: Pod "pod-9011e87a-6cb1-4bb8-b019-ff05073228d9": Phase="Running", Reason="", readiness=true. Elapsed: 4.239218335s May 30 00:44:10.840: INFO: Pod "pod-9011e87a-6cb1-4bb8-b019-ff05073228d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.24415071s STEP: Saw pod success May 30 00:44:10.840: INFO: Pod "pod-9011e87a-6cb1-4bb8-b019-ff05073228d9" satisfied condition "Succeeded or Failed" May 30 00:44:10.843: INFO: Trying to get logs from node latest-worker pod pod-9011e87a-6cb1-4bb8-b019-ff05073228d9 container test-container: STEP: delete the pod May 30 00:44:10.882: INFO: Waiting for pod pod-9011e87a-6cb1-4bb8-b019-ff05073228d9 to disappear May 30 00:44:10.887: INFO: Pod pod-9011e87a-6cb1-4bb8-b019-ff05073228d9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:44:10.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9419" for this suite. 
• [SLOW TEST:7.440 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":181,"skipped":2844,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:44:10.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-5568 May 30 00:44:15.027: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5568 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 30 00:44:17.983: INFO: stderr: "I0530 00:44:17.895435 1882 log.go:172] (0xc000cfe790) (0xc0003e8500) Create stream\nI0530 00:44:17.895474 1882 log.go:172] (0xc000cfe790) (0xc0003e8500) Stream added, broadcasting: 1\nI0530 00:44:17.898421 1882 log.go:172] (0xc000cfe790) Reply frame received for 1\nI0530 00:44:17.898480 1882 log.go:172] (0xc000cfe790) (0xc000348140) Create stream\nI0530 00:44:17.898497 1882 log.go:172] (0xc000cfe790) (0xc000348140) Stream added, broadcasting: 3\nI0530 00:44:17.899474 1882 log.go:172] (0xc000cfe790) Reply frame received for 3\nI0530 00:44:17.899522 1882 log.go:172] (0xc000cfe790) (0xc0004921e0) Create stream\nI0530 00:44:17.899538 1882 log.go:172] (0xc000cfe790) (0xc0004921e0) Stream added, broadcasting: 5\nI0530 00:44:17.900387 1882 log.go:172] (0xc000cfe790) Reply frame received for 5\nI0530 00:44:17.969018 1882 log.go:172] (0xc000cfe790) Data frame received for 5\nI0530 00:44:17.969057 1882 log.go:172] (0xc0004921e0) (5) Data frame handling\nI0530 00:44:17.969087 1882 log.go:172] (0xc0004921e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0530 00:44:17.974171 1882 log.go:172] (0xc000cfe790) Data frame received for 3\nI0530 00:44:17.974201 1882 log.go:172] (0xc000348140) (3) Data frame handling\nI0530 00:44:17.974224 1882 log.go:172] (0xc000348140) (3) Data frame sent\nI0530 00:44:17.974503 1882 log.go:172] (0xc000cfe790) Data frame received for 5\nI0530 00:44:17.974585 1882 log.go:172] (0xc0004921e0) (5) Data frame handling\nI0530 00:44:17.974614 1882 log.go:172] (0xc000cfe790) Data frame received for 3\nI0530 00:44:17.974625 1882 log.go:172] (0xc000348140) (3) Data frame handling\nI0530 00:44:17.976520 1882 log.go:172] 
(0xc000cfe790) Data frame received for 1\nI0530 00:44:17.976544 1882 log.go:172] (0xc0003e8500) (1) Data frame handling\nI0530 00:44:17.976557 1882 log.go:172] (0xc0003e8500) (1) Data frame sent\nI0530 00:44:17.976581 1882 log.go:172] (0xc000cfe790) (0xc0003e8500) Stream removed, broadcasting: 1\nI0530 00:44:17.976616 1882 log.go:172] (0xc000cfe790) Go away received\nI0530 00:44:17.977428 1882 log.go:172] (0xc000cfe790) (0xc0003e8500) Stream removed, broadcasting: 1\nI0530 00:44:17.977454 1882 log.go:172] (0xc000cfe790) (0xc000348140) Stream removed, broadcasting: 3\nI0530 00:44:17.977479 1882 log.go:172] (0xc000cfe790) (0xc0004921e0) Stream removed, broadcasting: 5\n" May 30 00:44:17.984: INFO: stdout: "iptables" May 30 00:44:17.984: INFO: proxyMode: iptables May 30 00:44:17.989: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 30 00:44:18.009: INFO: Pod kube-proxy-mode-detector still exists May 30 00:44:20.009: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 30 00:44:20.014: INFO: Pod kube-proxy-mode-detector still exists May 30 00:44:22.009: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 30 00:44:22.014: INFO: Pod kube-proxy-mode-detector still exists May 30 00:44:24.009: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 30 00:44:24.014: INFO: Pod kube-proxy-mode-detector still exists May 30 00:44:26.009: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 30 00:44:26.013: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-5568 STEP: creating replication controller affinity-clusterip-timeout in namespace services-5568 I0530 00:44:26.069627 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-5568, replica count: 3 I0530 00:44:29.120035 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0530 00:44:32.120335 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 30 00:44:32.161: INFO: Creating new exec pod May 30 00:44:37.192: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5568 execpod-affinitynxzlb -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' May 30 00:44:37.543: INFO: stderr: "I0530 00:44:37.328375 1917 log.go:172] (0xc000bcd340) (0xc000b283c0) Create stream\nI0530 00:44:37.328460 1917 log.go:172] (0xc000bcd340) (0xc000b283c0) Stream added, broadcasting: 1\nI0530 00:44:37.333407 1917 log.go:172] (0xc000bcd340) Reply frame received for 1\nI0530 00:44:37.333486 1917 log.go:172] (0xc000bcd340) (0xc00072edc0) Create stream\nI0530 00:44:37.333521 1917 log.go:172] (0xc000bcd340) (0xc00072edc0) Stream added, broadcasting: 3\nI0530 00:44:37.334582 1917 log.go:172] (0xc000bcd340) Reply frame received for 3\nI0530 00:44:37.334638 1917 log.go:172] (0xc000bcd340) (0xc00069a460) Create stream\nI0530 00:44:37.334655 1917 log.go:172] (0xc000bcd340) (0xc00069a460) Stream added, broadcasting: 5\nI0530 00:44:37.335383 1917 log.go:172] (0xc000bcd340) Reply frame received for 5\nI0530 00:44:37.531988 1917 log.go:172] (0xc000bcd340) Data frame received for 5\nI0530 00:44:37.532009 1917 log.go:172] (0xc00069a460) (5) Data frame handling\nI0530 00:44:37.532023 1917 log.go:172] 
(0xc00069a460) (5) Data frame sent\nI0530 00:44:37.532032 1917 log.go:172] (0xc000bcd340) Data frame received for 5\nI0530 00:44:37.532037 1917 log.go:172] (0xc00069a460) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0530 00:44:37.532051 1917 log.go:172] (0xc00069a460) (5) Data frame sent\nI0530 00:44:37.532329 1917 log.go:172] (0xc000bcd340) Data frame received for 3\nI0530 00:44:37.532363 1917 log.go:172] (0xc00072edc0) (3) Data frame handling\nI0530 00:44:37.532618 1917 log.go:172] (0xc000bcd340) Data frame received for 5\nI0530 00:44:37.532649 1917 log.go:172] (0xc00069a460) (5) Data frame handling\nI0530 00:44:37.538559 1917 log.go:172] (0xc000bcd340) Data frame received for 1\nI0530 00:44:37.538575 1917 log.go:172] (0xc000b283c0) (1) Data frame handling\nI0530 00:44:37.538592 1917 log.go:172] (0xc000b283c0) (1) Data frame sent\nI0530 00:44:37.538607 1917 log.go:172] (0xc000bcd340) (0xc000b283c0) Stream removed, broadcasting: 1\nI0530 00:44:37.538629 1917 log.go:172] (0xc000bcd340) Go away received\nI0530 00:44:37.538862 1917 log.go:172] (0xc000bcd340) (0xc000b283c0) Stream removed, broadcasting: 1\nI0530 00:44:37.538875 1917 log.go:172] (0xc000bcd340) (0xc00072edc0) Stream removed, broadcasting: 3\nI0530 00:44:37.538880 1917 log.go:172] (0xc000bcd340) (0xc00069a460) Stream removed, broadcasting: 5\n" May 30 00:44:37.543: INFO: stdout: "" May 30 00:44:37.544: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5568 execpod-affinitynxzlb -- /bin/sh -x -c nc -zv -t -w 2 10.107.57.158 80' May 30 00:44:37.740: INFO: stderr: "I0530 00:44:37.666834 1937 log.go:172] (0xc0009b7970) (0xc0006b1f40) Create stream\nI0530 00:44:37.666919 1937 log.go:172] (0xc0009b7970) (0xc0006b1f40) Stream added, broadcasting: 1\nI0530 00:44:37.671557 1937 log.go:172] (0xc0009b7970) Reply frame received for 1\nI0530 00:44:37.671607 1937 log.go:172] (0xc0009b7970) (0xc0006728c0) Create stream\nI0530 00:44:37.671624 1937 log.go:172] (0xc0009b7970) (0xc0006728c0) Stream added, broadcasting: 3\nI0530 00:44:37.672303 1937 log.go:172] (0xc0009b7970) Reply frame received for 3\nI0530 00:44:37.672330 1937 log.go:172] (0xc0009b7970) (0xc000602000) Create stream\nI0530 00:44:37.672345 1937 log.go:172] (0xc0009b7970) (0xc000602000) Stream added, broadcasting: 5\nI0530 00:44:37.672902 1937 log.go:172] (0xc0009b7970) Reply frame received for 5\nI0530 00:44:37.735768 1937 log.go:172] (0xc0009b7970) Data frame received for 3\nI0530 00:44:37.735790 1937 log.go:172] (0xc0006728c0) (3) Data frame handling\nI0530 00:44:37.735804 1937 log.go:172] (0xc0009b7970) Data frame received for 5\nI0530 00:44:37.735809 1937 log.go:172] (0xc000602000) (5) Data frame handling\nI0530 00:44:37.735814 1937 log.go:172] (0xc000602000) (5) Data frame sent\nI0530 00:44:37.735818 1937 log.go:172] (0xc0009b7970) Data frame received for 5\nI0530 00:44:37.735822 1937 log.go:172] (0xc000602000) (5) Data frame handling\n+ nc -zv -t -w 2 10.107.57.158 80\nConnection to 10.107.57.158 80 port [tcp/http] succeeded!\nI0530 00:44:37.736761 1937 log.go:172] (0xc0009b7970) Data frame received for 1\nI0530 00:44:37.736776 1937 log.go:172] (0xc0006b1f40) (1) Data frame handling\nI0530 00:44:37.736788 1937 log.go:172] (0xc0006b1f40) (1) Data frame sent\nI0530 00:44:37.736799 1937 log.go:172] (0xc0009b7970) (0xc0006b1f40) Stream removed, broadcasting: 1\nI0530 00:44:37.736811 1937 log.go:172] 
(0xc0009b7970) Go away received\nI0530 00:44:37.737094 1937 log.go:172] (0xc0009b7970) (0xc0006b1f40) Stream removed, broadcasting: 1\nI0530 00:44:37.737221 1937 log.go:172] (0xc0009b7970) (0xc0006728c0) Stream removed, broadcasting: 3\nI0530 00:44:37.737233 1937 log.go:172] (0xc0009b7970) (0xc000602000) Stream removed, broadcasting: 5\n" May 30 00:44:37.740: INFO: stdout: "" May 30 00:44:37.741: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5568 execpod-affinitynxzlb -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.107.57.158:80/ ; done' May 30 00:44:38.026: INFO: stderr: "I0530 00:44:37.868928 1958 log.go:172] (0xc00095c000) (0xc000b7a000) Create stream\nI0530 00:44:37.868980 1958 log.go:172] (0xc00095c000) (0xc000b7a000) Stream added, broadcasting: 1\nI0530 00:44:37.870506 1958 log.go:172] (0xc00095c000) Reply frame received for 1\nI0530 00:44:37.870529 1958 log.go:172] (0xc00095c000) (0xc00085c640) Create stream\nI0530 00:44:37.870536 1958 log.go:172] (0xc00095c000) (0xc00085c640) Stream added, broadcasting: 3\nI0530 00:44:37.871063 1958 log.go:172] (0xc00095c000) Reply frame received for 3\nI0530 00:44:37.871093 1958 log.go:172] (0xc00095c000) (0xc0006d25a0) Create stream\nI0530 00:44:37.871100 1958 log.go:172] (0xc00095c000) (0xc0006d25a0) Stream added, broadcasting: 5\nI0530 00:44:37.871618 1958 log.go:172] (0xc00095c000) Reply frame received for 5\nI0530 00:44:37.941391 1958 log.go:172] (0xc00095c000) Data frame received for 5\nI0530 00:44:37.941433 1958 log.go:172] (0xc0006d25a0) (5) Data frame handling\nI0530 00:44:37.941445 1958 log.go:172] (0xc0006d25a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.57.158:80/\nI0530 00:44:37.941466 1958 log.go:172] (0xc00095c000) Data frame received for 3\nI0530 00:44:37.941474 1958 log.go:172] (0xc00085c640) (3) Data frame handling\nI0530 00:44:37.941482 1958 log.go:172] (0xc00085c640) (3) Data frame sent\nI0530 00:44:37.945352 1958 log.go:172] (0xc00095c000) Data frame received for 3\nI0530 00:44:37.945392 1958 log.go:172] (0xc00085c640) (3) Data frame handling\nI0530 00:44:37.945411 1958 log.go:172] (0xc00085c640) (3) Data frame sent\nI0530 00:44:37.945736 1958 log.go:172] (0xc00095c000) Data frame received for 5\nI0530 00:44:37.945760 1958 log.go:172] (0xc0006d25a0) (5) Data frame handling\nI0530 00:44:37.945784 1958 log.go:172] (0xc0006d25a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.57.158:80/\nI0530 00:44:37.945954 1958 log.go:172] (0xc00095c000) Data frame received for 3\nI0530 00:44:37.945967 1958 log.go:172] (0xc00085c640) (3) Data frame handling\nI0530 00:44:37.945983 1958 log.go:172] (0xc00085c640) (3) Data frame sent\nI0530 00:44:37.949330 1958 log.go:172] (0xc00095c000) Data frame received for 3\nI0530 00:44:37.949346 1958 log.go:172] (0xc00085c640) (3) Data frame handling\nI0530 00:44:37.949355 1958 log.go:172] (0xc00085c640) (3) Data frame sent\nI0530 00:44:37.949651 1958 log.go:172] (0xc00095c000) Data frame received for 5\nI0530 00:44:37.949666 1958 log.go:172] (0xc0006d25a0) (5) Data frame handling\nI0530 00:44:37.949674 1958 log.go:172] (0xc0006d25a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.57.158:80/\nI0530 00:44:37.949755 1958 log.go:172] (0xc00095c000) Data frame received for 3\nI0530 00:44:37.949782 1958 log.go:172] (0xc00085c640) (3) Data frame handling\nI0530 00:44:37.949819 1958 
log.go:172] (0xc00085c640) (3) Data frame sent\nI0530 00:44:37.953961 1958 log.go:172] (0xc00095c000) Data frame received for 3\nI0530 00:44:37.953976 1958 log.go:172] (0xc00085c640) (3) Data frame handling\nI0530 00:44:37.953997 1958 log.go:172] (0xc00085c640) (3) Data frame sent\nI0530 00:44:37.954326 1958 log.go:172] (0xc00095c000) Data frame received for 5\nI0530 00:44:37.954344 1958 log.go:172] (0xc00095c000) Data frame received for 3\nI0530 00:44:37.954362 1958 log.go:172] (0xc00085c640) (3) Data frame handling\nI0530 00:44:37.954376 1958 log.go:172] (0xc0006d25a0) (5) Data frame handling\nI0530 00:44:37.954391 1958 log.go:172] (0xc0006d25a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.57.158:80/\nI0530 00:44:37.954401 1958 log.go:172] (0xc00085c640) (3) Data frame sent\nI0530 00:44:37.959052 1958 log.go:172] (0xc00095c000) Data frame received for 3\nI0530 00:44:37.959082 1958 log.go:172] (0xc00085c640) (3) Data frame handling\nI0530 00:44:37.959102 1958 log.go:172] (0xc00085c640) (3) Data frame sent\nI0530 00:44:37.959792 1958 log.go:172] (0xc00095c000) Data frame received for 3\nI0530 00:44:37.959818 1958 log.go:172] (0xc00085c640) (3) Data frame handling\nI0530 00:44:37.959830 1958 log.go:172] (0xc00085c640) (3) Data frame sent\nI0530 00:44:37.959844 1958 log.go:172] (0xc00095c000) Data frame received for 5\nI0530 00:44:37.959852 1958 log.go:172] (0xc0006d25a0) (5) Data frame handling\nI0530 00:44:37.959861 1958 log.go:172] (0xc0006d25a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.57.158:80/\nI0530 00:44:37.965541 1958 log.go:172] (0xc00095c000) Data frame received for 3\nI0530 00:44:37.965569 1958 log.go:172] (0xc00085c640) (3) Data frame handling\nI0530 00:44:37.965602 1958 log.go:172] (0xc00085c640) (3) Data frame sent\nI0530 00:44:37.965787 1958 log.go:172] (0xc00095c000) Data frame received for 3\nI0530 00:44:37.965813 1958 log.go:172] (0xc00085c640) (3) Data frame handling\nI0530 00:44:37.965823 1958 log.go:172] (0xc00085c640) (3) Data frame sent\nI0530 00:44:37.965835 1958 log.go:172] (0xc00095c000) Data frame received for 5\nI0530 00:44:37.965843 1958 log.go:172] (0xc0006d25a0) (5) Data frame handling\nI0530 00:44:37.965850 1958 log.go:172] (0xc0006d25a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.57.158:80/\nI0530 00:44:37.968861 1958 log.go:172] (0xc00095c000) Data frame received for 3\nI0530 00:44:37.968879 1958 log.go:172] (0xc00085c640) (3) Data frame handling\nI0530 00:44:37.968898 1958 log.go:172] (0xc00085c640) (3) Data frame sent\nI0530 00:44:37.969394 1958 log.go:172] (0xc00095c000) Data frame received for 5\nI0530 00:44:37.969423 1958 log.go:172] (0xc0006d25a0) (5) Data frame handling\nI0530 00:44:37.969434 1958 log.go:172] (0xc0006d25a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.57.158:80/\nI0530 00:44:37.969448 1958 log.go:172] (0xc00095c000) Data frame received for 3\nI0530 00:44:37.969467 1958 log.go:172] (0xc00085c640) (3) Data frame handling\nI0530 00:44:37.969479 1958 log.go:172] (0xc00085c640) (3) Data frame sent\nI0530 00:44:37.972484 1958 log.go:172] (0xc00095c000) Data frame received for 3\nI0530 00:44:37.972517 1958 log.go:172] (0xc00085c640) (3) Data frame handling\nI0530 00:44:37.972545 1958 log.go:172] (0xc00085c640) (3) Data frame sent\nI0530 00:44:37.973696 1958 log.go:172] (0xc00095c000) Data frame received for 3\nI0530 00:44:37.973719 1958 log.go:172] (0xc00085c640) (3) Data frame handling\nI0530 00:44:37.973750 
1958 log.go:172] (0xc00085c640) (3) Data frame sent\nI0530 00:44:37.973780 1958 log.go:172] (0xc00095c000) Data frame received for 5\nI0530 00:44:37.973806 1958 log.go:172] (0xc0006d25a0) (5) Data frame handling\nI0530 00:44:37.973846 1958 log.go:172] (0xc0006d25a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.57.158:80/\nI0530 00:44:37.976841 1958 log.go:172] (0xc00095c000) Data frame received for 3\nI0530 00:44:37.976858 1958 log.go:172] (0xc00085c640) (3) Data frame handling\nI0530 00:44:37.976873 1958 log.go:172] (0xc00085c640) (3) Data frame sent\nI0530 00:44:37.977989 1958 log.go:172] (0xc00095c000) Data frame received for 5\nI0530 00:44:37.978008 1958 log.go:172] (0xc0006d25a0) (5) Data frame handling\nI0530 00:44:37.978015 1958 log.go:172] (0xc0006d25a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.57.158:80/\nI0530 00:44:37.978026 1958 log.go:172] (0xc00095c000) Data frame received for 3\nI0530 00:44:37.978031 1958 log.go:172] (0xc00085c640) (3) Data frame handling\nI0530 00:44:37.978037 1958 log.go:172] (0xc00085c640) (3) Data frame sent\nI0530 00:44:37.981598 1958 log.go:172] (0xc00095c000) Data frame received for 3\nI0530 00:44:37.981617 1958 log.go:172] (0xc00085c640) (3) Data frame handling\nI0530 00:44:37.981650 1958 log.go:172] (0xc00085c640) (3) Data frame sent\nI0530 00:44:37.981808 1958 log.go:172] (0xc00095c000) Data frame received for 5\nI0530 00:44:37.981845 1958 log.go:172] (0xc0006d25a0) (5) Data frame handling\nI0530 00:44:37.981858 1958 log.go:172] (0xc0006d25a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.57.158:80/\nI0530 00:44:37.981871 1958 log.go:172] (0xc00095c000) Data frame received for 3\nI0530 00:44:37.981881 1958 log.go:172] (0xc00085c640) (3) Data frame handling\nI0530 00:44:37.981898 1958 log.go:172] (0xc00085c640) (3) Data frame sent\nI0530 00:44:37.984818 1958 log.go:172] (0xc00095c000) Data frame received for 3\nI0530 00:44:37.984851 1958 log.go:172] (0xc00085c640) (3) Data frame handling\nI0530 00:44:37.984899 1958 log.go:172] (0xc00085c640) (3) Data frame sent\nI0530 00:44:37.985442 1958 log.go:172] (0xc00095c000) Data frame received for 5\nI0530 00:44:37.985471 1958 log.go:172] (0xc0006d25a0) (5) Data frame handling\nI0530 00:44:37.985506 1958 log.go:172] (0xc0006d25a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.57.158:80/\nI0530 00:44:37.985669 1958 log.go:172] (0xc00095c000) Data frame received for 3\nI0530 00:44:37.985711 1958 log.go:172] (0xc00085c640) (3) Data frame handling\nI0530 00:44:37.985746 1958 log.go:172] (0xc00085c640) (3) Data frame sent\nI0530 00:44:37.991102 1958 log.go:172] (0xc00095c000) Data frame received for 3\nI0530 00:44:37.991128 1958 log.go:172] (0xc00085c640) (3) Data frame handling\nI0530 00:44:37.991146 1958 log.go:172] (0xc00085c640) (3) Data frame sent\nI0530 00:44:37.992022 1958 log.go:172] (0xc00095c000) Data frame received for 5\nI0530 00:44:37.992041 1958 log.go:172] (0xc0006d25a0) (5) Data frame handling\nI0530 00:44:37.992062 1958 log.go:172] (0xc0006d25a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.57.158:80/\nI0530 00:44:37.992086 1958 log.go:172] (0xc00095c000) Data frame received for 3\nI0530 00:44:37.992112 1958 log.go:172] (0xc00085c640) (3) Data frame handling\nI0530 00:44:37.992129 1958 log.go:172] (0xc00085c640) (3) Data frame sent\nI0530 00:44:37.997030 1958 log.go:172] (0xc00095c000) Data frame received for 3\nI0530 00:44:37.997432 1958 
log.go:172] (0xc00085c640) (3) Data frame handling\nI0530 00:44:37.997479 1958 log.go:172] (0xc00085c640) (3) Data frame sent\nI0530 00:44:37.997513 1958 log.go:172] (0xc00095c000) Data frame received for 3\nI0530 00:44:37.997527 1958 log.go:172] (0xc00085c640) (3) Data frame handling\nI0530 00:44:37.997537 1958 log.go:172] (0xc00085c640) (3) Data frame sent\nI0530 00:44:37.997557 1958 log.go:172] (0xc00095c000) Data frame received for 5\nI0530 00:44:37.997575 1958 log.go:172] (0xc0006d25a0) (5) Data frame handling\nI0530 00:44:37.997583 1958 log.go:172] (0xc0006d25a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.57.158:80/\nI0530 00:44:38.001943 1958 log.go:172] (0xc00095c000) Data frame received for 3\nI0530 00:44:38.001986 1958 log.go:172] (0xc00085c640) (3) Data frame handling\nI0530 00:44:38.002043 1958 log.go:172] (0xc00085c640) (3) Data frame sent\nI0530 00:44:38.002350 1958 log.go:172] (0xc00095c000) Data frame received for 3\nI0530 00:44:38.002388 1958 log.go:172] (0xc00085c640) (3) Data frame handling\nI0530 00:44:38.002431 1958 log.go:172] (0xc00085c640) (3) Data frame sent\nI0530 00:44:38.002465 1958 log.go:172] (0xc00095c000) Data frame received for 5\nI0530 00:44:38.002488 1958 log.go:172] (0xc0006d25a0) (5) Data frame handling\nI0530 00:44:38.002524 1958 log.go:172] (0xc0006d25a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.57.158:80/\nI0530 00:44:38.006989 1958 log.go:172] (0xc00095c000) Data frame received for 3\nI0530 00:44:38.007030 1958 log.go:172] (0xc00085c640) (3) Data frame handling\nI0530 00:44:38.007069 1958 log.go:172] (0xc00085c640) (3) Data frame sent\nI0530 00:44:38.007648 1958 log.go:172] (0xc00095c000) Data frame received for 5\nI0530 00:44:38.007672 1958 log.go:172] (0xc0006d25a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.57.158:80/\nI0530 00:44:38.007701 1958 log.go:172] (0xc00095c000) Data frame received for 3\nI0530 00:44:38.007758 1958 log.go:172] (0xc00085c640) (3) Data frame handling\nI0530 00:44:38.007784 1958 log.go:172] (0xc00085c640) (3) Data frame sent\nI0530 00:44:38.007815 1958 log.go:172] (0xc0006d25a0) (5) Data frame sent\nI0530 00:44:38.011134 1958 log.go:172] (0xc00095c000) Data frame received for 3\nI0530 00:44:38.011159 1958 log.go:172] (0xc00085c640) (3) Data frame handling\nI0530 00:44:38.011180 1958 log.go:172] (0xc00085c640) (3) Data frame sent\nI0530 00:44:38.011438 1958 log.go:172] (0xc00095c000) Data frame received for 5\nI0530 00:44:38.011452 1958 log.go:172] (0xc0006d25a0) (5) Data frame handling\nI0530 00:44:38.011464 1958 log.go:172] (0xc0006d25a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0530 00:44:38.011472 1958 log.go:172] (0xc00095c000) Data frame received for 5\nI0530 00:44:38.011493 1958 log.go:172] (0xc0006d25a0) (5) Data frame handling\nI0530 00:44:38.011503 1958 log.go:172] (0xc0006d25a0) (5) Data frame sent\n http://10.107.57.158:80/\nI0530 00:44:38.011527 1958 log.go:172] (0xc00095c000) Data frame received for 3\nI0530 00:44:38.011555 1958 log.go:172] (0xc00085c640) (3) Data frame handling\nI0530 00:44:38.011588 1958 log.go:172] (0xc00085c640) (3) Data frame sent\nI0530 00:44:38.016576 1958 log.go:172] (0xc00095c000) Data frame received for 3\nI0530 00:44:38.016590 1958 log.go:172] (0xc00085c640) (3) Data frame handling\nI0530 00:44:38.016600 1958 log.go:172] (0xc00085c640) (3) Data frame sent\nI0530 00:44:38.017596 1958 log.go:172] (0xc00095c000) Data frame received for 3\nI0530 00:44:38.017623 
1958 log.go:172] (0xc00085c640) (3) Data frame handling\nI0530 00:44:38.017685 1958 log.go:172] (0xc00095c000) Data frame received for 5\nI0530 00:44:38.017702 1958 log.go:172] (0xc0006d25a0) (5) Data frame handling\nI0530 00:44:38.021294 1958 log.go:172] (0xc00095c000) Data frame received for 1\nI0530 00:44:38.021313 1958 log.go:172] (0xc000b7a000) (1) Data frame handling\nI0530 00:44:38.021329 1958 log.go:172] (0xc000b7a000) (1) Data frame sent\nI0530 00:44:38.021342 1958 log.go:172] (0xc00095c000) (0xc000b7a000) Stream removed, broadcasting: 1\nI0530 00:44:38.021355 1958 log.go:172] (0xc00095c000) Go away received\nI0530 00:44:38.021669 1958 log.go:172] (0xc00095c000) (0xc000b7a000) Stream removed, broadcasting: 1\nI0530 00:44:38.021688 1958 log.go:172] (0xc00095c000) (0xc00085c640) Stream removed, broadcasting: 3\nI0530 00:44:38.021696 1958 log.go:172] (0xc00095c000) (0xc0006d25a0) Stream removed, broadcasting: 5\n" May 30 00:44:38.027: INFO: stdout: "\naffinity-clusterip-timeout-rbb9r\naffinity-clusterip-timeout-rbb9r\naffinity-clusterip-timeout-rbb9r\naffinity-clusterip-timeout-rbb9r\naffinity-clusterip-timeout-rbb9r\naffinity-clusterip-timeout-rbb9r\naffinity-clusterip-timeout-rbb9r\naffinity-clusterip-timeout-rbb9r\naffinity-clusterip-timeout-rbb9r\naffinity-clusterip-timeout-rbb9r\naffinity-clusterip-timeout-rbb9r\naffinity-clusterip-timeout-rbb9r\naffinity-clusterip-timeout-rbb9r\naffinity-clusterip-timeout-rbb9r\naffinity-clusterip-timeout-rbb9r\naffinity-clusterip-timeout-rbb9r" May 30 00:44:38.027: INFO: Received response from host: May 30 00:44:38.027: INFO: Received response from host: affinity-clusterip-timeout-rbb9r May 30 00:44:38.027: INFO: Received response from host: affinity-clusterip-timeout-rbb9r May 30 00:44:38.027: INFO: Received response from host: affinity-clusterip-timeout-rbb9r May 30 00:44:38.027: INFO: Received response from host: affinity-clusterip-timeout-rbb9r May 30 00:44:38.027: INFO: Received response from host: affinity-clusterip-timeout-rbb9r May 30 00:44:38.027: INFO: Received response from host: affinity-clusterip-timeout-rbb9r May 30 00:44:38.027: INFO: Received response from host: affinity-clusterip-timeout-rbb9r May 30 00:44:38.027: INFO: Received response from host: affinity-clusterip-timeout-rbb9r May 30 00:44:38.027: INFO: Received response from host: affinity-clusterip-timeout-rbb9r May 30 00:44:38.027: INFO: Received response from host: affinity-clusterip-timeout-rbb9r May 30 00:44:38.027: INFO: Received response from host: affinity-clusterip-timeout-rbb9r May 30 00:44:38.027: INFO: Received response from host: affinity-clusterip-timeout-rbb9r May 30 00:44:38.027: INFO: Received response from host: affinity-clusterip-timeout-rbb9r May 30 00:44:38.027: INFO: Received response from host: affinity-clusterip-timeout-rbb9r May 30 00:44:38.027: INFO: Received response from host: affinity-clusterip-timeout-rbb9r May 30 00:44:38.027: INFO: Received response from host: affinity-clusterip-timeout-rbb9r May 30 00:44:38.027: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5568 execpod-affinitynxzlb -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.107.57.158:80/' May 30 00:44:38.228: INFO: stderr: "I0530 00:44:38.150334 1978 log.go:172] (0xc0009e8dc0) (0xc000bb81e0) Create stream\nI0530 00:44:38.150401 1978 log.go:172] (0xc0009e8dc0) (0xc000bb81e0) Stream added, broadcasting: 1\nI0530 00:44:38.155616 1978 log.go:172] (0xc0009e8dc0) Reply frame received for 
1\nI0530 00:44:38.155655 1978 log.go:172] (0xc0009e8dc0) (0xc0005bcf00) Create stream\nI0530 00:44:38.155669 1978 log.go:172] (0xc0009e8dc0) (0xc0005bcf00) Stream added, broadcasting: 3\nI0530 00:44:38.156466 1978 log.go:172] (0xc0009e8dc0) Reply frame received for 3\nI0530 00:44:38.156512 1978 log.go:172] (0xc0009e8dc0) (0xc000306320) Create stream\nI0530 00:44:38.156528 1978 log.go:172] (0xc0009e8dc0) (0xc000306320) Stream added, broadcasting: 5\nI0530 00:44:38.157307 1978 log.go:172] (0xc0009e8dc0) Reply frame received for 5\nI0530 00:44:38.216536 1978 log.go:172] (0xc0009e8dc0) Data frame received for 5\nI0530 00:44:38.216564 1978 log.go:172] (0xc000306320) (5) Data frame handling\nI0530 00:44:38.216585 1978 log.go:172] (0xc000306320) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.107.57.158:80/\nI0530 00:44:38.218682 1978 log.go:172] (0xc0009e8dc0) Data frame received for 3\nI0530 00:44:38.218701 1978 log.go:172] (0xc0005bcf00) (3) Data frame handling\nI0530 00:44:38.218716 1978 log.go:172] (0xc0005bcf00) (3) Data frame sent\nI0530 00:44:38.219128 1978 log.go:172] (0xc0009e8dc0) Data frame received for 3\nI0530 00:44:38.219147 1978 log.go:172] (0xc0005bcf00) (3) Data frame handling\nI0530 00:44:38.219246 1978 log.go:172] (0xc0009e8dc0) Data frame received for 5\nI0530 00:44:38.219278 1978 log.go:172] (0xc000306320) (5) Data frame handling\nI0530 00:44:38.222612 1978 log.go:172] (0xc0009e8dc0) Data frame received for 1\nI0530 00:44:38.222639 1978 log.go:172] (0xc000bb81e0) (1) Data frame handling\nI0530 00:44:38.222664 1978 log.go:172] (0xc000bb81e0) (1) Data frame sent\nI0530 00:44:38.222696 1978 log.go:172] (0xc0009e8dc0) (0xc000bb81e0) Stream removed, broadcasting: 1\nI0530 00:44:38.222720 1978 log.go:172] (0xc0009e8dc0) Go away received\nI0530 00:44:38.223150 1978 log.go:172] (0xc0009e8dc0) (0xc000bb81e0) Stream removed, broadcasting: 1\nI0530 00:44:38.223170 1978 log.go:172] (0xc0009e8dc0) (0xc0005bcf00) Stream removed, broadcasting: 3\nI0530 00:44:38.223180 1978 log.go:172] (0xc0009e8dc0) (0xc000306320) Stream removed, broadcasting: 5\n" May 30 00:44:38.228: INFO: stdout: "affinity-clusterip-timeout-rbb9r" May 30 00:44:53.228: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5568 execpod-affinitynxzlb -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.107.57.158:80/' May 30 00:44:53.491: INFO: stderr: "I0530 00:44:53.369447 1999 log.go:172] (0xc000997550) (0xc0009d4500) Create stream\nI0530 00:44:53.369511 1999 log.go:172] (0xc000997550) (0xc0009d4500) Stream added, broadcasting: 1\nI0530 00:44:53.380503 1999 log.go:172] (0xc000997550) Reply frame received for 1\nI0530 00:44:53.380556 1999 log.go:172] (0xc000997550) (0xc000656dc0) Create stream\nI0530 00:44:53.380570 1999 log.go:172] (0xc000997550) (0xc000656dc0) Stream added, broadcasting: 3\nI0530 00:44:53.386765 1999 log.go:172] (0xc000997550) Reply frame received for 3\nI0530 00:44:53.386796 1999 log.go:172] (0xc000997550) (0xc000546320) Create stream\nI0530 00:44:53.386808 1999 log.go:172] (0xc000997550) (0xc000546320) Stream added, broadcasting: 5\nI0530 00:44:53.389264 1999 log.go:172] (0xc000997550) Reply frame received for 5\nI0530 00:44:53.477417 1999 log.go:172] (0xc000997550) Data frame received for 5\nI0530 00:44:53.477452 1999 log.go:172] (0xc000546320) (5) Data frame handling\nI0530 00:44:53.477473 1999 log.go:172] (0xc000546320) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 
http://10.107.57.158:80/\nI0530 00:44:53.482700 1999 log.go:172] (0xc000997550) Data frame received for 3\nI0530 00:44:53.482719 1999 log.go:172] (0xc000656dc0) (3) Data frame handling\nI0530 00:44:53.482734 1999 log.go:172] (0xc000656dc0) (3) Data frame sent\nI0530 00:44:53.483421 1999 log.go:172] (0xc000997550) Data frame received for 3\nI0530 00:44:53.483444 1999 log.go:172] (0xc000656dc0) (3) Data frame handling\nI0530 00:44:53.483473 1999 log.go:172] (0xc000997550) Data frame received for 5\nI0530 00:44:53.483496 1999 log.go:172] (0xc000546320) (5) Data frame handling\nI0530 00:44:53.485793 1999 log.go:172] (0xc000997550) Data frame received for 1\nI0530 00:44:53.485819 1999 log.go:172] (0xc0009d4500) (1) Data frame handling\nI0530 00:44:53.485844 1999 log.go:172] (0xc0009d4500) (1) Data frame sent\nI0530 00:44:53.485867 1999 log.go:172] (0xc000997550) (0xc0009d4500) Stream removed, broadcasting: 1\nI0530 00:44:53.485985 1999 log.go:172] (0xc000997550) Go away received\nI0530 00:44:53.486455 1999 log.go:172] (0xc000997550) (0xc0009d4500) Stream removed, broadcasting: 1\nI0530 00:44:53.486497 1999 log.go:172] (0xc000997550) (0xc000656dc0) Stream removed, broadcasting: 3\nI0530 00:44:53.486519 1999 log.go:172] (0xc000997550) (0xc000546320) Stream removed, broadcasting: 5\n" May 30 00:44:53.491: INFO: stdout: "affinity-clusterip-timeout-kg9nq" May 30 00:44:53.491: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-5568, will wait for the garbage collector to delete the pods May 30 00:44:53.607: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 6.660219ms May 30 00:44:54.407: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 800.241937ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:45:05.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5568" for this suite. 
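What the affinity test above actually verified: with ClientIP session affinity, all sixteen back-to-back requests from the exec pod returned the same backend (affinity-clusterip-timeout-rbb9r); after the client sat idle for about fifteen seconds, past the configured timeout, the affinity entry expired and the next request reached a different backend (affinity-clusterip-timeout-kg9nq). The Service-side knob is sessionAffinityConfig.clientIP.timeoutSeconds; a minimal sketch, in which the selector, ports, and 10-second timeout are illustrative assumptions:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: affinity-clusterip-timeout
spec:
  selector:
    name: affinity-clusterip-timeout
  ports:
  - port: 80
    targetPort: 9376
  sessionAffinity: ClientIP        # pin each client IP to one backend
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10           # affinity entry expires after 10s of client inactivity
EOF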
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:54.502 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":182,"skipped":2867,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:45:05.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 30 00:45:05.503: INFO: Waiting up to 5m0s for pod "downwardapi-volume-588defdd-90ad-41a0-a11c-236871ee9169" in namespace "downward-api-8690" to be "Succeeded or Failed" May 30 00:45:05.506: INFO: Pod "downwardapi-volume-588defdd-90ad-41a0-a11c-236871ee9169": Phase="Pending", Reason="", readiness=false. Elapsed: 2.993503ms May 30 00:45:07.512: INFO: Pod "downwardapi-volume-588defdd-90ad-41a0-a11c-236871ee9169": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009302903s May 30 00:45:09.517: INFO: Pod "downwardapi-volume-588defdd-90ad-41a0-a11c-236871ee9169": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01452014s STEP: Saw pod success May 30 00:45:09.517: INFO: Pod "downwardapi-volume-588defdd-90ad-41a0-a11c-236871ee9169" satisfied condition "Succeeded or Failed" May 30 00:45:09.520: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-588defdd-90ad-41a0-a11c-236871ee9169 container client-container: STEP: delete the pod May 30 00:45:09.634: INFO: Waiting for pod downwardapi-volume-588defdd-90ad-41a0-a11c-236871ee9169 to disappear May 30 00:45:09.644: INFO: Pod downwardapi-volume-588defdd-90ad-41a0-a11c-236871ee9169 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:45:09.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8690" for this suite. 
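The "podname only" case above mounts a downwardAPI volume that projects metadata.name into a single file, which the client container then reads back. A minimal equivalent pod; the pod name, image, and mount path are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.31
    command: ["/bin/sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name   # the volume file contains the pod's own name
EOF
kubectl logs downwardapi-volume-demo   # expect: downwardapi-volume-demo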
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":288,"completed":183,"skipped":2882,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:45:09.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-110 STEP: creating a selector STEP: Creating the service pods in kubernetes May 30 00:45:09.715: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 30 00:45:09.837: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 30 00:45:11.944: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 30 00:45:13.841: INFO: The status of Pod netserver-0 is Running (Ready = false) May 30 00:45:15.841: INFO: The status of Pod netserver-0 is Running (Ready = false) May 30 00:45:17.841: INFO: The status of Pod netserver-0 is Running (Ready = false) May 30 00:45:19.842: INFO: The status of Pod netserver-0 is Running (Ready = false) May 30 00:45:21.841: INFO: The status of Pod netserver-0 is Running (Ready = false) May 30 00:45:23.841: INFO: The status of Pod netserver-0 is Running (Ready = false) May 30 00:45:25.843: INFO: The status of Pod netserver-0 is Running (Ready = true) May 30 00:45:25.849: INFO: The status of Pod netserver-1 is Running (Ready = false) May 30 00:45:27.854: INFO: The status of Pod netserver-1 is Running (Ready = false) May 30 00:45:29.860: INFO: The status of Pod netserver-1 is Running (Ready = false) May 30 00:45:31.853: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 30 00:45:37.960: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.191 8081 | grep -v '^\s*$'] Namespace:pod-network-test-110 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 00:45:37.960: INFO: >>> kubeConfig: /root/.kube/config I0530 00:45:37.997654 7 log.go:172] (0xc002b289a0) (0xc0011f0aa0) Create stream I0530 00:45:37.997712 7 log.go:172] (0xc002b289a0) (0xc0011f0aa0) Stream added, broadcasting: 1 I0530 00:45:38.000445 7 log.go:172] (0xc002b289a0) Reply frame received for 1 I0530 00:45:38.000485 7 log.go:172] (0xc002b289a0) (0xc0018bfae0) Create stream I0530 00:45:38.000501 7 log.go:172] (0xc002b289a0) (0xc0018bfae0) Stream added, broadcasting: 3 I0530 00:45:38.001916 7 log.go:172] (0xc002b289a0) Reply frame received for 3 I0530 00:45:38.001990 7 log.go:172] (0xc002b289a0) (0xc00151a320) Create stream I0530 00:45:38.002010 7 log.go:172] (0xc002b289a0) (0xc00151a320) Stream added, broadcasting: 5 I0530 00:45:38.002983 7 log.go:172] (0xc002b289a0) Reply frame 
received for 5 I0530 00:45:39.088392 7 log.go:172] (0xc002b289a0) Data frame received for 3 I0530 00:45:39.088447 7 log.go:172] (0xc0018bfae0) (3) Data frame handling I0530 00:45:39.088467 7 log.go:172] (0xc0018bfae0) (3) Data frame sent I0530 00:45:39.088482 7 log.go:172] (0xc002b289a0) Data frame received for 3 I0530 00:45:39.088510 7 log.go:172] (0xc0018bfae0) (3) Data frame handling I0530 00:45:39.088608 7 log.go:172] (0xc002b289a0) Data frame received for 5 I0530 00:45:39.088630 7 log.go:172] (0xc00151a320) (5) Data frame handling I0530 00:45:39.091716 7 log.go:172] (0xc002b289a0) Data frame received for 1 I0530 00:45:39.091766 7 log.go:172] (0xc0011f0aa0) (1) Data frame handling I0530 00:45:39.091817 7 log.go:172] (0xc0011f0aa0) (1) Data frame sent I0530 00:45:39.091844 7 log.go:172] (0xc002b289a0) (0xc0011f0aa0) Stream removed, broadcasting: 1 I0530 00:45:39.091871 7 log.go:172] (0xc002b289a0) Go away received I0530 00:45:39.091960 7 log.go:172] (0xc002b289a0) (0xc0011f0aa0) Stream removed, broadcasting: 1 I0530 00:45:39.091999 7 log.go:172] (0xc002b289a0) (0xc0018bfae0) Stream removed, broadcasting: 3 I0530 00:45:39.092020 7 log.go:172] (0xc002b289a0) (0xc00151a320) Stream removed, broadcasting: 5 May 30 00:45:39.092: INFO: Found all expected endpoints: [netserver-0] May 30 00:45:39.096: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.174 8081 | grep -v '^\s*$'] Namespace:pod-network-test-110 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 00:45:39.096: INFO: >>> kubeConfig: /root/.kube/config I0530 00:45:39.129081 7 log.go:172] (0xc002f6e370) (0xc00151aa00) Create stream I0530 00:45:39.129456 7 log.go:172] (0xc002f6e370) (0xc00151aa00) Stream added, broadcasting: 1 I0530 00:45:39.131795 7 log.go:172] (0xc002f6e370) Reply frame received for 1 I0530 00:45:39.131832 7 log.go:172] (0xc002f6e370) (0xc0018bfea0) Create stream I0530 00:45:39.131840 7 log.go:172] (0xc002f6e370) (0xc0018bfea0) Stream added, broadcasting: 3 I0530 00:45:39.132647 7 log.go:172] (0xc002f6e370) Reply frame received for 3 I0530 00:45:39.132689 7 log.go:172] (0xc002f6e370) (0xc0011f0b40) Create stream I0530 00:45:39.132781 7 log.go:172] (0xc002f6e370) (0xc0011f0b40) Stream added, broadcasting: 5 I0530 00:45:39.133863 7 log.go:172] (0xc002f6e370) Reply frame received for 5 I0530 00:45:40.214218 7 log.go:172] (0xc002f6e370) Data frame received for 5 I0530 00:45:40.214262 7 log.go:172] (0xc002f6e370) Data frame received for 3 I0530 00:45:40.214289 7 log.go:172] (0xc0018bfea0) (3) Data frame handling I0530 00:45:40.214319 7 log.go:172] (0xc0018bfea0) (3) Data frame sent I0530 00:45:40.214352 7 log.go:172] (0xc0011f0b40) (5) Data frame handling I0530 00:45:40.214425 7 log.go:172] (0xc002f6e370) Data frame received for 3 I0530 00:45:40.214470 7 log.go:172] (0xc0018bfea0) (3) Data frame handling I0530 00:45:40.216308 7 log.go:172] (0xc002f6e370) Data frame received for 1 I0530 00:45:40.216335 7 log.go:172] (0xc00151aa00) (1) Data frame handling I0530 00:45:40.216362 7 log.go:172] (0xc00151aa00) (1) Data frame sent I0530 00:45:40.216382 7 log.go:172] (0xc002f6e370) (0xc00151aa00) Stream removed, broadcasting: 1 I0530 00:45:40.216401 7 log.go:172] (0xc002f6e370) Go away received I0530 00:45:40.216598 7 log.go:172] (0xc002f6e370) (0xc00151aa00) Stream removed, broadcasting: 1 I0530 00:45:40.216630 7 log.go:172] (0xc002f6e370) (0xc0018bfea0) Stream removed, broadcasting: 3 I0530 00:45:40.216646 7 log.go:172] 
(0xc002f6e370) (0xc0011f0b40) Stream removed, broadcasting: 5 May 30 00:45:40.216: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:45:40.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-110" for this suite. • [SLOW TEST:30.574 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":184,"skipped":2917,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:45:40.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-2c25afb3-66ae-4a02-bcb1-894607789a02 STEP: Creating a pod to test consume configMaps May 30 00:45:40.358: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7a24deb1-c85f-45e5-950f-72a369c3b590" in namespace "projected-8592" to be "Succeeded or Failed" May 30 00:45:40.396: INFO: Pod "pod-projected-configmaps-7a24deb1-c85f-45e5-950f-72a369c3b590": Phase="Pending", Reason="", readiness=false. Elapsed: 37.619087ms May 30 00:45:42.579: INFO: Pod "pod-projected-configmaps-7a24deb1-c85f-45e5-950f-72a369c3b590": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220899401s May 30 00:45:44.584: INFO: Pod "pod-projected-configmaps-7a24deb1-c85f-45e5-950f-72a369c3b590": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.225688903s STEP: Saw pod success May 30 00:45:44.584: INFO: Pod "pod-projected-configmaps-7a24deb1-c85f-45e5-950f-72a369c3b590" satisfied condition "Succeeded or Failed" May 30 00:45:44.588: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-7a24deb1-c85f-45e5-950f-72a369c3b590 container projected-configmap-volume-test: STEP: delete the pod May 30 00:45:44.711: INFO: Waiting for pod pod-projected-configmaps-7a24deb1-c85f-45e5-950f-72a369c3b590 to disappear May 30 00:45:44.713: INFO: Pod pod-projected-configmaps-7a24deb1-c85f-45e5-950f-72a369c3b590 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:45:44.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8592" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":185,"skipped":2919,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:45:44.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:45:44.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-1883" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":288,"completed":186,"skipped":2967,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:45:44.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-7vsc STEP: Creating a pod to test atomic-volume-subpath May 30 00:45:45.128: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-7vsc" in namespace "subpath-6155" to be "Succeeded or Failed" May 30 00:45:45.146: INFO: Pod "pod-subpath-test-downwardapi-7vsc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.288708ms May 30 00:45:47.238: INFO: Pod "pod-subpath-test-downwardapi-7vsc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110345176s May 30 00:45:49.243: INFO: Pod "pod-subpath-test-downwardapi-7vsc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115107819s May 30 00:45:51.247: INFO: Pod "pod-subpath-test-downwardapi-7vsc": Phase="Running", Reason="", readiness=true. Elapsed: 6.119351562s May 30 00:45:53.269: INFO: Pod "pod-subpath-test-downwardapi-7vsc": Phase="Running", Reason="", readiness=true. Elapsed: 8.141472236s May 30 00:45:55.272: INFO: Pod "pod-subpath-test-downwardapi-7vsc": Phase="Running", Reason="", readiness=true. Elapsed: 10.144412753s May 30 00:45:57.276: INFO: Pod "pod-subpath-test-downwardapi-7vsc": Phase="Running", Reason="", readiness=true. Elapsed: 12.148146789s May 30 00:45:59.280: INFO: Pod "pod-subpath-test-downwardapi-7vsc": Phase="Running", Reason="", readiness=true. Elapsed: 14.152620057s May 30 00:46:01.285: INFO: Pod "pod-subpath-test-downwardapi-7vsc": Phase="Running", Reason="", readiness=true. Elapsed: 16.157316857s May 30 00:46:03.292: INFO: Pod "pod-subpath-test-downwardapi-7vsc": Phase="Running", Reason="", readiness=true. Elapsed: 18.163880659s May 30 00:46:05.296: INFO: Pod "pod-subpath-test-downwardapi-7vsc": Phase="Running", Reason="", readiness=true. Elapsed: 20.16834851s May 30 00:46:07.300: INFO: Pod "pod-subpath-test-downwardapi-7vsc": Phase="Running", Reason="", readiness=true. Elapsed: 22.17239192s May 30 00:46:09.305: INFO: Pod "pod-subpath-test-downwardapi-7vsc": Phase="Running", Reason="", readiness=true. Elapsed: 24.177414784s May 30 00:46:11.310: INFO: Pod "pod-subpath-test-downwardapi-7vsc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.182340693s STEP: Saw pod success May 30 00:46:11.310: INFO: Pod "pod-subpath-test-downwardapi-7vsc" satisfied condition "Succeeded or Failed" May 30 00:46:11.314: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-downwardapi-7vsc container test-container-subpath-downwardapi-7vsc: STEP: delete the pod May 30 00:46:11.358: INFO: Waiting for pod pod-subpath-test-downwardapi-7vsc to disappear May 30 00:46:11.387: INFO: Pod pod-subpath-test-downwardapi-7vsc no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-7vsc May 30 00:46:11.387: INFO: Deleting pod "pod-subpath-test-downwardapi-7vsc" in namespace "subpath-6155" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:46:11.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6155" for this suite. 
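For readers reproducing the subpath test above outside the e2e framework: the pattern is a downwardAPI volume whose file is mounted into the container through a volumeMount subPath, so the container sees a single projected file rather than the whole volume. A minimal sketch of that mechanism (all names are illustrative, not the framework-generated ones):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-downwardapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # print the single file that subPath exposes
    command: ["sh", "-c", "cat /etc/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podname
      subPath: podname        # mount one file out of the volume
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
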
• [SLOW TEST:26.430 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":288,"completed":187,"skipped":2981,"failed":0} SSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:46:11.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 30 00:46:11.468: INFO: Waiting up to 5m0s for pod "downward-api-94c1d054-e66b-4be1-859d-01857eba4880" in namespace "downward-api-2863" to be "Succeeded or Failed" May 30 00:46:11.537: INFO: Pod "downward-api-94c1d054-e66b-4be1-859d-01857eba4880": Phase="Pending", Reason="", readiness=false. Elapsed: 69.779308ms May 30 00:46:13.541: INFO: Pod "downward-api-94c1d054-e66b-4be1-859d-01857eba4880": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073824081s May 30 00:46:15.545: INFO: Pod "downward-api-94c1d054-e66b-4be1-859d-01857eba4880": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.077595516s STEP: Saw pod success May 30 00:46:15.545: INFO: Pod "downward-api-94c1d054-e66b-4be1-859d-01857eba4880" satisfied condition "Succeeded or Failed" May 30 00:46:15.548: INFO: Trying to get logs from node latest-worker2 pod downward-api-94c1d054-e66b-4be1-859d-01857eba4880 container dapi-container: STEP: delete the pod May 30 00:46:15.708: INFO: Waiting for pod downward-api-94c1d054-e66b-4be1-859d-01857eba4880 to disappear May 30 00:46:15.819: INFO: Pod downward-api-94c1d054-e66b-4be1-859d-01857eba4880 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:46:15.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2863" for this suite. 
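The Downward API test above injects the container's own resource requests and limits as environment variables via resourceFieldRef. A minimal sketch of the same mechanism (pod and variable names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-resources-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep -E 'CPU|MEMORY'"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
          divisor: 1m           # report the limit in millicores (500)
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
EOF
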
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":288,"completed":188,"skipped":2986,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:46:15.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-d64b49c0-a0ed-4187-a2e8-625a3c680f2c STEP: Creating secret with name secret-projected-all-test-volume-514e64c8-db9a-48fd-a93b-d074c4c4a2e6 STEP: Creating a pod to test Check all projections for projected volume plugin May 30 00:46:15.983: INFO: Waiting up to 5m0s for pod "projected-volume-da68df18-39d6-4896-9d42-ce5a24c30462" in namespace "projected-8860" to be "Succeeded or Failed" May 30 00:46:16.036: INFO: Pod "projected-volume-da68df18-39d6-4896-9d42-ce5a24c30462": Phase="Pending", Reason="", readiness=false. Elapsed: 52.4226ms May 30 00:46:18.130: INFO: Pod "projected-volume-da68df18-39d6-4896-9d42-ce5a24c30462": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146658667s May 30 00:46:20.135: INFO: Pod "projected-volume-da68df18-39d6-4896-9d42-ce5a24c30462": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.151387891s STEP: Saw pod success May 30 00:46:20.135: INFO: Pod "projected-volume-da68df18-39d6-4896-9d42-ce5a24c30462" satisfied condition "Succeeded or Failed" May 30 00:46:20.138: INFO: Trying to get logs from node latest-worker2 pod projected-volume-da68df18-39d6-4896-9d42-ce5a24c30462 container projected-all-volume-test: STEP: delete the pod May 30 00:46:20.250: INFO: Waiting for pod projected-volume-da68df18-39d6-4896-9d42-ce5a24c30462 to disappear May 30 00:46:20.253: INFO: Pod projected-volume-da68df18-39d6-4896-9d42-ce5a24c30462 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:46:20.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8860" for this suite. 
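The projected-volume test above combines a ConfigMap, a Secret, and downward-API metadata under one mount point using a single projected volume with multiple sources. A sketch of the same idea (resource names are hypothetical; the ConfigMap and Secret must exist first):

kubectl create configmap demo-cm --from-literal=cm-key=cm-value
kubectl create secret generic demo-secret --from-literal=secret-key=secret-value
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-all-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox
    # all three sources appear as files under the same directory
    command: ["sh", "-c", "ls -R /all-volumes && cat /all-volumes/podname"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all-volumes
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: demo-cm
      - secret:
          name: demo-secret
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF
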
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":288,"completed":189,"skipped":3010,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:46:20.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 30 00:46:20.315: INFO: Creating ReplicaSet my-hostname-basic-296c168c-f567-43ba-a514-662862bb3529 May 30 00:46:20.336: INFO: Pod name my-hostname-basic-296c168c-f567-43ba-a514-662862bb3529: Found 0 pods out of 1 May 30 00:46:25.339: INFO: Pod name my-hostname-basic-296c168c-f567-43ba-a514-662862bb3529: Found 1 pods out of 1 May 30 00:46:25.339: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-296c168c-f567-43ba-a514-662862bb3529" is running May 30 00:46:25.361: INFO: Pod "my-hostname-basic-296c168c-f567-43ba-a514-662862bb3529-pknx4" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-30 00:46:20 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-30 00:46:23 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-30 00:46:23 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-30 00:46:20 +0000 UTC Reason: Message:}]) May 30 00:46:25.362: INFO: Trying to dial the pod May 30 00:46:30.398: INFO: Controller my-hostname-basic-296c168c-f567-43ba-a514-662862bb3529: Got expected result from replica 1 [my-hostname-basic-296c168c-f567-43ba-a514-662862bb3529-pknx4]: "my-hostname-basic-296c168c-f567-43ba-a514-662862bb3529-pknx4", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:46:30.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3225" for this suite. 
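The ReplicaSet test above creates one replica of an image that answers with its own hostname, then dials the pod and checks the reply matches the pod name. A hand-rolled equivalent (the agnhost image tag is an assumption; any server that reports its hostname works):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-hostname-basic
  template:
    metadata:
      labels:
        app: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20   # assumed tag
        args: ["serve-hostname"]
        ports:
        - containerPort: 9376
EOF
kubectl get pods -l app=my-hostname-basic
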
• [SLOW TEST:10.144 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":190,"skipped":3023,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:46:30.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 30 00:46:30.465: INFO: >>> kubeConfig: /root/.kube/config May 30 00:46:33.406: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:46:43.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1604" for this suite. 
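The CRD-publishing test above registers CRDs in two different API groups and verifies both schemas show up in the served OpenAPI document. One way to observe the same behavior by hand (group and kind names are made up; the aggregated spec refreshes with a short delay):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.group-a.example.com
spec:
  group: group-a.example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        description: Foo CRD for Testing
EOF
# once the apiserver refreshes its aggregated spec, the new type is discoverable:
kubectl get --raw /openapi/v2 | grep -o 'group-a.example.com' | head -1
kubectl explain foos
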
• [SLOW TEST:13.209 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":288,"completed":191,"skipped":3046,"failed":0} [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:46:43.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath May 30 00:46:43.686: INFO: Waiting up to 5m0s for pod "var-expansion-1f66181b-fe87-44c2-9686-8154ce6fcbae" in namespace "var-expansion-4705" to be "Succeeded or Failed" May 30 00:46:43.689: INFO: Pod "var-expansion-1f66181b-fe87-44c2-9686-8154ce6fcbae": Phase="Pending", Reason="", readiness=false. Elapsed: 3.536965ms May 30 00:46:45.694: INFO: Pod "var-expansion-1f66181b-fe87-44c2-9686-8154ce6fcbae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008368578s May 30 00:46:47.699: INFO: Pod "var-expansion-1f66181b-fe87-44c2-9686-8154ce6fcbae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013505594s STEP: Saw pod success May 30 00:46:47.699: INFO: Pod "var-expansion-1f66181b-fe87-44c2-9686-8154ce6fcbae" satisfied condition "Succeeded or Failed" May 30 00:46:47.703: INFO: Trying to get logs from node latest-worker2 pod var-expansion-1f66181b-fe87-44c2-9686-8154ce6fcbae container dapi-container: STEP: delete the pod May 30 00:46:47.851: INFO: Waiting for pod var-expansion-1f66181b-fe87-44c2-9686-8154ce6fcbae to disappear May 30 00:46:47.915: INFO: Pod var-expansion-1f66181b-fe87-44c2-9686-8154ce6fcbae no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:46:47.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4705" for this suite. 
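The variable-expansion test above substitutes an environment variable into a volume subpath. The usual API surface for this is subPathExpr, which expands $(VAR) references from the container's declared environment at mount time; a minimal sketch (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    # write into the subPath mount, then show it lands under the expanded
    # per-pod directory when viewed through the full volume
    command: ["sh", "-c", "touch /subpath_mount/hello && ls /volume_mount/$POD_NAME"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: workdir
      mountPath: /volume_mount
    - name: workdir
      mountPath: /subpath_mount
      subPathExpr: $(POD_NAME)    # expanded by the kubelet, not a shell
  volumes:
  - name: workdir
    emptyDir: {}
EOF
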
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":288,"completed":192,"skipped":3046,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:46:47.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 30 00:46:47.998: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 30 00:46:50.952: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1755 create -f -' May 30 00:46:53.334: INFO: stderr: "" May 30 00:46:53.334: INFO: stdout: "e2e-test-crd-publish-openapi-9016-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 30 00:46:53.334: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1755 delete e2e-test-crd-publish-openapi-9016-crds test-foo' May 30 00:46:53.462: INFO: stderr: "" May 30 00:46:53.462: INFO: stdout: "e2e-test-crd-publish-openapi-9016-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 30 00:46:53.462: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1755 apply -f -' May 30 00:46:56.838: INFO: stderr: "" May 30 00:46:56.838: INFO: stdout: "e2e-test-crd-publish-openapi-9016-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 30 00:46:56.838: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1755 delete e2e-test-crd-publish-openapi-9016-crds test-foo' May 30 00:46:56.963: INFO: stderr: "" May 30 00:46:56.963: INFO: stdout: "e2e-test-crd-publish-openapi-9016-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 30 00:46:56.963: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1755 create -f -' May 30 00:46:59.893: INFO: rc: 1 May 30 00:46:59.893: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1755 apply -f -' May 30 00:47:03.350: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 30 00:47:03.350: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1755 create -f -' May 30 00:47:05.000: INFO: rc: 1 May 30 
00:47:05.000: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1755 apply -f -' May 30 00:47:05.762: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 30 00:47:05.762: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9016-crds' May 30 00:47:06.024: INFO: stderr: "" May 30 00:47:06.024: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9016-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 30 00:47:06.024: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9016-crds.metadata' May 30 00:47:06.302: INFO: stderr: "" May 30 00:47:06.302: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9016-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened.
Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system.
Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system.
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 30 00:47:06.303: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9016-crds.spec' May 30 00:47:06.567: INFO: stderr: "" May 30 00:47:06.567: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9016-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 30 00:47:06.567: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9016-crds.spec.bars' May 30 00:47:06.873: INFO: stderr: "" May 30 00:47:06.873: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9016-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 30 00:47:06.873: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9016-crds.spec.bars2' May 30 00:47:07.155: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:47:10.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1755" for this suite. • [SLOW TEST:22.130 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":288,"completed":193,"skipped":3057,"failed":0} SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:47:10.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 30 00:47:10.146: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30
00:47:17.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6639" for this suite. • [SLOW TEST:7.482 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":288,"completed":194,"skipped":3059,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:47:17.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command May 30 00:47:17.658: INFO: Waiting up to 5m0s for pod "var-expansion-2e1921cf-a2fb-4f68-8ded-6a87eec938c2" in namespace "var-expansion-6197" to be "Succeeded or Failed" May 30 00:47:17.696: INFO: Pod "var-expansion-2e1921cf-a2fb-4f68-8ded-6a87eec938c2": Phase="Pending", Reason="", readiness=false. Elapsed: 37.72193ms May 30 00:47:19.745: INFO: Pod "var-expansion-2e1921cf-a2fb-4f68-8ded-6a87eec938c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087331866s May 30 00:47:21.762: INFO: Pod "var-expansion-2e1921cf-a2fb-4f68-8ded-6a87eec938c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.104323011s STEP: Saw pod success May 30 00:47:21.762: INFO: Pod "var-expansion-2e1921cf-a2fb-4f68-8ded-6a87eec938c2" satisfied condition "Succeeded or Failed" May 30 00:47:21.765: INFO: Trying to get logs from node latest-worker pod var-expansion-2e1921cf-a2fb-4f68-8ded-6a87eec938c2 container dapi-container: STEP: delete the pod May 30 00:47:21.794: INFO: Waiting for pod var-expansion-2e1921cf-a2fb-4f68-8ded-6a87eec938c2 to disappear May 30 00:47:21.939: INFO: Pod var-expansion-2e1921cf-a2fb-4f68-8ded-6a87eec938c2 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:47:21.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6197" for this suite. 
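The command-substitution test above relies on Kubernetes expanding $(VAR) references in a container's command and args from the container's declared env; the expansion is done by the kubelet, not by a shell. A minimal sketch (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-command-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "hello from the environment"
    # $(MESSAGE) is substituted by Kubernetes before the command runs
    command: ["/bin/echo"]
    args: ["$(MESSAGE)"]
EOF
kubectl logs var-expansion-command-demo   # -> hello from the environment
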
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":288,"completed":195,"skipped":3106,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:47:21.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition May 30 00:47:22.148: INFO: Waiting up to 5m0s for pod "var-expansion-65007473-7599-40b3-83c4-00f9b84f06b9" in namespace "var-expansion-2969" to be "Succeeded or Failed" May 30 00:47:22.151: INFO: Pod "var-expansion-65007473-7599-40b3-83c4-00f9b84f06b9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.594293ms May 30 00:47:24.156: INFO: Pod "var-expansion-65007473-7599-40b3-83c4-00f9b84f06b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007991188s May 30 00:47:26.167: INFO: Pod "var-expansion-65007473-7599-40b3-83c4-00f9b84f06b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019193927s STEP: Saw pod success May 30 00:47:26.167: INFO: Pod "var-expansion-65007473-7599-40b3-83c4-00f9b84f06b9" satisfied condition "Succeeded or Failed" May 30 00:47:26.170: INFO: Trying to get logs from node latest-worker pod var-expansion-65007473-7599-40b3-83c4-00f9b84f06b9 container dapi-container: STEP: delete the pod May 30 00:47:26.220: INFO: Waiting for pod var-expansion-65007473-7599-40b3-83c4-00f9b84f06b9 to disappear May 30 00:47:26.230: INFO: Pod var-expansion-65007473-7599-40b3-83c4-00f9b84f06b9 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:47:26.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2969" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":288,"completed":196,"skipped":3125,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:47:26.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:47:37.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2311" for this suite. • [SLOW TEST:11.383 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":288,"completed":197,"skipped":3139,"failed":0} SS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:47:37.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-7196 STEP: creating replication controller nodeport-test in namespace services-7196 I0530 00:47:37.760585 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-7196, replica count: 2 I0530 00:47:40.811073 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0530 00:47:43.811305 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 30 00:47:43.811: INFO: Creating new exec pod May 30 00:47:48.830: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7196 execpodc9nmg -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 30 00:47:49.102: INFO: stderr: "I0530 00:47:48.978739 2308 log.go:172] (0xc00095b340) (0xc000a16640) Create stream\nI0530 00:47:48.978816 2308 log.go:172] (0xc00095b340) (0xc000a16640) Stream added, broadcasting: 1\nI0530 00:47:48.984470 2308 log.go:172] (0xc00095b340) Reply frame received for 1\nI0530 00:47:48.984528 2308 log.go:172] (0xc00095b340) (0xc00080c500) Create stream\nI0530 00:47:48.984547 2308 log.go:172] 
(0xc00095b340) (0xc00080c500) Stream added, broadcasting: 3\nI0530 00:47:48.985870 2308 log.go:172] (0xc00095b340) Reply frame received for 3\nI0530 00:47:48.985921 2308 log.go:172] (0xc00095b340) (0xc00080cdc0) Create stream\nI0530 00:47:48.985938 2308 log.go:172] (0xc00095b340) (0xc00080cdc0) Stream added, broadcasting: 5\nI0530 00:47:48.986847 2308 log.go:172] (0xc00095b340) Reply frame received for 5\nI0530 00:47:49.093055 2308 log.go:172] (0xc00095b340) Data frame received for 5\nI0530 00:47:49.093090 2308 log.go:172] (0xc00080cdc0) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nI0530 00:47:49.093296 2308 log.go:172] (0xc00080cdc0) (5) Data frame sent\nI0530 00:47:49.093524 2308 log.go:172] (0xc00095b340) Data frame received for 5\nI0530 00:47:49.093551 2308 log.go:172] (0xc00080cdc0) (5) Data frame handling\nI0530 00:47:49.093582 2308 log.go:172] (0xc00080cdc0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0530 00:47:49.093898 2308 log.go:172] (0xc00095b340) Data frame received for 3\nI0530 00:47:49.093921 2308 log.go:172] (0xc00080c500) (3) Data frame handling\nI0530 00:47:49.094060 2308 log.go:172] (0xc00095b340) Data frame received for 5\nI0530 00:47:49.094077 2308 log.go:172] (0xc00080cdc0) (5) Data frame handling\nI0530 00:47:49.096125 2308 log.go:172] (0xc00095b340) Data frame received for 1\nI0530 00:47:49.096151 2308 log.go:172] (0xc000a16640) (1) Data frame handling\nI0530 00:47:49.096185 2308 log.go:172] (0xc000a16640) (1) Data frame sent\nI0530 00:47:49.096225 2308 log.go:172] (0xc00095b340) (0xc000a16640) Stream removed, broadcasting: 1\nI0530 00:47:49.096253 2308 log.go:172] (0xc00095b340) Go away received\nI0530 00:47:49.096717 2308 log.go:172] (0xc00095b340) (0xc000a16640) Stream removed, broadcasting: 1\nI0530 00:47:49.096763 2308 log.go:172] (0xc00095b340) (0xc00080c500) Stream removed, broadcasting: 3\nI0530 00:47:49.096779 2308 log.go:172] (0xc00095b340) (0xc00080cdc0) Stream removed, broadcasting: 5\n" May 30 00:47:49.103: INFO: stdout: "" May 30 00:47:49.103: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7196 execpodc9nmg -- /bin/sh -x -c nc -zv -t -w 2 10.98.120.135 80' May 30 00:47:49.327: INFO: stderr: "I0530 00:47:49.241923 2331 log.go:172] (0xc000678840) (0xc000541f40) Create stream\nI0530 00:47:49.241981 2331 log.go:172] (0xc000678840) (0xc000541f40) Stream added, broadcasting: 1\nI0530 00:47:49.243978 2331 log.go:172] (0xc000678840) Reply frame received for 1\nI0530 00:47:49.244016 2331 log.go:172] (0xc000678840) (0xc00069c500) Create stream\nI0530 00:47:49.244031 2331 log.go:172] (0xc000678840) (0xc00069c500) Stream added, broadcasting: 3\nI0530 00:47:49.244703 2331 log.go:172] (0xc000678840) Reply frame received for 3\nI0530 00:47:49.244740 2331 log.go:172] (0xc000678840) (0xc0006a2dc0) Create stream\nI0530 00:47:49.244748 2331 log.go:172] (0xc000678840) (0xc0006a2dc0) Stream added, broadcasting: 5\nI0530 00:47:49.245675 2331 log.go:172] (0xc000678840) Reply frame received for 5\nI0530 00:47:49.319263 2331 log.go:172] (0xc000678840) Data frame received for 3\nI0530 00:47:49.319304 2331 log.go:172] (0xc00069c500) (3) Data frame handling\nI0530 00:47:49.319329 2331 log.go:172] (0xc000678840) Data frame received for 5\nI0530 00:47:49.319343 2331 log.go:172] (0xc0006a2dc0) (5) Data frame handling\nI0530 00:47:49.319355 2331 log.go:172] (0xc0006a2dc0) (5) Data frame sent\nI0530 00:47:49.319363 2331 log.go:172] (0xc000678840) 
Data frame received for 5\nI0530 00:47:49.319370 2331 log.go:172] (0xc0006a2dc0) (5) Data frame handling\n+ nc -zv -t -w 2 10.98.120.135 80\nConnection to 10.98.120.135 80 port [tcp/http] succeeded!\nI0530 00:47:49.320778 2331 log.go:172] (0xc000678840) Data frame received for 1\nI0530 00:47:49.320805 2331 log.go:172] (0xc000541f40) (1) Data frame handling\nI0530 00:47:49.320820 2331 log.go:172] (0xc000541f40) (1) Data frame sent\nI0530 00:47:49.320842 2331 log.go:172] (0xc000678840) (0xc000541f40) Stream removed, broadcasting: 1\nI0530 00:47:49.320984 2331 log.go:172] (0xc000678840) Go away received\nI0530 00:47:49.321358 2331 log.go:172] (0xc000678840) (0xc000541f40) Stream removed, broadcasting: 1\nI0530 00:47:49.321384 2331 log.go:172] (0xc000678840) (0xc00069c500) Stream removed, broadcasting: 3\nI0530 00:47:49.321394 2331 log.go:172] (0xc000678840) (0xc0006a2dc0) Stream removed, broadcasting: 5\n" May 30 00:47:49.327: INFO: stdout: "" May 30 00:47:49.327: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7196 execpodc9nmg -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30654' May 30 00:47:49.532: INFO: stderr: "I0530 00:47:49.451228 2352 log.go:172] (0xc000abc840) (0xc000b78280) Create stream\nI0530 00:47:49.451322 2352 log.go:172] (0xc000abc840) (0xc000b78280) Stream added, broadcasting: 1\nI0530 00:47:49.455611 2352 log.go:172] (0xc000abc840) Reply frame received for 1\nI0530 00:47:49.455647 2352 log.go:172] (0xc000abc840) (0xc000698dc0) Create stream\nI0530 00:47:49.455659 2352 log.go:172] (0xc000abc840) (0xc000698dc0) Stream added, broadcasting: 3\nI0530 00:47:49.456468 2352 log.go:172] (0xc000abc840) Reply frame received for 3\nI0530 00:47:49.456504 2352 log.go:172] (0xc000abc840) (0xc000690640) Create stream\nI0530 00:47:49.456515 2352 log.go:172] (0xc000abc840) (0xc000690640) Stream added, broadcasting: 5\nI0530 00:47:49.457448 2352 log.go:172] (0xc000abc840) Reply frame received for 5\nI0530 00:47:49.525833 2352 log.go:172] (0xc000abc840) Data frame received for 5\nI0530 00:47:49.525871 2352 log.go:172] (0xc000690640) (5) Data frame handling\nI0530 00:47:49.525888 2352 log.go:172] (0xc000690640) (5) Data frame sent\nI0530 00:47:49.525899 2352 log.go:172] (0xc000abc840) Data frame received for 5\nI0530 00:47:49.525910 2352 log.go:172] (0xc000690640) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30654\nConnection to 172.17.0.13 30654 port [tcp/30654] succeeded!\nI0530 00:47:49.525952 2352 log.go:172] (0xc000abc840) Data frame received for 3\nI0530 00:47:49.525988 2352 log.go:172] (0xc000698dc0) (3) Data frame handling\nI0530 00:47:49.527570 2352 log.go:172] (0xc000abc840) Data frame received for 1\nI0530 00:47:49.527592 2352 log.go:172] (0xc000b78280) (1) Data frame handling\nI0530 00:47:49.527606 2352 log.go:172] (0xc000b78280) (1) Data frame sent\nI0530 00:47:49.527627 2352 log.go:172] (0xc000abc840) (0xc000b78280) Stream removed, broadcasting: 1\nI0530 00:47:49.527685 2352 log.go:172] (0xc000abc840) Go away received\nI0530 00:47:49.527946 2352 log.go:172] (0xc000abc840) (0xc000b78280) Stream removed, broadcasting: 1\nI0530 00:47:49.527961 2352 log.go:172] (0xc000abc840) (0xc000698dc0) Stream removed, broadcasting: 3\nI0530 00:47:49.527969 2352 log.go:172] (0xc000abc840) (0xc000690640) Stream removed, broadcasting: 5\n" May 30 00:47:49.532: INFO: stdout: "" May 30 00:47:49.532: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec 
--namespace=services-7196 execpodc9nmg -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30654' May 30 00:47:49.743: INFO: stderr: "I0530 00:47:49.662729 2372 log.go:172] (0xc000a973f0) (0xc000aa8280) Create stream\nI0530 00:47:49.662786 2372 log.go:172] (0xc000a973f0) (0xc000aa8280) Stream added, broadcasting: 1\nI0530 00:47:49.668310 2372 log.go:172] (0xc000a973f0) Reply frame received for 1\nI0530 00:47:49.668357 2372 log.go:172] (0xc000a973f0) (0xc0006d4500) Create stream\nI0530 00:47:49.668377 2372 log.go:172] (0xc000a973f0) (0xc0006d4500) Stream added, broadcasting: 3\nI0530 00:47:49.669314 2372 log.go:172] (0xc000a973f0) Reply frame received for 3\nI0530 00:47:49.669340 2372 log.go:172] (0xc000a973f0) (0xc0005cc1e0) Create stream\nI0530 00:47:49.669349 2372 log.go:172] (0xc000a973f0) (0xc0005cc1e0) Stream added, broadcasting: 5\nI0530 00:47:49.670265 2372 log.go:172] (0xc000a973f0) Reply frame received for 5\nI0530 00:47:49.735733 2372 log.go:172] (0xc000a973f0) Data frame received for 5\nI0530 00:47:49.735767 2372 log.go:172] (0xc0005cc1e0) (5) Data frame handling\nI0530 00:47:49.735790 2372 log.go:172] (0xc0005cc1e0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 30654\nConnection to 172.17.0.12 30654 port [tcp/30654] succeeded!\nI0530 00:47:49.736061 2372 log.go:172] (0xc000a973f0) Data frame received for 5\nI0530 00:47:49.736094 2372 log.go:172] (0xc0005cc1e0) (5) Data frame handling\nI0530 00:47:49.736117 2372 log.go:172] (0xc000a973f0) Data frame received for 3\nI0530 00:47:49.736136 2372 log.go:172] (0xc0006d4500) (3) Data frame handling\nI0530 00:47:49.738110 2372 log.go:172] (0xc000a973f0) Data frame received for 1\nI0530 00:47:49.738142 2372 log.go:172] (0xc000aa8280) (1) Data frame handling\nI0530 00:47:49.738170 2372 log.go:172] (0xc000aa8280) (1) Data frame sent\nI0530 00:47:49.738197 2372 log.go:172] (0xc000a973f0) (0xc000aa8280) Stream removed, broadcasting: 1\nI0530 00:47:49.738254 2372 log.go:172] (0xc000a973f0) Go away received\nI0530 00:47:49.738627 2372 log.go:172] (0xc000a973f0) (0xc000aa8280) Stream removed, broadcasting: 1\nI0530 00:47:49.738650 2372 log.go:172] (0xc000a973f0) (0xc0006d4500) Stream removed, broadcasting: 3\nI0530 00:47:49.738660 2372 log.go:172] (0xc000a973f0) (0xc0005cc1e0) Stream removed, broadcasting: 5\n" May 30 00:47:49.743: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:47:49.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7196" for this suite. 
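For reference, the reachability checks above can be reproduced with a plain NodePort service: the service must answer on its DNS name and cluster IP at the service port, and on every node's address at the allocated node port. A sketch under those assumptions (image and names hypothetical; the first node address may be a hostname rather than an IP depending on the cluster):

kubectl create deployment nodeport-demo --image=nginx
kubectl expose deployment nodeport-demo --type=NodePort --port=80
NODE_PORT=$(kubectl get svc nodeport-demo -o jsonpath='{.spec.ports[0].nodePort}')
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[0].address}')
# the same probe the test runs from its exec pod:
nc -zv -t -w 2 "$NODE_IP" "$NODE_PORT"
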
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:12.128 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":288,"completed":198,"skipped":3141,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:47:49.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-9103 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in namespace statefulset-9103 May 30 00:47:49.873: INFO: Found 0 stateful pods, waiting for 1 May 30 00:47:59.878: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 30 00:47:59.919: INFO: Deleting all statefulset in ns statefulset-9103 May 30 00:47:59.934: INFO: Scaling statefulset ss to 0 May 30 00:48:09.974: INFO: Waiting for statefulset status.replicas updated to 0 May 30 00:48:09.977: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:48:09.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9103" for this suite. 
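The scale-subresource flow exercised above can also be driven from kubectl, whose scale command reads and writes the StatefulSet's /scale endpoint rather than the full object (statefulset name taken from the test above):

kubectl get statefulset ss -o jsonpath='{.spec.replicas}'   # current scale
kubectl scale statefulset ss --replicas=2                   # update via the scale subresource
kubectl get statefulset ss -o jsonpath='{.status.replicas}' # observed replicas
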
• [SLOW TEST:20.246 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":288,"completed":199,"skipped":3172,"failed":0} SSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:48:10.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:48:25.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7966" for this suite. STEP: Destroying namespace "nsdeletetest-9845" for this suite. May 30 00:48:25.299: INFO: Namespace nsdeletetest-9845 was already deleted STEP: Destroying namespace "nsdeletetest-7917" for this suite. • [SLOW TEST:15.302 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":288,"completed":200,"skipped":3175,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:48:25.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:48:36.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4065" for this suite. • [SLOW TEST:11.154 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":288,"completed":201,"skipped":3182,"failed":0} SSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:48:36.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 30 00:48:40.642: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:48:40.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6960" for this suite. 
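
Stepping back to the resourcequota-4065 record above: the assertion is that quota status captures a ReplicaSet's creation and releases the usage once it is deleted. A hedged sketch of the kind of object-count quota involved, assuming the clientset and imports from the scale example plus corev1 "k8s.io/api/core/v1" and "k8s.io/apimachinery/pkg/api/resource"; names and the limit are hypothetical:

func createReplicaSetQuota(client kubernetes.Interface, ns string) error {
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				// Object-count quota: status.used rises when a ReplicaSet
				// is created and drops back once it is deleted, which is
				// what the two "Ensuring resource quota status" steps check.
				corev1.ResourceName("count/replicasets.apps"): resource.MustParse("5"),
			},
		},
	}
	_, err := client.CoreV1().ResourceQuotas(ns).Create(context.TODO(), quota, metav1.CreateOptions{})
	return err
}
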
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":202,"skipped":3185,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:48:40.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 30 00:48:41.736: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 30 00:48:43.745: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726396521, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726396521, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726396521, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726396521, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 30 00:48:46.838: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:48:47.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5073" for this suite. STEP: Destroying namespace "webhook-5073-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.972 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":288,"completed":203,"skipped":3230,"failed":0} SSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:48:47.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments May 30 00:48:47.727: INFO: Waiting up to 5m0s for pod "client-containers-d098785d-daa1-412b-b1f3-288a88b6fb53" in namespace "containers-4134" to be "Succeeded or Failed" May 30 00:48:47.744: INFO: Pod "client-containers-d098785d-daa1-412b-b1f3-288a88b6fb53": Phase="Pending", Reason="", readiness=false. Elapsed: 16.810983ms May 30 00:48:49.781: INFO: Pod "client-containers-d098785d-daa1-412b-b1f3-288a88b6fb53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053619386s May 30 00:48:51.804: INFO: Pod "client-containers-d098785d-daa1-412b-b1f3-288a88b6fb53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.077093061s STEP: Saw pod success May 30 00:48:51.804: INFO: Pod "client-containers-d098785d-daa1-412b-b1f3-288a88b6fb53" satisfied condition "Succeeded or Failed" May 30 00:48:51.838: INFO: Trying to get logs from node latest-worker2 pod client-containers-d098785d-daa1-412b-b1f3-288a88b6fb53 container test-container: STEP: delete the pod May 30 00:48:51.883: INFO: Waiting for pod client-containers-d098785d-daa1-412b-b1f3-288a88b6fb53 to disappear May 30 00:48:51.912: INFO: Pod client-containers-d098785d-daa1-412b-b1f3-288a88b6fb53 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:48:51.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4134" for this suite. 
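
The containers-4134 record above turns on a single field: spec.containers[].args replaces the image's default CMD (the "docker cmd" of the test name) while leaving the ENTRYPOINT alone; Command would replace the ENTRYPOINT instead. A sketch with a hypothetical image and arguments, imports as before:

func argsOverridePod(client kubernetes.Interface, ns string) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Args overrides only the image's CMD; the test then reads
				// the pod log to confirm the override took effect.
				Args: []string{"echo", "overridden arguments"},
			}},
		},
	}
	_, err := client.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{})
	return err
}
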
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":288,"completed":204,"skipped":3237,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:48:51.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 30 00:49:02.221: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6681 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 00:49:02.221: INFO: >>> kubeConfig: /root/.kube/config I0530 00:49:02.262356 7 log.go:172] (0xc000a6a790) (0xc002134960) Create stream I0530 00:49:02.262392 7 log.go:172] (0xc000a6a790) (0xc002134960) Stream added, broadcasting: 1 I0530 00:49:02.264545 7 log.go:172] (0xc000a6a790) Reply frame received for 1 I0530 00:49:02.264587 7 log.go:172] (0xc000a6a790) (0xc001386780) Create stream I0530 00:49:02.264601 7 log.go:172] (0xc000a6a790) (0xc001386780) Stream added, broadcasting: 3 I0530 00:49:02.265824 7 log.go:172] (0xc000a6a790) Reply frame received for 3 I0530 00:49:02.265856 7 log.go:172] (0xc000a6a790) (0xc002134a00) Create stream I0530 00:49:02.265872 7 log.go:172] (0xc000a6a790) (0xc002134a00) Stream added, broadcasting: 5 I0530 00:49:02.266944 7 log.go:172] (0xc000a6a790) Reply frame received for 5 I0530 00:49:02.335049 7 log.go:172] (0xc000a6a790) Data frame received for 5 I0530 00:49:02.335078 7 log.go:172] (0xc002134a00) (5) Data frame handling I0530 00:49:02.335103 7 log.go:172] (0xc000a6a790) Data frame received for 3 I0530 00:49:02.335120 7 log.go:172] (0xc001386780) (3) Data frame handling I0530 00:49:02.335133 7 log.go:172] (0xc001386780) (3) Data frame sent I0530 00:49:02.335142 7 log.go:172] (0xc000a6a790) Data frame received for 3 I0530 00:49:02.335150 7 log.go:172] (0xc001386780) (3) Data frame handling I0530 00:49:02.336989 7 log.go:172] (0xc000a6a790) Data frame received for 1 I0530 00:49:02.337034 7 log.go:172] (0xc002134960) (1) Data frame handling I0530 00:49:02.337060 7 log.go:172] (0xc002134960) (1) Data frame sent I0530 00:49:02.337085 7 log.go:172] (0xc000a6a790) (0xc002134960) Stream removed, broadcasting: 1 I0530 00:49:02.337450 7 log.go:172] (0xc000a6a790) (0xc002134960) Stream removed, broadcasting: 1 I0530 00:49:02.337483 7 log.go:172] (0xc000a6a790) (0xc001386780) Stream removed, broadcasting: 3 I0530 00:49:02.337508 7 log.go:172] (0xc000a6a790) (0xc002134a00) Stream removed, broadcasting: 5 May 30 00:49:02.337: INFO: Exec stderr: "" May 30 00:49:02.337: INFO: ExecWithOptions {Command:[cat 
/etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6681 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 00:49:02.337: INFO: >>> kubeConfig: /root/.kube/config I0530 00:49:02.340713 7 log.go:172] (0xc000a6a790) Go away received I0530 00:49:02.374626 7 log.go:172] (0xc000a6ab00) (0xc002134b40) Create stream I0530 00:49:02.374665 7 log.go:172] (0xc000a6ab00) (0xc002134b40) Stream added, broadcasting: 1 I0530 00:49:02.376430 7 log.go:172] (0xc000a6ab00) Reply frame received for 1 I0530 00:49:02.376464 7 log.go:172] (0xc000a6ab00) (0xc0013ae0a0) Create stream I0530 00:49:02.376476 7 log.go:172] (0xc000a6ab00) (0xc0013ae0a0) Stream added, broadcasting: 3 I0530 00:49:02.377675 7 log.go:172] (0xc000a6ab00) Reply frame received for 3 I0530 00:49:02.377701 7 log.go:172] (0xc000a6ab00) (0xc002134c80) Create stream I0530 00:49:02.377710 7 log.go:172] (0xc000a6ab00) (0xc002134c80) Stream added, broadcasting: 5 I0530 00:49:02.378573 7 log.go:172] (0xc000a6ab00) Reply frame received for 5 I0530 00:49:02.432207 7 log.go:172] (0xc000a6ab00) Data frame received for 5 I0530 00:49:02.432242 7 log.go:172] (0xc000a6ab00) Data frame received for 3 I0530 00:49:02.432276 7 log.go:172] (0xc0013ae0a0) (3) Data frame handling I0530 00:49:02.432293 7 log.go:172] (0xc0013ae0a0) (3) Data frame sent I0530 00:49:02.432308 7 log.go:172] (0xc000a6ab00) Data frame received for 3 I0530 00:49:02.432320 7 log.go:172] (0xc0013ae0a0) (3) Data frame handling I0530 00:49:02.432338 7 log.go:172] (0xc002134c80) (5) Data frame handling I0530 00:49:02.433832 7 log.go:172] (0xc000a6ab00) Data frame received for 1 I0530 00:49:02.433846 7 log.go:172] (0xc002134b40) (1) Data frame handling I0530 00:49:02.433858 7 log.go:172] (0xc002134b40) (1) Data frame sent I0530 00:49:02.433997 7 log.go:172] (0xc000a6ab00) (0xc002134b40) Stream removed, broadcasting: 1 I0530 00:49:02.434061 7 log.go:172] (0xc000a6ab00) (0xc002134b40) Stream removed, broadcasting: 1 I0530 00:49:02.434075 7 log.go:172] (0xc000a6ab00) (0xc0013ae0a0) Stream removed, broadcasting: 3 I0530 00:49:02.434179 7 log.go:172] (0xc000a6ab00) Go away received I0530 00:49:02.434289 7 log.go:172] (0xc000a6ab00) (0xc002134c80) Stream removed, broadcasting: 5 May 30 00:49:02.434: INFO: Exec stderr: "" May 30 00:49:02.434: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6681 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 00:49:02.434: INFO: >>> kubeConfig: /root/.kube/config I0530 00:49:02.475821 7 log.go:172] (0xc0066f6370) (0xc0013ae5a0) Create stream I0530 00:49:02.475862 7 log.go:172] (0xc0066f6370) (0xc0013ae5a0) Stream added, broadcasting: 1 I0530 00:49:02.477884 7 log.go:172] (0xc0066f6370) Reply frame received for 1 I0530 00:49:02.477935 7 log.go:172] (0xc0066f6370) (0xc001386aa0) Create stream I0530 00:49:02.477948 7 log.go:172] (0xc0066f6370) (0xc001386aa0) Stream added, broadcasting: 3 I0530 00:49:02.478957 7 log.go:172] (0xc0066f6370) Reply frame received for 3 I0530 00:49:02.479000 7 log.go:172] (0xc0066f6370) (0xc001386b40) Create stream I0530 00:49:02.479017 7 log.go:172] (0xc0066f6370) (0xc001386b40) Stream added, broadcasting: 5 I0530 00:49:02.480008 7 log.go:172] (0xc0066f6370) Reply frame received for 5 I0530 00:49:02.549632 7 log.go:172] (0xc0066f6370) Data frame received for 3 I0530 00:49:02.549675 7 log.go:172] (0xc001386aa0) (3) Data frame handling I0530 00:49:02.549689 7 log.go:172] 
(0xc001386aa0) (3) Data frame sent I0530 00:49:02.549714 7 log.go:172] (0xc0066f6370) Data frame received for 3 I0530 00:49:02.549722 7 log.go:172] (0xc001386aa0) (3) Data frame handling I0530 00:49:02.549758 7 log.go:172] (0xc0066f6370) Data frame received for 5 I0530 00:49:02.549777 7 log.go:172] (0xc001386b40) (5) Data frame handling I0530 00:49:02.550912 7 log.go:172] (0xc0066f6370) Data frame received for 1 I0530 00:49:02.550941 7 log.go:172] (0xc0013ae5a0) (1) Data frame handling I0530 00:49:02.550959 7 log.go:172] (0xc0013ae5a0) (1) Data frame sent I0530 00:49:02.550972 7 log.go:172] (0xc0066f6370) (0xc0013ae5a0) Stream removed, broadcasting: 1 I0530 00:49:02.550982 7 log.go:172] (0xc0066f6370) Go away received I0530 00:49:02.551139 7 log.go:172] (0xc0066f6370) (0xc0013ae5a0) Stream removed, broadcasting: 1 I0530 00:49:02.551160 7 log.go:172] (0xc0066f6370) (0xc001386aa0) Stream removed, broadcasting: 3 I0530 00:49:02.551180 7 log.go:172] (0xc0066f6370) (0xc001386b40) Stream removed, broadcasting: 5 May 30 00:49:02.551: INFO: Exec stderr: "" May 30 00:49:02.551: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6681 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 00:49:02.551: INFO: >>> kubeConfig: /root/.kube/config I0530 00:49:02.597787 7 log.go:172] (0xc002b29970) (0xc001d19680) Create stream I0530 00:49:02.597835 7 log.go:172] (0xc002b29970) (0xc001d19680) Stream added, broadcasting: 1 I0530 00:49:02.599566 7 log.go:172] (0xc002b29970) Reply frame received for 1 I0530 00:49:02.599598 7 log.go:172] (0xc002b29970) (0xc001386be0) Create stream I0530 00:49:02.599610 7 log.go:172] (0xc002b29970) (0xc001386be0) Stream added, broadcasting: 3 I0530 00:49:02.600669 7 log.go:172] (0xc002b29970) Reply frame received for 3 I0530 00:49:02.600726 7 log.go:172] (0xc002b29970) (0xc002134dc0) Create stream I0530 00:49:02.600743 7 log.go:172] (0xc002b29970) (0xc002134dc0) Stream added, broadcasting: 5 I0530 00:49:02.602210 7 log.go:172] (0xc002b29970) Reply frame received for 5 I0530 00:49:02.662334 7 log.go:172] (0xc002b29970) Data frame received for 5 I0530 00:49:02.662366 7 log.go:172] (0xc002134dc0) (5) Data frame handling I0530 00:49:02.662403 7 log.go:172] (0xc002b29970) Data frame received for 3 I0530 00:49:02.662513 7 log.go:172] (0xc001386be0) (3) Data frame handling I0530 00:49:02.662548 7 log.go:172] (0xc001386be0) (3) Data frame sent I0530 00:49:02.662566 7 log.go:172] (0xc002b29970) Data frame received for 3 I0530 00:49:02.662578 7 log.go:172] (0xc001386be0) (3) Data frame handling I0530 00:49:02.663753 7 log.go:172] (0xc002b29970) Data frame received for 1 I0530 00:49:02.663778 7 log.go:172] (0xc001d19680) (1) Data frame handling I0530 00:49:02.663797 7 log.go:172] (0xc001d19680) (1) Data frame sent I0530 00:49:02.663826 7 log.go:172] (0xc002b29970) (0xc001d19680) Stream removed, broadcasting: 1 I0530 00:49:02.663847 7 log.go:172] (0xc002b29970) Go away received I0530 00:49:02.663948 7 log.go:172] (0xc002b29970) (0xc001d19680) Stream removed, broadcasting: 1 I0530 00:49:02.663977 7 log.go:172] (0xc002b29970) (0xc001386be0) Stream removed, broadcasting: 3 I0530 00:49:02.664003 7 log.go:172] (0xc002b29970) (0xc002134dc0) Stream removed, broadcasting: 5 May 30 00:49:02.664: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 30 00:49:02.664: INFO: ExecWithOptions {Command:[cat /etc/hosts] 
Namespace:e2e-kubelet-etc-hosts-6681 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 00:49:02.664: INFO: >>> kubeConfig: /root/.kube/config I0530 00:49:02.719207 7 log.go:172] (0xc0066f69a0) (0xc0013ae960) Create stream I0530 00:49:02.719238 7 log.go:172] (0xc0066f69a0) (0xc0013ae960) Stream added, broadcasting: 1 I0530 00:49:02.721026 7 log.go:172] (0xc0066f69a0) Reply frame received for 1 I0530 00:49:02.721068 7 log.go:172] (0xc0066f69a0) (0xc001d197c0) Create stream I0530 00:49:02.721087 7 log.go:172] (0xc0066f69a0) (0xc001d197c0) Stream added, broadcasting: 3 I0530 00:49:02.722364 7 log.go:172] (0xc0066f69a0) Reply frame received for 3 I0530 00:49:02.722406 7 log.go:172] (0xc0066f69a0) (0xc001d19860) Create stream I0530 00:49:02.722422 7 log.go:172] (0xc0066f69a0) (0xc001d19860) Stream added, broadcasting: 5 I0530 00:49:02.723529 7 log.go:172] (0xc0066f69a0) Reply frame received for 5 I0530 00:49:02.782552 7 log.go:172] (0xc0066f69a0) Data frame received for 5 I0530 00:49:02.782589 7 log.go:172] (0xc001d19860) (5) Data frame handling I0530 00:49:02.782606 7 log.go:172] (0xc0066f69a0) Data frame received for 3 I0530 00:49:02.782611 7 log.go:172] (0xc001d197c0) (3) Data frame handling I0530 00:49:02.782618 7 log.go:172] (0xc001d197c0) (3) Data frame sent I0530 00:49:02.782624 7 log.go:172] (0xc0066f69a0) Data frame received for 3 I0530 00:49:02.782629 7 log.go:172] (0xc001d197c0) (3) Data frame handling I0530 00:49:02.784181 7 log.go:172] (0xc0066f69a0) Data frame received for 1 I0530 00:49:02.784224 7 log.go:172] (0xc0013ae960) (1) Data frame handling I0530 00:49:02.784285 7 log.go:172] (0xc0013ae960) (1) Data frame sent I0530 00:49:02.784329 7 log.go:172] (0xc0066f69a0) (0xc0013ae960) Stream removed, broadcasting: 1 I0530 00:49:02.784438 7 log.go:172] (0xc0066f69a0) (0xc0013ae960) Stream removed, broadcasting: 1 I0530 00:49:02.784456 7 log.go:172] (0xc0066f69a0) (0xc001d197c0) Stream removed, broadcasting: 3 I0530 00:49:02.784466 7 log.go:172] (0xc0066f69a0) (0xc001d19860) Stream removed, broadcasting: 5 May 30 00:49:02.784: INFO: Exec stderr: "" I0530 00:49:02.784500 7 log.go:172] (0xc0066f69a0) Go away received May 30 00:49:02.784: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6681 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 00:49:02.784: INFO: >>> kubeConfig: /root/.kube/config I0530 00:49:02.816635 7 log.go:172] (0xc002d389a0) (0xc001387540) Create stream I0530 00:49:02.816670 7 log.go:172] (0xc002d389a0) (0xc001387540) Stream added, broadcasting: 1 I0530 00:49:02.819417 7 log.go:172] (0xc002d389a0) Reply frame received for 1 I0530 00:49:02.819449 7 log.go:172] (0xc002d389a0) (0xc002da2a00) Create stream I0530 00:49:02.819459 7 log.go:172] (0xc002d389a0) (0xc002da2a00) Stream added, broadcasting: 3 I0530 00:49:02.820507 7 log.go:172] (0xc002d389a0) Reply frame received for 3 I0530 00:49:02.820536 7 log.go:172] (0xc002d389a0) (0xc002da2aa0) Create stream I0530 00:49:02.820549 7 log.go:172] (0xc002d389a0) (0xc002da2aa0) Stream added, broadcasting: 5 I0530 00:49:02.821472 7 log.go:172] (0xc002d389a0) Reply frame received for 5 I0530 00:49:02.888550 7 log.go:172] (0xc002d389a0) Data frame received for 5 I0530 00:49:02.888598 7 log.go:172] (0xc002da2aa0) (5) Data frame handling I0530 00:49:02.888647 7 log.go:172] (0xc002d389a0) Data frame received for 3 I0530 00:49:02.888668 7 log.go:172] 
(0xc002da2a00) (3) Data frame handling I0530 00:49:02.888701 7 log.go:172] (0xc002da2a00) (3) Data frame sent I0530 00:49:02.888807 7 log.go:172] (0xc002d389a0) Data frame received for 3 I0530 00:49:02.888840 7 log.go:172] (0xc002da2a00) (3) Data frame handling I0530 00:49:02.890493 7 log.go:172] (0xc002d389a0) Data frame received for 1 I0530 00:49:02.890527 7 log.go:172] (0xc001387540) (1) Data frame handling I0530 00:49:02.890562 7 log.go:172] (0xc001387540) (1) Data frame sent I0530 00:49:02.890584 7 log.go:172] (0xc002d389a0) (0xc001387540) Stream removed, broadcasting: 1 I0530 00:49:02.890705 7 log.go:172] (0xc002d389a0) (0xc001387540) Stream removed, broadcasting: 1 I0530 00:49:02.890741 7 log.go:172] (0xc002d389a0) (0xc002da2a00) Stream removed, broadcasting: 3 I0530 00:49:02.890871 7 log.go:172] (0xc002d389a0) Go away received I0530 00:49:02.891012 7 log.go:172] (0xc002d389a0) (0xc002da2aa0) Stream removed, broadcasting: 5 May 30 00:49:02.891: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 30 00:49:02.891: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6681 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 00:49:02.891: INFO: >>> kubeConfig: /root/.kube/config I0530 00:49:02.921544 7 log.go:172] (0xc002f6edc0) (0xc002da2dc0) Create stream I0530 00:49:02.921565 7 log.go:172] (0xc002f6edc0) (0xc002da2dc0) Stream added, broadcasting: 1 I0530 00:49:02.923707 7 log.go:172] (0xc002f6edc0) Reply frame received for 1 I0530 00:49:02.923743 7 log.go:172] (0xc002f6edc0) (0xc001387720) Create stream I0530 00:49:02.923756 7 log.go:172] (0xc002f6edc0) (0xc001387720) Stream added, broadcasting: 3 I0530 00:49:02.924667 7 log.go:172] (0xc002f6edc0) Reply frame received for 3 I0530 00:49:02.924697 7 log.go:172] (0xc002f6edc0) (0xc001d19900) Create stream I0530 00:49:02.924711 7 log.go:172] (0xc002f6edc0) (0xc001d19900) Stream added, broadcasting: 5 I0530 00:49:02.926179 7 log.go:172] (0xc002f6edc0) Reply frame received for 5 I0530 00:49:02.994369 7 log.go:172] (0xc002f6edc0) Data frame received for 5 I0530 00:49:02.994409 7 log.go:172] (0xc001d19900) (5) Data frame handling I0530 00:49:02.994430 7 log.go:172] (0xc002f6edc0) Data frame received for 3 I0530 00:49:02.994547 7 log.go:172] (0xc001387720) (3) Data frame handling I0530 00:49:02.994567 7 log.go:172] (0xc001387720) (3) Data frame sent I0530 00:49:02.994578 7 log.go:172] (0xc002f6edc0) Data frame received for 3 I0530 00:49:02.994586 7 log.go:172] (0xc001387720) (3) Data frame handling I0530 00:49:02.995994 7 log.go:172] (0xc002f6edc0) Data frame received for 1 I0530 00:49:02.996021 7 log.go:172] (0xc002da2dc0) (1) Data frame handling I0530 00:49:02.996039 7 log.go:172] (0xc002da2dc0) (1) Data frame sent I0530 00:49:02.996056 7 log.go:172] (0xc002f6edc0) (0xc002da2dc0) Stream removed, broadcasting: 1 I0530 00:49:02.996076 7 log.go:172] (0xc002f6edc0) Go away received I0530 00:49:02.996290 7 log.go:172] (0xc002f6edc0) (0xc002da2dc0) Stream removed, broadcasting: 1 I0530 00:49:02.996336 7 log.go:172] (0xc002f6edc0) (0xc001387720) Stream removed, broadcasting: 3 I0530 00:49:02.996486 7 log.go:172] (0xc002f6edc0) (0xc001d19900) Stream removed, broadcasting: 5 May 30 00:49:02.996: INFO: Exec stderr: "" May 30 00:49:02.996: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6681 PodName:test-host-network-pod 
ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 00:49:02.996: INFO: >>> kubeConfig: /root/.kube/config I0530 00:49:03.066008 7 log.go:172] (0xc000a6b290) (0xc002134fa0) Create stream I0530 00:49:03.066040 7 log.go:172] (0xc000a6b290) (0xc002134fa0) Stream added, broadcasting: 1 I0530 00:49:03.067968 7 log.go:172] (0xc000a6b290) Reply frame received for 1 I0530 00:49:03.068022 7 log.go:172] (0xc000a6b290) (0xc002135040) Create stream I0530 00:49:03.068046 7 log.go:172] (0xc000a6b290) (0xc002135040) Stream added, broadcasting: 3 I0530 00:49:03.069292 7 log.go:172] (0xc000a6b290) Reply frame received for 3 I0530 00:49:03.069346 7 log.go:172] (0xc000a6b290) (0xc001387ae0) Create stream I0530 00:49:03.069363 7 log.go:172] (0xc000a6b290) (0xc001387ae0) Stream added, broadcasting: 5 I0530 00:49:03.070633 7 log.go:172] (0xc000a6b290) Reply frame received for 5 I0530 00:49:03.122091 7 log.go:172] (0xc000a6b290) Data frame received for 3 I0530 00:49:03.122144 7 log.go:172] (0xc002135040) (3) Data frame handling I0530 00:49:03.122195 7 log.go:172] (0xc002135040) (3) Data frame sent I0530 00:49:03.122224 7 log.go:172] (0xc000a6b290) Data frame received for 3 I0530 00:49:03.122266 7 log.go:172] (0xc002135040) (3) Data frame handling I0530 00:49:03.122316 7 log.go:172] (0xc000a6b290) Data frame received for 5 I0530 00:49:03.122337 7 log.go:172] (0xc001387ae0) (5) Data frame handling I0530 00:49:03.124391 7 log.go:172] (0xc000a6b290) Data frame received for 1 I0530 00:49:03.124410 7 log.go:172] (0xc002134fa0) (1) Data frame handling I0530 00:49:03.124434 7 log.go:172] (0xc002134fa0) (1) Data frame sent I0530 00:49:03.124457 7 log.go:172] (0xc000a6b290) (0xc002134fa0) Stream removed, broadcasting: 1 I0530 00:49:03.124563 7 log.go:172] (0xc000a6b290) (0xc002134fa0) Stream removed, broadcasting: 1 I0530 00:49:03.124578 7 log.go:172] (0xc000a6b290) (0xc002135040) Stream removed, broadcasting: 3 I0530 00:49:03.124670 7 log.go:172] (0xc000a6b290) Go away received I0530 00:49:03.124792 7 log.go:172] (0xc000a6b290) (0xc001387ae0) Stream removed, broadcasting: 5 May 30 00:49:03.124: INFO: Exec stderr: "" May 30 00:49:03.124: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6681 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 00:49:03.124: INFO: >>> kubeConfig: /root/.kube/config I0530 00:49:03.155662 7 log.go:172] (0xc0066f6fd0) (0xc0013aefa0) Create stream I0530 00:49:03.155699 7 log.go:172] (0xc0066f6fd0) (0xc0013aefa0) Stream added, broadcasting: 1 I0530 00:49:03.157919 7 log.go:172] (0xc0066f6fd0) Reply frame received for 1 I0530 00:49:03.157984 7 log.go:172] (0xc0066f6fd0) (0xc0013af180) Create stream I0530 00:49:03.158000 7 log.go:172] (0xc0066f6fd0) (0xc0013af180) Stream added, broadcasting: 3 I0530 00:49:03.159016 7 log.go:172] (0xc0066f6fd0) Reply frame received for 3 I0530 00:49:03.159055 7 log.go:172] (0xc0066f6fd0) (0xc0013af360) Create stream I0530 00:49:03.159068 7 log.go:172] (0xc0066f6fd0) (0xc0013af360) Stream added, broadcasting: 5 I0530 00:49:03.159957 7 log.go:172] (0xc0066f6fd0) Reply frame received for 5 I0530 00:49:03.219047 7 log.go:172] (0xc0066f6fd0) Data frame received for 5 I0530 00:49:03.219093 7 log.go:172] (0xc0013af360) (5) Data frame handling I0530 00:49:03.219123 7 log.go:172] (0xc0066f6fd0) Data frame received for 3 I0530 00:49:03.219137 7 log.go:172] (0xc0013af180) (3) Data frame handling I0530 00:49:03.219153 7 
log.go:172] (0xc0013af180) (3) Data frame sent I0530 00:49:03.219173 7 log.go:172] (0xc0066f6fd0) Data frame received for 3 I0530 00:49:03.219192 7 log.go:172] (0xc0013af180) (3) Data frame handling I0530 00:49:03.220255 7 log.go:172] (0xc0066f6fd0) Data frame received for 1 I0530 00:49:03.220283 7 log.go:172] (0xc0013aefa0) (1) Data frame handling I0530 00:49:03.220308 7 log.go:172] (0xc0013aefa0) (1) Data frame sent I0530 00:49:03.220350 7 log.go:172] (0xc0066f6fd0) (0xc0013aefa0) Stream removed, broadcasting: 1 I0530 00:49:03.220479 7 log.go:172] (0xc0066f6fd0) (0xc0013aefa0) Stream removed, broadcasting: 1 I0530 00:49:03.220511 7 log.go:172] (0xc0066f6fd0) (0xc0013af180) Stream removed, broadcasting: 3 I0530 00:49:03.220534 7 log.go:172] (0xc0066f6fd0) (0xc0013af360) Stream removed, broadcasting: 5 May 30 00:49:03.220: INFO: Exec stderr: "" May 30 00:49:03.220: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6681 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 00:49:03.220: INFO: >>> kubeConfig: /root/.kube/config I0530 00:49:03.222696 7 log.go:172] (0xc0066f6fd0) Go away received I0530 00:49:03.250876 7 log.go:172] (0xc002d38fd0) (0xc001387d60) Create stream I0530 00:49:03.250909 7 log.go:172] (0xc002d38fd0) (0xc001387d60) Stream added, broadcasting: 1 I0530 00:49:03.253061 7 log.go:172] (0xc002d38fd0) Reply frame received for 1 I0530 00:49:03.253094 7 log.go:172] (0xc002d38fd0) (0xc001387e00) Create stream I0530 00:49:03.253107 7 log.go:172] (0xc002d38fd0) (0xc001387e00) Stream added, broadcasting: 3 I0530 00:49:03.254385 7 log.go:172] (0xc002d38fd0) Reply frame received for 3 I0530 00:49:03.254426 7 log.go:172] (0xc002d38fd0) (0xc001387ea0) Create stream I0530 00:49:03.254443 7 log.go:172] (0xc002d38fd0) (0xc001387ea0) Stream added, broadcasting: 5 I0530 00:49:03.255366 7 log.go:172] (0xc002d38fd0) Reply frame received for 5 I0530 00:49:03.318303 7 log.go:172] (0xc002d38fd0) Data frame received for 5 I0530 00:49:03.318331 7 log.go:172] (0xc001387ea0) (5) Data frame handling I0530 00:49:03.318357 7 log.go:172] (0xc002d38fd0) Data frame received for 3 I0530 00:49:03.318388 7 log.go:172] (0xc001387e00) (3) Data frame handling I0530 00:49:03.318416 7 log.go:172] (0xc001387e00) (3) Data frame sent I0530 00:49:03.318439 7 log.go:172] (0xc002d38fd0) Data frame received for 3 I0530 00:49:03.318454 7 log.go:172] (0xc001387e00) (3) Data frame handling I0530 00:49:03.320728 7 log.go:172] (0xc002d38fd0) Data frame received for 1 I0530 00:49:03.320757 7 log.go:172] (0xc001387d60) (1) Data frame handling I0530 00:49:03.320771 7 log.go:172] (0xc001387d60) (1) Data frame sent I0530 00:49:03.320807 7 log.go:172] (0xc002d38fd0) (0xc001387d60) Stream removed, broadcasting: 1 I0530 00:49:03.320841 7 log.go:172] (0xc002d38fd0) Go away received I0530 00:49:03.321005 7 log.go:172] (0xc002d38fd0) (0xc001387d60) Stream removed, broadcasting: 1 I0530 00:49:03.321039 7 log.go:172] (0xc002d38fd0) (0xc001387e00) Stream removed, broadcasting: 3 I0530 00:49:03.321064 7 log.go:172] (0xc002d38fd0) (0xc001387ea0) Stream removed, broadcasting: 5 May 30 00:49:03.321: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:49:03.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-6681" for this suite. 
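
Behind the wall of exec stream logs above, the e2e-kubelet-etc-hosts-6681 record checks three cases: the kubelet manages /etc/hosts for ordinary containers (compared against the image's /etc/hosts-original), it leaves the file alone when a container mounts its own /etc/hosts, and it never touches hostNetwork pods. A sketch of the opt-out container (busybox-3 in the record); a hostPath mount like the one below is one way to opt out, but treat the details as approximate. Imports as before:

func etcHostsOptOutPod(client kubernetes.Interface, ns string) error {
	hostPathType := corev1.HostPathFileOrCreate
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "etc-hosts-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "host-etc-hosts",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{
						Path: "/etc/hosts",
						Type: &hostPathType,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "busybox-3",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
				// Mounting anything at /etc/hosts makes the kubelet skip its
				// managed-hosts injection for this container only.
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "host-etc-hosts",
					MountPath: "/etc/hosts",
				}},
			}},
		},
	}
	_, err := client.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{})
	return err
}
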
• [SLOW TEST:11.407 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":205,"skipped":3259,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:49:03.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 30 00:49:03.450: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:49:04.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2043" for this suite. 
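
The custom-resource-definition-2043 record above can only get/update/patch a status sub-resource because the CRD declares one. With the apiextensions v1 types (import assumed: apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"), the enabling stanza is roughly:

// Once Subresources.Status is set, GET/PUT/PATCH on .../status are served
// separately from the main resource, and writes through /status only touch
// the status block. (A served v1 version also needs a structural Schema,
// omitted here for brevity.)
var versionWithStatus = apiextensionsv1.CustomResourceDefinitionVersion{
	Name:    "v1",
	Served:  true,
	Storage: true,
	Subresources: &apiextensionsv1.CustomResourceSubresources{
		Status: &apiextensionsv1.CustomResourceSubresourceStatus{},
	},
}
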
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":288,"completed":206,"skipped":3273,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:49:04.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 30 00:49:04.199: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 30 00:49:07.143: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2904 create -f -' May 30 00:49:10.706: INFO: stderr: "" May 30 00:49:10.706: INFO: stdout: "e2e-test-crd-publish-openapi-1423-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 30 00:49:10.706: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2904 delete e2e-test-crd-publish-openapi-1423-crds test-cr' May 30 00:49:10.836: INFO: stderr: "" May 30 00:49:10.836: INFO: stdout: "e2e-test-crd-publish-openapi-1423-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 30 00:49:10.836: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2904 apply -f -' May 30 00:49:11.110: INFO: stderr: "" May 30 00:49:11.110: INFO: stdout: "e2e-test-crd-publish-openapi-1423-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 30 00:49:11.110: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2904 delete e2e-test-crd-publish-openapi-1423-crds test-cr' May 30 00:49:11.234: INFO: stderr: "" May 30 00:49:11.234: INFO: stdout: "e2e-test-crd-publish-openapi-1423-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 30 00:49:11.234: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1423-crds' May 30 00:49:11.525: INFO: stderr: "" May 30 00:49:11.525: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1423-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. 
Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:49:13.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2904" for this suite. • [SLOW TEST:9.337 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":288,"completed":207,"skipped":3278,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:49:13.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-783884a1-9e5d-46f0-b317-c5a193ebcf35 STEP: Creating a pod to test consume configMaps May 30 00:49:13.536: INFO: Waiting up to 5m0s for pod "pod-configmaps-4974a2c2-5651-4ce6-b9c9-17d9deb11dd9" in namespace "configmap-7806" to be "Succeeded or Failed" May 30 00:49:13.541: INFO: Pod "pod-configmaps-4974a2c2-5651-4ce6-b9c9-17d9deb11dd9": Phase="Pending", Reason="", readiness=false. Elapsed: 5.609245ms May 30 00:49:15.546: INFO: Pod "pod-configmaps-4974a2c2-5651-4ce6-b9c9-17d9deb11dd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010189426s May 30 00:49:17.550: INFO: Pod "pod-configmaps-4974a2c2-5651-4ce6-b9c9-17d9deb11dd9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.014328668s STEP: Saw pod success May 30 00:49:17.550: INFO: Pod "pod-configmaps-4974a2c2-5651-4ce6-b9c9-17d9deb11dd9" satisfied condition "Succeeded or Failed" May 30 00:49:17.554: INFO: Trying to get logs from node latest-worker pod pod-configmaps-4974a2c2-5651-4ce6-b9c9-17d9deb11dd9 container configmap-volume-test: STEP: delete the pod May 30 00:49:17.589: INFO: Waiting for pod pod-configmaps-4974a2c2-5651-4ce6-b9c9-17d9deb11dd9 to disappear May 30 00:49:17.620: INFO: Pod pod-configmaps-4974a2c2-5651-4ce6-b9c9-17d9deb11dd9 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:49:17.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7806" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":208,"skipped":3282,"failed":0} S ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:49:17.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 30 00:49:17.748: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:49:23.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7156" for this suite. • [SLOW TEST:6.297 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":288,"completed":209,"skipped":3283,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:49:23.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:49:40.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7331" for this suite. • [SLOW TEST:16.500 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":288,"completed":210,"skipped":3294,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:49:40.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 30 00:49:40.886: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:49:47.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5935" for this suite. • [SLOW TEST:6.823 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":288,"completed":211,"skipped":3303,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:49:47.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 30 00:49:54.800: INFO: 9 pods remaining May 30 00:49:54.800: INFO: 0 pods has nil DeletionTimestamp May 30 00:49:54.800: INFO: May 30 00:49:56.443: INFO: 0 pods remaining May 30 00:49:56.443: INFO: 0 pods has nil DeletionTimestamp May 30 00:49:56.443: INFO: May 30 00:49:57.526: INFO: 0 pods remaining May 30 00:49:57.526: INFO: 0 pods has nil DeletionTimestamp May 30 00:49:57.526: INFO: STEP: Gathering metrics W0530 
00:49:58.277497 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 30 00:49:58.277: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:49:58.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-102" for this suite. • [SLOW TEST:11.458 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":288,"completed":212,"skipped":3312,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:49:58.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 30 00:49:59.186: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7fe3dc5e-82d0-4b77-b94e-8175d74e8af3" in namespace "projected-6750" to be "Succeeded or Failed" May 30 00:49:59.242: INFO: Pod "downwardapi-volume-7fe3dc5e-82d0-4b77-b94e-8175d74e8af3": Phase="Pending", Reason="", readiness=false. Elapsed: 55.983141ms May 30 00:50:01.246: INFO: Pod "downwardapi-volume-7fe3dc5e-82d0-4b77-b94e-8175d74e8af3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.06020548s May 30 00:50:03.250: INFO: Pod "downwardapi-volume-7fe3dc5e-82d0-4b77-b94e-8175d74e8af3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064233215s STEP: Saw pod success May 30 00:50:03.250: INFO: Pod "downwardapi-volume-7fe3dc5e-82d0-4b77-b94e-8175d74e8af3" satisfied condition "Succeeded or Failed" May 30 00:50:03.252: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-7fe3dc5e-82d0-4b77-b94e-8175d74e8af3 container client-container: STEP: delete the pod May 30 00:50:03.283: INFO: Waiting for pod downwardapi-volume-7fe3dc5e-82d0-4b77-b94e-8175d74e8af3 to disappear May 30 00:50:03.311: INFO: Pod downwardapi-volume-7fe3dc5e-82d0-4b77-b94e-8175d74e8af3 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:50:03.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6750" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":213,"skipped":3377,"failed":0} SSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:50:03.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-5851/configmap-test-33a1b135-d411-4dc7-8cfb-115586ec6c2d STEP: Creating a pod to test consume configMaps May 30 00:50:03.388: INFO: Waiting up to 5m0s for pod "pod-configmaps-173e1474-48e8-4229-bf7a-095d557ef42b" in namespace "configmap-5851" to be "Succeeded or Failed" May 30 00:50:03.392: INFO: Pod "pod-configmaps-173e1474-48e8-4229-bf7a-095d557ef42b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.967924ms May 30 00:50:05.398: INFO: Pod "pod-configmaps-173e1474-48e8-4229-bf7a-095d557ef42b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009242281s May 30 00:50:07.402: INFO: Pod "pod-configmaps-173e1474-48e8-4229-bf7a-095d557ef42b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013578985s STEP: Saw pod success May 30 00:50:07.402: INFO: Pod "pod-configmaps-173e1474-48e8-4229-bf7a-095d557ef42b" satisfied condition "Succeeded or Failed" May 30 00:50:07.405: INFO: Trying to get logs from node latest-worker pod pod-configmaps-173e1474-48e8-4229-bf7a-095d557ef42b container env-test: STEP: delete the pod May 30 00:50:07.439: INFO: Waiting for pod pod-configmaps-173e1474-48e8-4229-bf7a-095d557ef42b to disappear May 30 00:50:07.473: INFO: Pod pod-configmaps-173e1474-48e8-4229-bf7a-095d557ef42b no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:50:07.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5851" for this suite. 
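
The configmap-5851 record above injects a ConfigMap key into the container's environment; the pod it creates boils down to something like this (key names and values hypothetical, imports as before):

func configMapEnvPod(client kubernetes.Interface, ns string) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						// Pulls one key out of an existing ConfigMap; the test
						// then greps the pod log for the expected value.
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	_, err := client.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{})
	return err
}
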
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":214,"skipped":3383,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:50:07.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container May 30 00:50:12.107: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4019 pod-service-account-3693e8fa-cb12-4635-bbec-3dd476f66d82 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 30 00:50:12.351: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4019 pod-service-account-3693e8fa-cb12-4635-bbec-3dd476f66d82 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 30 00:50:12.594: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4019 pod-service-account-3693e8fa-cb12-4635-bbec-3dd476f66d82 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:50:12.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4019" for this suite. 
• [SLOW TEST:5.324 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":288,"completed":215,"skipped":3395,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:50:12.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 30 00:50:15.937: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:50:16.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8207" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":216,"skipped":3410,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:50:16.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating all guestbook components May 30 00:50:16.117: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend May 30 00:50:16.117: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1148' May 30 00:50:16.466: INFO: stderr: "" May 30 00:50:16.466: INFO: stdout: "service/agnhost-slave created\n" May 30 00:50:16.467: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend May 30 00:50:16.467: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1148' May 30 00:50:16.825: INFO: stderr: "" May 30 00:50:16.825: INFO: stdout: "service/agnhost-master created\n" May 30 00:50:16.826: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 30 00:50:16.826: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1148' May 30 00:50:17.224: INFO: stderr: "" May 30 00:50:17.224: INFO: stdout: "service/frontend created\n" May 30 00:50:17.224: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 May 30 00:50:17.224: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1148' May 30 00:50:17.487: INFO: stderr: "" May 30 00:50:17.487: INFO: stdout: "deployment.apps/frontend created\n" May 30 00:50:17.487: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 30 00:50:17.488: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1148' May 30 00:50:17.777: INFO: stderr: "" May 30 00:50:17.777: INFO: stdout: "deployment.apps/agnhost-master created\n" May 30 00:50:17.778: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 30 00:50:17.778: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1148' May 30 00:50:18.076: INFO: stderr: "" May 30 00:50:18.076: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app May 30 00:50:18.076: INFO: Waiting for all frontend pods to be Running. May 30 00:50:28.126: INFO: Waiting for frontend to serve content. May 30 00:50:28.137: INFO: Trying to add a new entry to the guestbook. May 30 00:50:28.155: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 30 00:50:28.163: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1148' May 30 00:50:28.375: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 30 00:50:28.375: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources May 30 00:50:28.375: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1148' May 30 00:50:28.623: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 30 00:50:28.623: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 30 00:50:28.623: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1148' May 30 00:50:28.785: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 30 00:50:28.785: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 30 00:50:28.785: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1148' May 30 00:50:28.898: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 30 00:50:28.898: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 30 00:50:28.898: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1148' May 30 00:50:29.033: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 30 00:50:29.033: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 30 00:50:29.033: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1148' May 30 00:50:29.492: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 30 00:50:29.492: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:50:29.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1148" for this suite. 
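A thread worth pulling out of the manifests echoed above: nothing ties the Services to the Deployments except label selection, so the guestbook only works because each Service's spec.selector equals the labels stamped onto the matching Deployment's pod template. Condensed to the frontend pairing (content lifted from the run's own manifests):

apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  ports:
  - port: 80
  selector:                  # must equal the pod-template labels below
    app: guestbook
    tier: frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:                # the Service's endpoints are whichever pods carry these
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: ["guestbook", "--backend-port", "6379"]
        ports:
        - containerPort: 80

The --grace-period=0 --force deletions that follow also explain the repeated warning in the output: the API objects are removed immediately, while the kubelet may still be tearing the containers down in the background.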
• [SLOW TEST:13.837 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":288,"completed":217,"skipped":3430,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:50:29.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:50:43.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1089" for this suite. • [SLOW TEST:13.592 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":288,"completed":218,"skipped":3447,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:50:43.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 30 00:50:43.620: INFO: >>> kubeConfig: /root/.kube/config May 30 00:50:46.569: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:50:57.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2574" for this suite. • [SLOW TEST:13.763 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":288,"completed":219,"skipped":3492,"failed":0} S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:50:57.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-6rhf STEP: Creating a pod to test atomic-volume-subpath May 30 00:50:57.393: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-6rhf" in namespace "subpath-3691" to be "Succeeded or Failed" May 30 00:50:57.397: INFO: Pod "pod-subpath-test-configmap-6rhf": Phase="Pending", 
Reason="", readiness=false. Elapsed: 4.199407ms May 30 00:50:59.525: INFO: Pod "pod-subpath-test-configmap-6rhf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132374937s May 30 00:51:01.530: INFO: Pod "pod-subpath-test-configmap-6rhf": Phase="Running", Reason="", readiness=true. Elapsed: 4.136981602s May 30 00:51:03.534: INFO: Pod "pod-subpath-test-configmap-6rhf": Phase="Running", Reason="", readiness=true. Elapsed: 6.141210186s May 30 00:51:05.538: INFO: Pod "pod-subpath-test-configmap-6rhf": Phase="Running", Reason="", readiness=true. Elapsed: 8.145098892s May 30 00:51:07.542: INFO: Pod "pod-subpath-test-configmap-6rhf": Phase="Running", Reason="", readiness=true. Elapsed: 10.14879007s May 30 00:51:09.547: INFO: Pod "pod-subpath-test-configmap-6rhf": Phase="Running", Reason="", readiness=true. Elapsed: 12.153642455s May 30 00:51:11.551: INFO: Pod "pod-subpath-test-configmap-6rhf": Phase="Running", Reason="", readiness=true. Elapsed: 14.158336038s May 30 00:51:13.559: INFO: Pod "pod-subpath-test-configmap-6rhf": Phase="Running", Reason="", readiness=true. Elapsed: 16.165498798s May 30 00:51:15.563: INFO: Pod "pod-subpath-test-configmap-6rhf": Phase="Running", Reason="", readiness=true. Elapsed: 18.169580811s May 30 00:51:17.567: INFO: Pod "pod-subpath-test-configmap-6rhf": Phase="Running", Reason="", readiness=true. Elapsed: 20.173929573s May 30 00:51:19.572: INFO: Pod "pod-subpath-test-configmap-6rhf": Phase="Running", Reason="", readiness=true. Elapsed: 22.178715009s May 30 00:51:21.576: INFO: Pod "pod-subpath-test-configmap-6rhf": Phase="Running", Reason="", readiness=true. Elapsed: 24.183283686s May 30 00:51:23.581: INFO: Pod "pod-subpath-test-configmap-6rhf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.188128138s STEP: Saw pod success May 30 00:51:23.581: INFO: Pod "pod-subpath-test-configmap-6rhf" satisfied condition "Succeeded or Failed" May 30 00:51:23.585: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-6rhf container test-container-subpath-configmap-6rhf: STEP: delete the pod May 30 00:51:23.629: INFO: Waiting for pod pod-subpath-test-configmap-6rhf to disappear May 30 00:51:23.651: INFO: Pod pod-subpath-test-configmap-6rhf no longer exists STEP: Deleting pod pod-subpath-test-configmap-6rhf May 30 00:51:23.651: INFO: Deleting pod "pod-subpath-test-configmap-6rhf" in namespace "subpath-3691" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:51:23.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3691" for this suite. 
• [SLOW TEST:26.399 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":288,"completed":220,"skipped":3493,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:51:23.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name projected-secret-test-f8e00385-112b-4bc7-87f8-e9058035ebe7 STEP: Creating a pod to test consume secrets May 30 00:51:23.797: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d6939903-2538-4202-bcb7-2b83d01aabc5" in namespace "projected-5572" to be "Succeeded or Failed" May 30 00:51:23.801: INFO: Pod "pod-projected-secrets-d6939903-2538-4202-bcb7-2b83d01aabc5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059785ms May 30 00:51:25.806: INFO: Pod "pod-projected-secrets-d6939903-2538-4202-bcb7-2b83d01aabc5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008669298s May 30 00:51:27.810: INFO: Pod "pod-projected-secrets-d6939903-2538-4202-bcb7-2b83d01aabc5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01248397s STEP: Saw pod success May 30 00:51:27.810: INFO: Pod "pod-projected-secrets-d6939903-2538-4202-bcb7-2b83d01aabc5" satisfied condition "Succeeded or Failed" May 30 00:51:27.812: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-d6939903-2538-4202-bcb7-2b83d01aabc5 container secret-volume-test: STEP: delete the pod May 30 00:51:27.930: INFO: Waiting for pod pod-projected-secrets-d6939903-2538-4202-bcb7-2b83d01aabc5 to disappear May 30 00:51:27.979: INFO: Pod pod-projected-secrets-d6939903-2538-4202-bcb7-2b83d01aabc5 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:51:27.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5572" for this suite. 
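The projected-secret test that just finished mounts the same Secret through two separate projected volumes and checks the content is readable at both paths. Sketched (Secret name, key, and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # hypothetical
spec:
  restartPolicy: Never
  volumes:
  - name: projected-secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret-test   # hypothetical Secret with key data-1
  - name: projected-secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-test   # same Secret, projected a second time
  containers:
  - name: secret-volume-test
    image: busybox                      # illustrative
    command: ["sh", "-c", "cat /etc/projected-secret-volume-1/data-1 /etc/projected-secret-volume-2/data-1"]
    volumeMounts:
    - name: projected-secret-volume-1
      mountPath: /etc/projected-secret-volume-1
      readOnly: true
    - name: projected-secret-volume-2
      mountPath: /etc/projected-secret-volume-2
      readOnly: true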
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":221,"skipped":3501,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:51:27.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-4665 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-4665 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4665 May 30 00:51:28.147: INFO: Found 0 stateful pods, waiting for 1 May 30 00:51:38.153: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 30 00:51:38.158: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4665 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 30 00:51:38.388: INFO: stderr: "I0530 00:51:38.285623 2816 log.go:172] (0xc000b11810) (0xc00068a640) Create stream\nI0530 00:51:38.285697 2816 log.go:172] (0xc000b11810) (0xc00068a640) Stream added, broadcasting: 1\nI0530 00:51:38.288085 2816 log.go:172] (0xc000b11810) Reply frame received for 1\nI0530 00:51:38.288121 2816 log.go:172] (0xc000b11810) (0xc0006d6f00) Create stream\nI0530 00:51:38.288134 2816 log.go:172] (0xc000b11810) (0xc0006d6f00) Stream added, broadcasting: 3\nI0530 00:51:38.289071 2816 log.go:172] (0xc000b11810) Reply frame received for 3\nI0530 00:51:38.289253 2816 log.go:172] (0xc000b11810) (0xc00068afa0) Create stream\nI0530 00:51:38.289276 2816 log.go:172] (0xc000b11810) (0xc00068afa0) Stream added, broadcasting: 5\nI0530 00:51:38.290216 2816 log.go:172] (0xc000b11810) Reply frame received for 5\nI0530 00:51:38.357747 2816 log.go:172] (0xc000b11810) Data frame received for 5\nI0530 00:51:38.357769 2816 log.go:172] (0xc00068afa0) (5) Data frame handling\nI0530 00:51:38.357782 2816 log.go:172] (0xc00068afa0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0530 00:51:38.379419 2816 log.go:172] (0xc000b11810) Data frame received for 3\nI0530 00:51:38.379456 2816 log.go:172] (0xc0006d6f00) (3) Data frame handling\nI0530 00:51:38.379486 2816 log.go:172] (0xc0006d6f00) (3) Data frame sent\nI0530 00:51:38.380056 2816 log.go:172] (0xc000b11810) Data frame received for 5\nI0530 00:51:38.380077 2816 log.go:172] 
(0xc00068afa0) (5) Data frame handling\nI0530 00:51:38.380101 2816 log.go:172] (0xc000b11810) Data frame received for 3\nI0530 00:51:38.380113 2816 log.go:172] (0xc0006d6f00) (3) Data frame handling\nI0530 00:51:38.381903 2816 log.go:172] (0xc000b11810) Data frame received for 1\nI0530 00:51:38.381929 2816 log.go:172] (0xc00068a640) (1) Data frame handling\nI0530 00:51:38.381956 2816 log.go:172] (0xc00068a640) (1) Data frame sent\nI0530 00:51:38.381972 2816 log.go:172] (0xc000b11810) (0xc00068a640) Stream removed, broadcasting: 1\nI0530 00:51:38.381988 2816 log.go:172] (0xc000b11810) Go away received\nI0530 00:51:38.382327 2816 log.go:172] (0xc000b11810) (0xc00068a640) Stream removed, broadcasting: 1\nI0530 00:51:38.382341 2816 log.go:172] (0xc000b11810) (0xc0006d6f00) Stream removed, broadcasting: 3\nI0530 00:51:38.382349 2816 log.go:172] (0xc000b11810) (0xc00068afa0) Stream removed, broadcasting: 5\n" May 30 00:51:38.388: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 30 00:51:38.388: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 30 00:51:38.392: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 30 00:51:48.397: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 30 00:51:48.397: INFO: Waiting for statefulset status.replicas updated to 0 May 30 00:51:48.419: INFO: POD NODE PHASE GRACE CONDITIONS May 30 00:51:48.419: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:28 +0000 UTC }] May 30 00:51:48.419: INFO: May 30 00:51:48.419: INFO: StatefulSet ss has not reached scale 3, at 1 May 30 00:51:49.424: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989620335s May 30 00:51:50.428: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.985456871s May 30 00:51:51.511: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.980634119s May 30 00:51:52.517: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.897959637s May 30 00:51:53.522: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.891741474s May 30 00:51:54.528: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.886877427s May 30 00:51:55.533: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.881347811s May 30 00:51:56.537: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.876150536s May 30 00:51:57.542: INFO: Verifying statefulset ss doesn't scale past 3 for another 871.780809ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4665 May 30 00:51:58.548: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4665 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 30 00:51:58.769: INFO: stderr: "I0530 00:51:58.683513 2835 log.go:172] (0xc000b94840) (0xc00044c3c0) Create stream\nI0530 00:51:58.683586 2835 log.go:172] 
(0xc000b94840) (0xc00044c3c0) Stream added, broadcasting: 1\nI0530 00:51:58.686185 2835 log.go:172] (0xc000b94840) Reply frame received for 1\nI0530 00:51:58.686230 2835 log.go:172] (0xc000b94840) (0xc0003f8320) Create stream\nI0530 00:51:58.686242 2835 log.go:172] (0xc000b94840) (0xc0003f8320) Stream added, broadcasting: 3\nI0530 00:51:58.687026 2835 log.go:172] (0xc000b94840) Reply frame received for 3\nI0530 00:51:58.687049 2835 log.go:172] (0xc000b94840) (0xc0003f8aa0) Create stream\nI0530 00:51:58.687061 2835 log.go:172] (0xc000b94840) (0xc0003f8aa0) Stream added, broadcasting: 5\nI0530 00:51:58.687986 2835 log.go:172] (0xc000b94840) Reply frame received for 5\nI0530 00:51:58.762012 2835 log.go:172] (0xc000b94840) Data frame received for 3\nI0530 00:51:58.762044 2835 log.go:172] (0xc0003f8320) (3) Data frame handling\nI0530 00:51:58.762078 2835 log.go:172] (0xc000b94840) Data frame received for 5\nI0530 00:51:58.762140 2835 log.go:172] (0xc0003f8aa0) (5) Data frame handling\nI0530 00:51:58.762165 2835 log.go:172] (0xc0003f8aa0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0530 00:51:58.762189 2835 log.go:172] (0xc0003f8320) (3) Data frame sent\nI0530 00:51:58.762218 2835 log.go:172] (0xc000b94840) Data frame received for 3\nI0530 00:51:58.762231 2835 log.go:172] (0xc0003f8320) (3) Data frame handling\nI0530 00:51:58.762378 2835 log.go:172] (0xc000b94840) Data frame received for 5\nI0530 00:51:58.762399 2835 log.go:172] (0xc0003f8aa0) (5) Data frame handling\nI0530 00:51:58.764230 2835 log.go:172] (0xc000b94840) Data frame received for 1\nI0530 00:51:58.764257 2835 log.go:172] (0xc00044c3c0) (1) Data frame handling\nI0530 00:51:58.764272 2835 log.go:172] (0xc00044c3c0) (1) Data frame sent\nI0530 00:51:58.764290 2835 log.go:172] (0xc000b94840) (0xc00044c3c0) Stream removed, broadcasting: 1\nI0530 00:51:58.764332 2835 log.go:172] (0xc000b94840) Go away received\nI0530 00:51:58.764709 2835 log.go:172] (0xc000b94840) (0xc00044c3c0) Stream removed, broadcasting: 1\nI0530 00:51:58.764738 2835 log.go:172] (0xc000b94840) (0xc0003f8320) Stream removed, broadcasting: 3\nI0530 00:51:58.764751 2835 log.go:172] (0xc000b94840) (0xc0003f8aa0) Stream removed, broadcasting: 5\n" May 30 00:51:58.769: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 30 00:51:58.769: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 30 00:51:58.770: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4665 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 30 00:51:59.009: INFO: stderr: "I0530 00:51:58.930322 2856 log.go:172] (0xc00065f340) (0xc00065d680) Create stream\nI0530 00:51:58.930390 2856 log.go:172] (0xc00065f340) (0xc00065d680) Stream added, broadcasting: 1\nI0530 00:51:58.935545 2856 log.go:172] (0xc00065f340) Reply frame received for 1\nI0530 00:51:58.935588 2856 log.go:172] (0xc00065f340) (0xc000646be0) Create stream\nI0530 00:51:58.935599 2856 log.go:172] (0xc00065f340) (0xc000646be0) Stream added, broadcasting: 3\nI0530 00:51:58.936353 2856 log.go:172] (0xc00065f340) Reply frame received for 3\nI0530 00:51:58.936385 2856 log.go:172] (0xc00065f340) (0xc00063de00) Create stream\nI0530 00:51:58.936397 2856 log.go:172] (0xc00065f340) (0xc00063de00) Stream added, broadcasting: 5\nI0530 00:51:58.937339 2856 log.go:172] (0xc00065f340) Reply frame 
received for 5\nI0530 00:51:59.003896 2856 log.go:172] (0xc00065f340) Data frame received for 3\nI0530 00:51:59.003929 2856 log.go:172] (0xc000646be0) (3) Data frame handling\nI0530 00:51:59.003941 2856 log.go:172] (0xc000646be0) (3) Data frame sent\nI0530 00:51:59.003947 2856 log.go:172] (0xc00065f340) Data frame received for 3\nI0530 00:51:59.003953 2856 log.go:172] (0xc000646be0) (3) Data frame handling\nI0530 00:51:59.003988 2856 log.go:172] (0xc00065f340) Data frame received for 5\nI0530 00:51:59.004031 2856 log.go:172] (0xc00063de00) (5) Data frame handling\nI0530 00:51:59.004050 2856 log.go:172] (0xc00063de00) (5) Data frame sent\nI0530 00:51:59.004066 2856 log.go:172] (0xc00065f340) Data frame received for 5\nI0530 00:51:59.004076 2856 log.go:172] (0xc00063de00) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0530 00:51:59.005585 2856 log.go:172] (0xc00065f340) Data frame received for 1\nI0530 00:51:59.005610 2856 log.go:172] (0xc00065d680) (1) Data frame handling\nI0530 00:51:59.005626 2856 log.go:172] (0xc00065d680) (1) Data frame sent\nI0530 00:51:59.005654 2856 log.go:172] (0xc00065f340) (0xc00065d680) Stream removed, broadcasting: 1\nI0530 00:51:59.005718 2856 log.go:172] (0xc00065f340) Go away received\nI0530 00:51:59.005943 2856 log.go:172] (0xc00065f340) (0xc00065d680) Stream removed, broadcasting: 1\nI0530 00:51:59.005958 2856 log.go:172] (0xc00065f340) (0xc000646be0) Stream removed, broadcasting: 3\nI0530 00:51:59.005964 2856 log.go:172] (0xc00065f340) (0xc00063de00) Stream removed, broadcasting: 5\n" May 30 00:51:59.009: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 30 00:51:59.009: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 30 00:51:59.009: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4665 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 30 00:51:59.217: INFO: stderr: "I0530 00:51:59.132318 2877 log.go:172] (0xc0009c8840) (0xc00055a820) Create stream\nI0530 00:51:59.132375 2877 log.go:172] (0xc0009c8840) (0xc00055a820) Stream added, broadcasting: 1\nI0530 00:51:59.134611 2877 log.go:172] (0xc0009c8840) Reply frame received for 1\nI0530 00:51:59.134670 2877 log.go:172] (0xc0009c8840) (0xc0005545a0) Create stream\nI0530 00:51:59.134681 2877 log.go:172] (0xc0009c8840) (0xc0005545a0) Stream added, broadcasting: 3\nI0530 00:51:59.135535 2877 log.go:172] (0xc0009c8840) Reply frame received for 3\nI0530 00:51:59.135571 2877 log.go:172] (0xc0009c8840) (0xc0004fa280) Create stream\nI0530 00:51:59.135584 2877 log.go:172] (0xc0009c8840) (0xc0004fa280) Stream added, broadcasting: 5\nI0530 00:51:59.136526 2877 log.go:172] (0xc0009c8840) Reply frame received for 5\nI0530 00:51:59.209823 2877 log.go:172] (0xc0009c8840) Data frame received for 3\nI0530 00:51:59.209863 2877 log.go:172] (0xc0005545a0) (3) Data frame handling\nI0530 00:51:59.209889 2877 log.go:172] (0xc0005545a0) (3) Data frame sent\nI0530 00:51:59.209904 2877 log.go:172] (0xc0009c8840) Data frame received for 3\nI0530 00:51:59.209916 2877 log.go:172] (0xc0005545a0) (3) Data frame handling\nI0530 00:51:59.209933 2877 log.go:172] (0xc0009c8840) Data frame received for 5\nI0530 00:51:59.209997 2877 log.go:172] (0xc0004fa280) (5) Data frame handling\nI0530 
00:51:59.210026 2877 log.go:172] (0xc0004fa280) (5) Data frame sent\nI0530 00:51:59.210041 2877 log.go:172] (0xc0009c8840) Data frame received for 5\nI0530 00:51:59.210051 2877 log.go:172] (0xc0004fa280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0530 00:51:59.211242 2877 log.go:172] (0xc0009c8840) Data frame received for 1\nI0530 00:51:59.211264 2877 log.go:172] (0xc00055a820) (1) Data frame handling\nI0530 00:51:59.211277 2877 log.go:172] (0xc00055a820) (1) Data frame sent\nI0530 00:51:59.211290 2877 log.go:172] (0xc0009c8840) (0xc00055a820) Stream removed, broadcasting: 1\nI0530 00:51:59.211323 2877 log.go:172] (0xc0009c8840) Go away received\nI0530 00:51:59.211676 2877 log.go:172] (0xc0009c8840) (0xc00055a820) Stream removed, broadcasting: 1\nI0530 00:51:59.211699 2877 log.go:172] (0xc0009c8840) (0xc0005545a0) Stream removed, broadcasting: 3\nI0530 00:51:59.211709 2877 log.go:172] (0xc0009c8840) (0xc0004fa280) Stream removed, broadcasting: 5\n" May 30 00:51:59.217: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 30 00:51:59.217: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 30 00:51:59.222: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 30 00:51:59.222: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 30 00:51:59.222: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 30 00:51:59.225: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4665 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 30 00:51:59.436: INFO: stderr: "I0530 00:51:59.356961 2900 log.go:172] (0xc000c051e0) (0xc00068dcc0) Create stream\nI0530 00:51:59.357013 2900 log.go:172] (0xc000c051e0) (0xc00068dcc0) Stream added, broadcasting: 1\nI0530 00:51:59.365303 2900 log.go:172] (0xc000c051e0) Reply frame received for 1\nI0530 00:51:59.365350 2900 log.go:172] (0xc000c051e0) (0xc000681d60) Create stream\nI0530 00:51:59.365441 2900 log.go:172] (0xc000c051e0) (0xc000681d60) Stream added, broadcasting: 3\nI0530 00:51:59.366319 2900 log.go:172] (0xc000c051e0) Reply frame received for 3\nI0530 00:51:59.366351 2900 log.go:172] (0xc000c051e0) (0xc00058e320) Create stream\nI0530 00:51:59.366370 2900 log.go:172] (0xc000c051e0) (0xc00058e320) Stream added, broadcasting: 5\nI0530 00:51:59.367309 2900 log.go:172] (0xc000c051e0) Reply frame received for 5\nI0530 00:51:59.429799 2900 log.go:172] (0xc000c051e0) Data frame received for 3\nI0530 00:51:59.429856 2900 log.go:172] (0xc000681d60) (3) Data frame handling\nI0530 00:51:59.429876 2900 log.go:172] (0xc000681d60) (3) Data frame sent\nI0530 00:51:59.429892 2900 log.go:172] (0xc000c051e0) Data frame received for 3\nI0530 00:51:59.429906 2900 log.go:172] (0xc000681d60) (3) Data frame handling\nI0530 00:51:59.429928 2900 log.go:172] (0xc000c051e0) Data frame received for 5\nI0530 00:51:59.429954 2900 log.go:172] (0xc00058e320) (5) Data frame handling\nI0530 00:51:59.429971 2900 log.go:172] (0xc00058e320) (5) Data frame sent\nI0530 00:51:59.429987 2900 log.go:172] (0xc000c051e0) Data frame received for 5\nI0530 00:51:59.430003 2900 log.go:172] 
(0xc00058e320) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0530 00:51:59.431435 2900 log.go:172] (0xc000c051e0) Data frame received for 1\nI0530 00:51:59.431453 2900 log.go:172] (0xc00068dcc0) (1) Data frame handling\nI0530 00:51:59.431466 2900 log.go:172] (0xc00068dcc0) (1) Data frame sent\nI0530 00:51:59.431477 2900 log.go:172] (0xc000c051e0) (0xc00068dcc0) Stream removed, broadcasting: 1\nI0530 00:51:59.431488 2900 log.go:172] (0xc000c051e0) Go away received\nI0530 00:51:59.431795 2900 log.go:172] (0xc000c051e0) (0xc00068dcc0) Stream removed, broadcasting: 1\nI0530 00:51:59.431815 2900 log.go:172] (0xc000c051e0) (0xc000681d60) Stream removed, broadcasting: 3\nI0530 00:51:59.431824 2900 log.go:172] (0xc000c051e0) (0xc00058e320) Stream removed, broadcasting: 5\n" May 30 00:51:59.436: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 30 00:51:59.436: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 30 00:51:59.436: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4665 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 30 00:51:59.703: INFO: stderr: "I0530 00:51:59.568183 2919 log.go:172] (0xc000979550) (0xc000a4c1e0) Create stream\nI0530 00:51:59.568241 2919 log.go:172] (0xc000979550) (0xc000a4c1e0) Stream added, broadcasting: 1\nI0530 00:51:59.572440 2919 log.go:172] (0xc000979550) Reply frame received for 1\nI0530 00:51:59.572472 2919 log.go:172] (0xc000979550) (0xc0006ece60) Create stream\nI0530 00:51:59.572482 2919 log.go:172] (0xc000979550) (0xc0006ece60) Stream added, broadcasting: 3\nI0530 00:51:59.573418 2919 log.go:172] (0xc000979550) Reply frame received for 3\nI0530 00:51:59.573461 2919 log.go:172] (0xc000979550) (0xc0006e05a0) Create stream\nI0530 00:51:59.573477 2919 log.go:172] (0xc000979550) (0xc0006e05a0) Stream added, broadcasting: 5\nI0530 00:51:59.574341 2919 log.go:172] (0xc000979550) Reply frame received for 5\nI0530 00:51:59.662199 2919 log.go:172] (0xc000979550) Data frame received for 5\nI0530 00:51:59.662231 2919 log.go:172] (0xc0006e05a0) (5) Data frame handling\nI0530 00:51:59.662253 2919 log.go:172] (0xc0006e05a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0530 00:51:59.694931 2919 log.go:172] (0xc000979550) Data frame received for 3\nI0530 00:51:59.694953 2919 log.go:172] (0xc0006ece60) (3) Data frame handling\nI0530 00:51:59.694961 2919 log.go:172] (0xc0006ece60) (3) Data frame sent\nI0530 00:51:59.695406 2919 log.go:172] (0xc000979550) Data frame received for 3\nI0530 00:51:59.695441 2919 log.go:172] (0xc0006ece60) (3) Data frame handling\nI0530 00:51:59.695842 2919 log.go:172] (0xc000979550) Data frame received for 5\nI0530 00:51:59.696055 2919 log.go:172] (0xc0006e05a0) (5) Data frame handling\nI0530 00:51:59.698082 2919 log.go:172] (0xc000979550) Data frame received for 1\nI0530 00:51:59.698104 2919 log.go:172] (0xc000a4c1e0) (1) Data frame handling\nI0530 00:51:59.698122 2919 log.go:172] (0xc000a4c1e0) (1) Data frame sent\nI0530 00:51:59.698274 2919 log.go:172] (0xc000979550) (0xc000a4c1e0) Stream removed, broadcasting: 1\nI0530 00:51:59.698507 2919 log.go:172] (0xc000979550) Go away received\nI0530 00:51:59.698562 2919 log.go:172] (0xc000979550) (0xc000a4c1e0) Stream removed, broadcasting: 1\nI0530 00:51:59.698576 2919 log.go:172] (0xc000979550) 
(0xc0006ece60) Stream removed, broadcasting: 3\nI0530 00:51:59.698587 2919 log.go:172] (0xc000979550) (0xc0006e05a0) Stream removed, broadcasting: 5\n" May 30 00:51:59.703: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 30 00:51:59.703: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 30 00:51:59.703: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4665 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 30 00:52:00.009: INFO: stderr: "I0530 00:51:59.897898 2938 log.go:172] (0xc00003ad10) (0xc00013b5e0) Create stream\nI0530 00:51:59.897972 2938 log.go:172] (0xc00003ad10) (0xc00013b5e0) Stream added, broadcasting: 1\nI0530 00:51:59.900987 2938 log.go:172] (0xc00003ad10) Reply frame received for 1\nI0530 00:51:59.901036 2938 log.go:172] (0xc00003ad10) (0xc0003841e0) Create stream\nI0530 00:51:59.901051 2938 log.go:172] (0xc00003ad10) (0xc0003841e0) Stream added, broadcasting: 3\nI0530 00:51:59.902363 2938 log.go:172] (0xc00003ad10) Reply frame received for 3\nI0530 00:51:59.902389 2938 log.go:172] (0xc00003ad10) (0xc00013be00) Create stream\nI0530 00:51:59.902396 2938 log.go:172] (0xc00003ad10) (0xc00013be00) Stream added, broadcasting: 5\nI0530 00:51:59.903382 2938 log.go:172] (0xc00003ad10) Reply frame received for 5\nI0530 00:51:59.966894 2938 log.go:172] (0xc00003ad10) Data frame received for 5\nI0530 00:51:59.966922 2938 log.go:172] (0xc00013be00) (5) Data frame handling\nI0530 00:51:59.966943 2938 log.go:172] (0xc00013be00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0530 00:52:00.000894 2938 log.go:172] (0xc00003ad10) Data frame received for 3\nI0530 00:52:00.000943 2938 log.go:172] (0xc0003841e0) (3) Data frame handling\nI0530 00:52:00.001084 2938 log.go:172] (0xc0003841e0) (3) Data frame sent\nI0530 00:52:00.001547 2938 log.go:172] (0xc00003ad10) Data frame received for 3\nI0530 00:52:00.001596 2938 log.go:172] (0xc0003841e0) (3) Data frame handling\nI0530 00:52:00.001631 2938 log.go:172] (0xc00003ad10) Data frame received for 5\nI0530 00:52:00.001659 2938 log.go:172] (0xc00013be00) (5) Data frame handling\nI0530 00:52:00.003372 2938 log.go:172] (0xc00003ad10) Data frame received for 1\nI0530 00:52:00.003396 2938 log.go:172] (0xc00013b5e0) (1) Data frame handling\nI0530 00:52:00.003409 2938 log.go:172] (0xc00013b5e0) (1) Data frame sent\nI0530 00:52:00.003425 2938 log.go:172] (0xc00003ad10) (0xc00013b5e0) Stream removed, broadcasting: 1\nI0530 00:52:00.003445 2938 log.go:172] (0xc00003ad10) Go away received\nI0530 00:52:00.003950 2938 log.go:172] (0xc00003ad10) (0xc00013b5e0) Stream removed, broadcasting: 1\nI0530 00:52:00.003978 2938 log.go:172] (0xc00003ad10) (0xc0003841e0) Stream removed, broadcasting: 3\nI0530 00:52:00.003990 2938 log.go:172] (0xc00003ad10) (0xc00013be00) Stream removed, broadcasting: 5\n" May 30 00:52:00.009: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 30 00:52:00.010: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 30 00:52:00.010: INFO: Waiting for statefulset status.replicas updated to 0 May 30 00:52:00.013: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 30 00:52:10.021: INFO: Waiting for pod ss-0 to enter Running - 
Ready=false, currently Running - Ready=false May 30 00:52:10.021: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 30 00:52:10.021: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 30 00:52:10.051: INFO: POD NODE PHASE GRACE CONDITIONS May 30 00:52:10.051: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:28 +0000 UTC }] May 30 00:52:10.051: INFO: ss-1 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:52:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:52:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:48 +0000 UTC }] May 30 00:52:10.051: INFO: ss-2 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:52:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:52:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:48 +0000 UTC }] May 30 00:52:10.051: INFO: May 30 00:52:10.051: INFO: StatefulSet ss has not reached scale 0, at 3 May 30 00:52:11.056: INFO: POD NODE PHASE GRACE CONDITIONS May 30 00:52:11.056: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:28 +0000 UTC }] May 30 00:52:11.056: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:52:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:52:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:48 +0000 UTC }] May 30 00:52:11.056: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:52:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:52:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:48 +0000 UTC }] May 30 00:52:11.056: INFO: May 30 
00:52:11.056: INFO: StatefulSet ss has not reached scale 0, at 3 May 30 00:52:12.080: INFO: POD NODE PHASE GRACE CONDITIONS May 30 00:52:12.080: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:28 +0000 UTC }] May 30 00:52:12.080: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:52:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:52:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:48 +0000 UTC }] May 30 00:52:12.080: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:52:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:52:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:48 +0000 UTC }] May 30 00:52:12.080: INFO: May 30 00:52:12.080: INFO: StatefulSet ss has not reached scale 0, at 3 May 30 00:52:13.086: INFO: POD NODE PHASE GRACE CONDITIONS May 30 00:52:13.086: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:28 +0000 UTC }] May 30 00:52:13.086: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:52:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:52:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:48 +0000 UTC }] May 30 00:52:13.086: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:52:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:52:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:48 +0000 UTC }] May 30 00:52:13.086: INFO: May 30 00:52:13.086: INFO: StatefulSet ss has not reached scale 0, at 3 May 30 00:52:14.091: INFO: POD NODE PHASE GRACE CONDITIONS May 30 00:52:14.091: INFO: ss-0 latest-worker Running 30s [{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-30 00:51:28 +0000 UTC }] May 30 00:52:14.091: INFO: May 30 00:52:14.091: INFO: StatefulSet ss has not reached scale 0, at 1 May 30 00:52:15.094: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.937890776s May 30 00:52:16.097: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.935086549s May 30 00:52:17.101: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.932068644s May 30 00:52:18.105: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.927650831s May 30 00:52:19.109: INFO: Verifying statefulset ss doesn't scale past 0 for another 923.882597ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-4665 May 30 00:52:20.113: INFO: Scaling statefulset ss to 0 May 30 00:52:20.128: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 30 00:52:20.133: INFO: Deleting all statefulsets in ns statefulset-4665 May 30 00:52:20.135: INFO: Scaling statefulset ss to 0 May 30 00:52:20.142: INFO: Waiting for statefulset status.replicas updated to 0 May 30 00:52:20.144: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:52:20.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4665" for this suite.
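------------------------------
The scale-down loop above repeats once per second until status.replicas reaches 0. The same operation can be driven by hand; a minimal sketch with kubectl, assuming the StatefulSet ss in namespace statefulset-4665 from this run (the jsonpath poll is illustrative):

# Scale the StatefulSet down to zero replicas
kubectl -n statefulset-4665 scale statefulset ss --replicas=0
# Poll until the controller reports zero replicas, as the test loop does
kubectl -n statefulset-4665 get statefulset ss -o jsonpath='{.status.replicas}'
------------------------------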
• [SLOW TEST:52.175 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":288,"completed":222,"skipped":3517,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:52:20.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs May 30 00:52:20.243: INFO: Waiting up to 5m0s for pod "pod-29adef44-b476-4463-82ac-196c08a1d51b" in namespace "emptydir-3412" to be "Succeeded or Failed" May 30 00:52:20.247: INFO: Pod "pod-29adef44-b476-4463-82ac-196c08a1d51b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.912941ms May 30 00:52:22.331: INFO: Pod "pod-29adef44-b476-4463-82ac-196c08a1d51b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088206264s May 30 00:52:24.336: INFO: Pod "pod-29adef44-b476-4463-82ac-196c08a1d51b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092888371s STEP: Saw pod success May 30 00:52:24.336: INFO: Pod "pod-29adef44-b476-4463-82ac-196c08a1d51b" satisfied condition "Succeeded or Failed" May 30 00:52:24.340: INFO: Trying to get logs from node latest-worker2 pod pod-29adef44-b476-4463-82ac-196c08a1d51b container test-container: STEP: delete the pod May 30 00:52:24.375: INFO: Waiting for pod pod-29adef44-b476-4463-82ac-196c08a1d51b to disappear May 30 00:52:24.394: INFO: Pod pod-29adef44-b476-4463-82ac-196c08a1d51b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:52:24.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3412" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":223,"skipped":3559,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:52:24.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium May 30 00:52:24.497: INFO: Waiting up to 5m0s for pod "pod-fc1a6e1f-97a7-4ecf-86ee-eb782d2b2f0b" in namespace "emptydir-5535" to be "Succeeded or Failed" May 30 00:52:24.502: INFO: Pod "pod-fc1a6e1f-97a7-4ecf-86ee-eb782d2b2f0b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.19506ms May 30 00:52:26.924: INFO: Pod "pod-fc1a6e1f-97a7-4ecf-86ee-eb782d2b2f0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.42703436s May 30 00:52:29.062: INFO: Pod "pod-fc1a6e1f-97a7-4ecf-86ee-eb782d2b2f0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.565340536s STEP: Saw pod success May 30 00:52:29.062: INFO: Pod "pod-fc1a6e1f-97a7-4ecf-86ee-eb782d2b2f0b" satisfied condition "Succeeded or Failed" May 30 00:52:29.066: INFO: Trying to get logs from node latest-worker pod pod-fc1a6e1f-97a7-4ecf-86ee-eb782d2b2f0b container test-container: STEP: delete the pod May 30 00:52:29.243: INFO: Waiting for pod pod-fc1a6e1f-97a7-4ecf-86ee-eb782d2b2f0b to disappear May 30 00:52:29.265: INFO: Pod pod-fc1a6e1f-97a7-4ecf-86ee-eb782d2b2f0b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:52:29.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5535" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":224,"skipped":3583,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:52:29.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:52:33.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9653" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":288,"completed":225,"skipped":3610,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:52:33.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server May 30 00:52:33.475: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:52:33.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-592" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":288,"completed":226,"skipped":3664,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:52:33.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 30 00:52:34.289: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 30 00:52:36.324: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726396754, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726396754, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726396754, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726396754, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 30 00:52:39.403: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 30 00:52:39.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1417-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:52:40.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7443" for this suite. STEP: Destroying namespace "webhook-7443-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.022 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":288,"completed":227,"skipped":3684,"failed":0} SSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:52:40.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-5401e252-aaea-4696-b534-7f0fab261a94 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:52:40.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8749" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":288,"completed":228,"skipped":3695,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:52:40.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 30 00:52:40.759: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ba625cef-f948-47f4-9e7b-47ffba8fc5cd" in namespace "downward-api-6383" to be "Succeeded or Failed" May 30 00:52:40.834: INFO: Pod "downwardapi-volume-ba625cef-f948-47f4-9e7b-47ffba8fc5cd": Phase="Pending", Reason="", readiness=false. Elapsed: 75.41977ms May 30 00:52:42.838: INFO: Pod "downwardapi-volume-ba625cef-f948-47f4-9e7b-47ffba8fc5cd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.079238993s May 30 00:52:44.863: INFO: Pod "downwardapi-volume-ba625cef-f948-47f4-9e7b-47ffba8fc5cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.104359338s STEP: Saw pod success May 30 00:52:44.863: INFO: Pod "downwardapi-volume-ba625cef-f948-47f4-9e7b-47ffba8fc5cd" satisfied condition "Succeeded or Failed" May 30 00:52:44.883: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-ba625cef-f948-47f4-9e7b-47ffba8fc5cd container client-container: STEP: delete the pod May 30 00:52:44.930: INFO: Waiting for pod downwardapi-volume-ba625cef-f948-47f4-9e7b-47ffba8fc5cd to disappear May 30 00:52:44.950: INFO: Pod downwardapi-volume-ba625cef-f948-47f4-9e7b-47ffba8fc5cd no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:52:44.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6383" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":229,"skipped":3720,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:52:44.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:52:45.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3911" for this suite. 
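------------------------------
The downward-api-6383 case above exposes the container's memory limit as a file through a downwardAPI volume. A minimal sketch of that wiring; the names, image, and 64Mi limit are illustrative, and resourceFieldRef is the mechanism under test:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
------------------------------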
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":288,"completed":230,"skipped":3731,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:52:45.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 30 00:52:53.300: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 30 00:52:53.350: INFO: Pod pod-with-poststart-http-hook still exists May 30 00:52:55.350: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 30 00:52:55.355: INFO: Pod pod-with-poststart-http-hook still exists May 30 00:52:57.350: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 30 00:52:57.355: INFO: Pod pod-with-poststart-http-hook still exists May 30 00:52:59.350: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 30 00:52:59.355: INFO: Pod pod-with-poststart-http-hook still exists May 30 00:53:01.350: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 30 00:53:01.355: INFO: Pod pod-with-poststart-http-hook still exists May 30 00:53:03.350: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 30 00:53:03.364: INFO: Pod pod-with-poststart-http-hook still exists May 30 00:53:05.350: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 30 00:53:05.354: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:53:05.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-749" for this suite. 
• [SLOW TEST:20.172 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":288,"completed":231,"skipped":3756,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:53:05.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1393 STEP: creating a pod May 30 00:53:05.447: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 --namespace=kubectl-9766 -- logs-generator --log-lines-total 100 --run-duration 20s' May 30 00:53:05.572: INFO: stderr: "" May 30 00:53:05.572: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. May 30 00:53:05.572: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 30 00:53:05.573: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-9766" to be "running and ready, or succeeded" May 30 00:53:05.631: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 58.365088ms May 30 00:53:07.635: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062820648s May 30 00:53:09.640: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.06767593s May 30 00:53:09.640: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 30 00:53:09.640: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true.
Pods: [logs-generator] STEP: checking for matching strings May 30 00:53:09.640: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9766' May 30 00:53:09.758: INFO: stderr: "" May 30 00:53:09.758: INFO: stdout: "I0530 00:53:08.140675 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/vk4 262\nI0530 00:53:08.340810 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/lv6 266\nI0530 00:53:08.540900 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/mhxh 340\nI0530 00:53:08.740992 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/jjd4 425\nI0530 00:53:08.940901 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/g7n 284\nI0530 00:53:09.140908 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/wnf 437\nI0530 00:53:09.340893 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/5gct 562\nI0530 00:53:09.540862 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/9kb9 406\nI0530 00:53:09.740903 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/qsx 507\n" STEP: limiting log lines May 30 00:53:09.758: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9766 --tail=1' May 30 00:53:09.875: INFO: stderr: "" May 30 00:53:09.875: INFO: stdout: "I0530 00:53:09.740903 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/qsx 507\n" May 30 00:53:09.875: INFO: got output "I0530 00:53:09.740903 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/qsx 507\n" STEP: limiting log bytes May 30 00:53:09.875: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9766 --limit-bytes=1' May 30 00:53:09.995: INFO: stderr: "" May 30 00:53:09.995: INFO: stdout: "I" May 30 00:53:09.995: INFO: got output "I" STEP: exposing timestamps May 30 00:53:09.995: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9766 --tail=1 --timestamps' May 30 00:53:10.113: INFO: stderr: "" May 30 00:53:10.113: INFO: stdout: "2020-05-30T00:53:09.941098652Z I0530 00:53:09.940896 1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/xnd 408\n" May 30 00:53:10.113: INFO: got output "2020-05-30T00:53:09.941098652Z I0530 00:53:09.940896 1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/xnd 408\n" STEP: restricting to a time range May 30 00:53:12.613: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9766 --since=1s' May 30 00:53:12.724: INFO: stderr: "" May 30 00:53:12.724: INFO: stdout: "I0530 00:53:11.740925 1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/6kdp 234\nI0530 00:53:11.940885 1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/9q9 551\nI0530 00:53:12.140937 1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/ldh 595\nI0530 00:53:12.340828 1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/xch2 475\nI0530 00:53:12.540950 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/tj6 292\n" May 30 00:53:12.724: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs
logs-generator logs-generator --namespace=kubectl-9766 --since=24h' May 30 00:53:12.839: INFO: stderr: "" May 30 00:53:12.839: INFO: stdout: "I0530 00:53:08.140675 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/vk4 262\nI0530 00:53:08.340810 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/lv6 266\nI0530 00:53:08.540900 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/mhxh 340\nI0530 00:53:08.740992 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/jjd4 425\nI0530 00:53:08.940901 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/g7n 284\nI0530 00:53:09.140908 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/wnf 437\nI0530 00:53:09.340893 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/5gct 562\nI0530 00:53:09.540862 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/9kb9 406\nI0530 00:53:09.740903 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/qsx 507\nI0530 00:53:09.940896 1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/xnd 408\nI0530 00:53:10.140851 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/r72 240\nI0530 00:53:10.340915 1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/h5w 378\nI0530 00:53:10.540845 1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/gc8 510\nI0530 00:53:10.740897 1 logs_generator.go:76] 13 GET /api/v1/namespaces/ns/pods/tfd 347\nI0530 00:53:10.940865 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/kube-system/pods/hnjv 593\nI0530 00:53:11.140817 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/2tt 490\nI0530 00:53:11.340882 1 logs_generator.go:76] 16 POST /api/v1/namespaces/kube-system/pods/z86h 512\nI0530 00:53:11.540897 1 logs_generator.go:76] 17 POST /api/v1/namespaces/default/pods/lwm 264\nI0530 00:53:11.740925 1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/6kdp 234\nI0530 00:53:11.940885 1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/9q9 551\nI0530 00:53:12.140937 1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/ldh 595\nI0530 00:53:12.340828 1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/xch2 475\nI0530 00:53:12.540950 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/tj6 292\nI0530 00:53:12.740849 1 logs_generator.go:76] 23 GET /api/v1/namespaces/ns/pods/vhm 555\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 May 30 00:53:12.839: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-9766' May 30 00:53:24.864: INFO: stderr: "" May 30 00:53:24.865: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:53:24.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9766" for this suite. 
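------------------------------
The four filtering checks above map one-to-one onto kubectl logs flags; in summary (pod and namespace taken from this run):

kubectl logs logs-generator -n kubectl-9766                         # full log
kubectl logs logs-generator -n kubectl-9766 --tail=1                # last line only
kubectl logs logs-generator -n kubectl-9766 --limit-bytes=1         # first byte only
kubectl logs logs-generator -n kubectl-9766 --tail=1 --timestamps   # prepend RFC3339 timestamps
kubectl logs logs-generator -n kubectl-9766 --since=1s              # only entries from the last second
------------------------------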
• [SLOW TEST:19.526 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":288,"completed":232,"skipped":3769,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:53:24.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 30 00:53:24.951: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b25d964-997e-4181-a27d-1e4c5f59f124" in namespace "downward-api-1930" to be "Succeeded or Failed" May 30 00:53:24.954: INFO: Pod "downwardapi-volume-7b25d964-997e-4181-a27d-1e4c5f59f124": Phase="Pending", Reason="", readiness=false. Elapsed: 3.244563ms May 30 00:53:26.984: INFO: Pod "downwardapi-volume-7b25d964-997e-4181-a27d-1e4c5f59f124": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033326576s May 30 00:53:28.990: INFO: Pod "downwardapi-volume-7b25d964-997e-4181-a27d-1e4c5f59f124": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038612835s STEP: Saw pod success May 30 00:53:28.990: INFO: Pod "downwardapi-volume-7b25d964-997e-4181-a27d-1e4c5f59f124" satisfied condition "Succeeded or Failed" May 30 00:53:28.992: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-7b25d964-997e-4181-a27d-1e4c5f59f124 container client-container: STEP: delete the pod May 30 00:53:29.028: INFO: Waiting for pod downwardapi-volume-7b25d964-997e-4181-a27d-1e4c5f59f124 to disappear May 30 00:53:29.046: INFO: Pod downwardapi-volume-7b25d964-997e-4181-a27d-1e4c5f59f124 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:53:29.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1930" for this suite. 
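------------------------------
This case is the counterpart of the earlier downwardapi-limit-demo sketch: the container sets no memory limit, and the same resourceFieldRef then resolves to the node's allocatable memory, which is what the Succeeded check verifies. The only change to that manifest:

# In the downwardapi-limit-demo sketch above, delete the resources block:
#   resources:
#     limits:
#       memory: 64Mi
# The memory_limit file then reports the node's allocatable memory.
------------------------------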
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":233,"skipped":3786,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:53:29.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 30 00:53:30.215: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 30 00:53:32.226: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726396810, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726396810, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726396810, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726396810, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 30 00:53:35.259: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 30 00:53:35.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:53:36.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"webhook-6397" for this suite. STEP: Destroying namespace "webhook-6397-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.438 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":288,"completed":234,"skipped":3795,"failed":0} SS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:53:36.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:53:41.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9281" for this suite. 
• [SLOW TEST:5.129 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":288,"completed":235,"skipped":3797,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:53:41.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9778 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9778;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9778 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9778;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9778.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9778.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9778.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9778.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9778.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9778.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9778.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9778.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9778.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9778.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9778.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9778.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9778.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 182.89.104.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.104.89.182_udp@PTR;check="$$(dig +tcp +noall +answer +search 182.89.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.89.182_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9778 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9778;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9778 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9778;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9778.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9778.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9778.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9778.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9778.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9778.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9778.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9778.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9778.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9778.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9778.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9778.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9778.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 182.89.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.89.182_udp@PTR;check="$$(dig +tcp +noall +answer +search 182.89.104.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.104.89.182_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 30 00:53:48.126: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9778/dns-test-59c3a310-846f-4acf-8d47-9a24fc75f308: the server could not find the requested resource (get pods dns-test-59c3a310-846f-4acf-8d47-9a24fc75f308) May 30 00:53:48.129: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9778/dns-test-59c3a310-846f-4acf-8d47-9a24fc75f308: the server could not find the requested resource (get pods dns-test-59c3a310-846f-4acf-8d47-9a24fc75f308) May 30 00:53:48.132: INFO: Unable to read wheezy_udp@dns-test-service.dns-9778 from pod dns-9778/dns-test-59c3a310-846f-4acf-8d47-9a24fc75f308: the server could not find the requested resource (get pods dns-test-59c3a310-846f-4acf-8d47-9a24fc75f308) May 30 00:53:48.134: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9778 from pod dns-9778/dns-test-59c3a310-846f-4acf-8d47-9a24fc75f308: the server could not find the requested resource (get pods dns-test-59c3a310-846f-4acf-8d47-9a24fc75f308) May 30 00:53:48.136: INFO: Unable to read wheezy_udp@dns-test-service.dns-9778.svc from pod dns-9778/dns-test-59c3a310-846f-4acf-8d47-9a24fc75f308: the server could not find the requested resource (get pods dns-test-59c3a310-846f-4acf-8d47-9a24fc75f308) May 30 00:53:48.138: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9778.svc from pod dns-9778/dns-test-59c3a310-846f-4acf-8d47-9a24fc75f308: the server could not find the requested resource (get pods dns-test-59c3a310-846f-4acf-8d47-9a24fc75f308) May 30 00:53:48.141: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9778.svc from pod dns-9778/dns-test-59c3a310-846f-4acf-8d47-9a24fc75f308: the server could not find the requested resource (get pods dns-test-59c3a310-846f-4acf-8d47-9a24fc75f308) May 30 00:53:48.144: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9778.svc from pod dns-9778/dns-test-59c3a310-846f-4acf-8d47-9a24fc75f308: the server could not find the requested resource (get pods dns-test-59c3a310-846f-4acf-8d47-9a24fc75f308) May 30 00:53:48.171: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9778/dns-test-59c3a310-846f-4acf-8d47-9a24fc75f308: the server could not find the requested resource (get pods dns-test-59c3a310-846f-4acf-8d47-9a24fc75f308) May 30 00:53:48.174: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9778/dns-test-59c3a310-846f-4acf-8d47-9a24fc75f308: the server could not find the requested resource (get pods dns-test-59c3a310-846f-4acf-8d47-9a24fc75f308) May 30 00:53:48.177: INFO: Unable to read jessie_udp@dns-test-service.dns-9778 from pod dns-9778/dns-test-59c3a310-846f-4acf-8d47-9a24fc75f308: the server could not find the requested resource (get pods dns-test-59c3a310-846f-4acf-8d47-9a24fc75f308) May 30 00:53:48.179: INFO: Unable to read jessie_tcp@dns-test-service.dns-9778 from pod dns-9778/dns-test-59c3a310-846f-4acf-8d47-9a24fc75f308: the server could not find the requested resource (get pods dns-test-59c3a310-846f-4acf-8d47-9a24fc75f308) May 30 00:53:48.182: INFO: Unable to read jessie_udp@dns-test-service.dns-9778.svc from pod dns-9778/dns-test-59c3a310-846f-4acf-8d47-9a24fc75f308: the server could not find the requested resource (get pods dns-test-59c3a310-846f-4acf-8d47-9a24fc75f308) May 30 00:53:48.184: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-9778.svc from pod dns-9778/dns-test-59c3a310-846f-4acf-8d47-9a24fc75f308: the server could not find the requested resource (get pods dns-test-59c3a310-846f-4acf-8d47-9a24fc75f308) May 30 00:53:48.186: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9778.svc from pod dns-9778/dns-test-59c3a310-846f-4acf-8d47-9a24fc75f308: the server could not find the requested resource (get pods dns-test-59c3a310-846f-4acf-8d47-9a24fc75f308) May 30 00:53:48.188: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9778.svc from pod dns-9778/dns-test-59c3a310-846f-4acf-8d47-9a24fc75f308: the server could not find the requested resource (get pods dns-test-59c3a310-846f-4acf-8d47-9a24fc75f308) May 30 00:53:48.204: INFO: Lookups using dns-9778/dns-test-59c3a310-846f-4acf-8d47-9a24fc75f308 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9778 wheezy_tcp@dns-test-service.dns-9778 wheezy_udp@dns-test-service.dns-9778.svc wheezy_tcp@dns-test-service.dns-9778.svc wheezy_udp@_http._tcp.dns-test-service.dns-9778.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9778.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9778 jessie_tcp@dns-test-service.dns-9778 jessie_udp@dns-test-service.dns-9778.svc jessie_tcp@dns-test-service.dns-9778.svc jessie_udp@_http._tcp.dns-test-service.dns-9778.svc jessie_tcp@_http._tcp.dns-test-service.dns-9778.svc]
[the same 16 records failed identically on the subsequent 5s polls at 00:53:53, 00:53:58, 00:54:03, 00:54:08, and 00:54:13; the repeated per-record "Unable to read ... the server could not find the requested resource" entries are elided]
May 30 00:54:13.317: INFO: Lookups using dns-9778/dns-test-59c3a310-846f-4acf-8d47-9a24fc75f308 failed for: [wheezy_udp@dns-test-service
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9778 wheezy_tcp@dns-test-service.dns-9778 wheezy_udp@dns-test-service.dns-9778.svc wheezy_tcp@dns-test-service.dns-9778.svc wheezy_udp@_http._tcp.dns-test-service.dns-9778.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9778.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9778 jessie_tcp@dns-test-service.dns-9778 jessie_udp@dns-test-service.dns-9778.svc jessie_tcp@dns-test-service.dns-9778.svc jessie_udp@_http._tcp.dns-test-service.dns-9778.svc jessie_tcp@_http._tcp.dns-test-service.dns-9778.svc] May 30 00:54:18.304: INFO: DNS probes using dns-9778/dns-test-59c3a310-846f-4acf-8d47-9a24fc75f308 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:54:19.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9778" for this suite. • [SLOW TEST:37.802 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":288,"completed":236,"skipped":3809,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:54:19.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-6990 STEP: creating a selector STEP: Creating the service pods in kubernetes May 30 00:54:19.491: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 30 00:54:19.588: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 30 00:54:21.601: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 30 00:54:23.592: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 30 00:54:25.592: INFO: The status of Pod netserver-0 is Running (Ready = false) May 30 00:54:27.592: INFO: The status of Pod netserver-0 is Running (Ready = false) May 30 00:54:29.592: INFO: The status of Pod netserver-0 is Running (Ready = false) May 30 00:54:31.593: INFO: The status of Pod netserver-0 is Running (Ready = false) May 30 00:54:33.593: INFO: The status of Pod netserver-0 is Running (Ready = false) May 30 00:54:35.593: INFO: The status of Pod netserver-0 is Running (Ready = false) May 30 
00:54:37.614: INFO: The status of Pod netserver-0 is Running (Ready = false) May 30 00:54:39.596: INFO: The status of Pod netserver-0 is Running (Ready = false) May 30 00:54:41.592: INFO: The status of Pod netserver-0 is Running (Ready = false) May 30 00:54:43.592: INFO: The status of Pod netserver-0 is Running (Ready = true) May 30 00:54:43.599: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 30 00:54:49.667: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.222:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6990 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 00:54:49.667: INFO: >>> kubeConfig: /root/.kube/config I0530 00:54:49.704523 7 log.go:172] (0xc002a40630) (0xc0017cb9a0) Create stream I0530 00:54:49.704552 7 log.go:172] (0xc002a40630) (0xc0017cb9a0) Stream added, broadcasting: 1 I0530 00:54:49.706883 7 log.go:172] (0xc002a40630) Reply frame received for 1 I0530 00:54:49.706967 7 log.go:172] (0xc002a40630) (0xc0017cbae0) Create stream I0530 00:54:49.706986 7 log.go:172] (0xc002a40630) (0xc0017cbae0) Stream added, broadcasting: 3 I0530 00:54:49.708216 7 log.go:172] (0xc002a40630) Reply frame received for 3 I0530 00:54:49.708259 7 log.go:172] (0xc002a40630) (0xc0017cbb80) Create stream I0530 00:54:49.708278 7 log.go:172] (0xc002a40630) (0xc0017cbb80) Stream added, broadcasting: 5 I0530 00:54:49.709366 7 log.go:172] (0xc002a40630) Reply frame received for 5 I0530 00:54:49.803400 7 log.go:172] (0xc002a40630) Data frame received for 5 I0530 00:54:49.803423 7 log.go:172] (0xc0017cbb80) (5) Data frame handling I0530 00:54:49.803449 7 log.go:172] (0xc002a40630) Data frame received for 3 I0530 00:54:49.803497 7 log.go:172] (0xc0017cbae0) (3) Data frame handling I0530 00:54:49.803569 7 log.go:172] (0xc0017cbae0) (3) Data frame sent I0530 00:54:49.803596 7 log.go:172] (0xc002a40630) Data frame received for 3 I0530 00:54:49.803621 7 log.go:172] (0xc0017cbae0) (3) Data frame handling I0530 00:54:49.805536 7 log.go:172] (0xc002a40630) Data frame received for 1 I0530 00:54:49.805551 7 log.go:172] (0xc0017cb9a0) (1) Data frame handling I0530 00:54:49.805558 7 log.go:172] (0xc0017cb9a0) (1) Data frame sent I0530 00:54:49.805566 7 log.go:172] (0xc002a40630) (0xc0017cb9a0) Stream removed, broadcasting: 1 I0530 00:54:49.805576 7 log.go:172] (0xc002a40630) Go away received I0530 00:54:49.805743 7 log.go:172] (0xc002a40630) (0xc0017cb9a0) Stream removed, broadcasting: 1 I0530 00:54:49.805778 7 log.go:172] (0xc002a40630) (0xc0017cbae0) Stream removed, broadcasting: 3 I0530 00:54:49.805796 7 log.go:172] (0xc002a40630) (0xc0017cbb80) Stream removed, broadcasting: 5 May 30 00:54:49.805: INFO: Found all expected endpoints: [netserver-0] May 30 00:54:49.809: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.203:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6990 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 00:54:49.809: INFO: >>> kubeConfig: /root/.kube/config I0530 00:54:49.841697 7 log.go:172] (0xc002dad970) (0xc0017cbea0) Create stream I0530 00:54:49.841720 7 log.go:172] (0xc002dad970) (0xc0017cbea0) Stream added, broadcasting: 1 I0530 00:54:49.844153 7 log.go:172] (0xc002dad970) Reply frame received for 1 I0530 00:54:49.844194 7 log.go:172] (0xc002dad970) 
(0xc00151adc0) Create stream I0530 00:54:49.844206 7 log.go:172] (0xc002dad970) (0xc00151adc0) Stream added, broadcasting: 3 I0530 00:54:49.845550 7 log.go:172] (0xc002dad970) Reply frame received for 3 I0530 00:54:49.845604 7 log.go:172] (0xc002dad970) (0xc00151b040) Create stream I0530 00:54:49.845622 7 log.go:172] (0xc002dad970) (0xc00151b040) Stream added, broadcasting: 5 I0530 00:54:49.846757 7 log.go:172] (0xc002dad970) Reply frame received for 5 I0530 00:54:49.916478 7 log.go:172] (0xc002dad970) Data frame received for 3 I0530 00:54:49.916505 7 log.go:172] (0xc00151adc0) (3) Data frame handling I0530 00:54:49.916520 7 log.go:172] (0xc00151adc0) (3) Data frame sent I0530 00:54:49.916650 7 log.go:172] (0xc002dad970) Data frame received for 5 I0530 00:54:49.916681 7 log.go:172] (0xc00151b040) (5) Data frame handling I0530 00:54:49.916720 7 log.go:172] (0xc002dad970) Data frame received for 3 I0530 00:54:49.916751 7 log.go:172] (0xc00151adc0) (3) Data frame handling I0530 00:54:49.919256 7 log.go:172] (0xc002dad970) Data frame received for 1 I0530 00:54:49.919287 7 log.go:172] (0xc0017cbea0) (1) Data frame handling I0530 00:54:49.919302 7 log.go:172] (0xc0017cbea0) (1) Data frame sent I0530 00:54:49.919331 7 log.go:172] (0xc002dad970) (0xc0017cbea0) Stream removed, broadcasting: 1 I0530 00:54:49.919360 7 log.go:172] (0xc002dad970) Go away received I0530 00:54:49.919515 7 log.go:172] (0xc002dad970) (0xc0017cbea0) Stream removed, broadcasting: 1 I0530 00:54:49.919560 7 log.go:172] (0xc002dad970) (0xc00151adc0) Stream removed, broadcasting: 3 I0530 00:54:49.919600 7 log.go:172] (0xc002dad970) (0xc00151b040) Stream removed, broadcasting: 5 May 30 00:54:49.919: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:54:49.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6990" for this suite. 
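The node-pod check above passes once curl, run from the host-network test pod, retrieves each netserver pod's hostname over HTTP on port 8080. A minimal hand-run equivalent, reusing the suite's namespace and pod names from this run (the target pod IP is illustrative; substitute the IP of a netserver pod in your cluster):
NS=pod-network-test-6990                 # namespace created by the test above
TARGET_IP=10.244.1.222                   # illustrative netserver pod IP
kubectl -n "$NS" exec host-test-container-pod -c agnhost -- \
  /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://$TARGET_IP:8080/hostName | grep -v '^\s*$'"
# Expected output: the serving pod's hostname (netserver-0 or netserver-1);
# the test succeeds once every expected hostname has been observed.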
• [SLOW TEST:30.501 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":237,"skipped":3820,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:54:49.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy May 30 00:54:49.994: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix044365118/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:54:50.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4093" for this suite. 
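The proxy test above exercises kubectl proxy's ability to listen on a Unix domain socket instead of a TCP port, with the API then reachable through that socket. A minimal sketch of the same check (socket path illustrative; requires curl 7.40+ for --unix-socket):
kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &   # serve the API over a local socket
KPID=$!
sleep 1                                                 # give the proxy a moment to bind
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
kill "$KPID"                                            # stop the background proxy
# The curl call should return the APIVersions object, mirroring the
# "retrieving proxy /api/ output" step above.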
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":288,"completed":238,"skipped":3822,"failed":0} SSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:54:50.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token May 30 00:54:50.692: INFO: created pod pod-service-account-defaultsa May 30 00:54:50.692: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 30 00:54:50.745: INFO: created pod pod-service-account-mountsa May 30 00:54:50.746: INFO: pod pod-service-account-mountsa service account token volume mount: true May 30 00:54:50.779: INFO: created pod pod-service-account-nomountsa May 30 00:54:50.779: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 30 00:54:50.794: INFO: created pod pod-service-account-defaultsa-mountspec May 30 00:54:50.794: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 30 00:54:50.831: INFO: created pod pod-service-account-mountsa-mountspec May 30 00:54:50.831: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 30 00:54:50.879: INFO: created pod pod-service-account-nomountsa-mountspec May 30 00:54:50.879: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 30 00:54:50.909: INFO: created pod pod-service-account-defaultsa-nomountspec May 30 00:54:50.909: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 30 00:54:50.940: INFO: created pod pod-service-account-mountsa-nomountspec May 30 00:54:50.940: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 30 00:54:50.976: INFO: created pod pod-service-account-nomountsa-nomountspec May 30 00:54:50.976: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:54:50.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3348" for this suite. 
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":288,"completed":239,"skipped":3825,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:54:51.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-08a0f0a4-1daa-4642-8bf4-103ed4f7bf66 STEP: Creating a pod to test consume secrets May 30 00:54:51.444: INFO: Waiting up to 5m0s for pod "pod-secrets-e0e0152a-5d48-4fdb-9c18-006c5a3ed3a3" in namespace "secrets-5009" to be "Succeeded or Failed" May 30 00:54:51.548: INFO: Pod "pod-secrets-e0e0152a-5d48-4fdb-9c18-006c5a3ed3a3": Phase="Pending", Reason="", readiness=false. Elapsed: 103.704486ms May 30 00:54:54.129: INFO: Pod "pod-secrets-e0e0152a-5d48-4fdb-9c18-006c5a3ed3a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.684474094s May 30 00:54:56.135: INFO: Pod "pod-secrets-e0e0152a-5d48-4fdb-9c18-006c5a3ed3a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.690595366s May 30 00:54:58.339: INFO: Pod "pod-secrets-e0e0152a-5d48-4fdb-9c18-006c5a3ed3a3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.895057579s May 30 00:55:00.369: INFO: Pod "pod-secrets-e0e0152a-5d48-4fdb-9c18-006c5a3ed3a3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.924204276s May 30 00:55:02.530: INFO: Pod "pod-secrets-e0e0152a-5d48-4fdb-9c18-006c5a3ed3a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.086096184s STEP: Saw pod success May 30 00:55:02.531: INFO: Pod "pod-secrets-e0e0152a-5d48-4fdb-9c18-006c5a3ed3a3" satisfied condition "Succeeded or Failed" May 30 00:55:02.534: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-e0e0152a-5d48-4fdb-9c18-006c5a3ed3a3 container secret-volume-test: STEP: delete the pod May 30 00:55:03.973: INFO: Waiting for pod pod-secrets-e0e0152a-5d48-4fdb-9c18-006c5a3ed3a3 to disappear May 30 00:55:04.000: INFO: Pod pod-secrets-e0e0152a-5d48-4fdb-9c18-006c5a3ed3a3 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:55:04.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5009" for this suite. 
• [SLOW TEST:13.186 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":240,"skipped":3828,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:55:04.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 30 00:55:05.091: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eac3ad2a-b9cd-4286-94ae-9fbdbe6cf157" in namespace "projected-6573" to be "Succeeded or Failed" May 30 00:55:05.224: INFO: Pod "downwardapi-volume-eac3ad2a-b9cd-4286-94ae-9fbdbe6cf157": Phase="Pending", Reason="", readiness=false. Elapsed: 133.34056ms May 30 00:55:07.414: INFO: Pod "downwardapi-volume-eac3ad2a-b9cd-4286-94ae-9fbdbe6cf157": Phase="Pending", Reason="", readiness=false. Elapsed: 2.323252767s May 30 00:55:09.434: INFO: Pod "downwardapi-volume-eac3ad2a-b9cd-4286-94ae-9fbdbe6cf157": Phase="Pending", Reason="", readiness=false. Elapsed: 4.343250167s May 30 00:55:11.439: INFO: Pod "downwardapi-volume-eac3ad2a-b9cd-4286-94ae-9fbdbe6cf157": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.347884949s STEP: Saw pod success May 30 00:55:11.439: INFO: Pod "downwardapi-volume-eac3ad2a-b9cd-4286-94ae-9fbdbe6cf157" satisfied condition "Succeeded or Failed" May 30 00:55:11.442: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-eac3ad2a-b9cd-4286-94ae-9fbdbe6cf157 container client-container: STEP: delete the pod May 30 00:55:11.484: INFO: Waiting for pod downwardapi-volume-eac3ad2a-b9cd-4286-94ae-9fbdbe6cf157 to disappear May 30 00:55:11.494: INFO: Pod downwardapi-volume-eac3ad2a-b9cd-4286-94ae-9fbdbe6cf157 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:55:11.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6573" for this suite. 
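The downward API test above mounts the container's CPU request into a file through a projected volume and reads it back. A minimal sketch (names illustrative; the divisor controls the unit the value is reported in):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.31
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m    # express the request in millicores
EOF
# Once the pod completes, `kubectl logs downward-cpu-demo` should print 250
# (the 250m request expressed in 1m units).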
• [SLOW TEST:7.083 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":241,"skipped":3843,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:55:11.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:55:27.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-847" for this suite. • [SLOW TEST:16.093 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":288,"completed":242,"skipped":3881,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:55:27.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-081ce6f6-6f00-4e0c-a93a-841172b53fc2 STEP: Creating a pod to test consume configMaps May 30 00:55:27.675: INFO: Waiting up to 5m0s for pod "pod-configmaps-b4e72975-c7f6-4e28-9847-374bc1ec0180" in namespace "configmap-272" to be "Succeeded or Failed" May 30 00:55:27.734: INFO: Pod "pod-configmaps-b4e72975-c7f6-4e28-9847-374bc1ec0180": Phase="Pending", Reason="", 
readiness=false. Elapsed: 58.655063ms May 30 00:55:29.738: INFO: Pod "pod-configmaps-b4e72975-c7f6-4e28-9847-374bc1ec0180": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062860631s May 30 00:55:31.742: INFO: Pod "pod-configmaps-b4e72975-c7f6-4e28-9847-374bc1ec0180": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067253958s STEP: Saw pod success May 30 00:55:31.742: INFO: Pod "pod-configmaps-b4e72975-c7f6-4e28-9847-374bc1ec0180" satisfied condition "Succeeded or Failed" May 30 00:55:31.746: INFO: Trying to get logs from node latest-worker pod pod-configmaps-b4e72975-c7f6-4e28-9847-374bc1ec0180 container configmap-volume-test: STEP: delete the pod May 30 00:55:31.951: INFO: Waiting for pod pod-configmaps-b4e72975-c7f6-4e28-9847-374bc1ec0180 to disappear May 30 00:55:32.003: INFO: Pod pod-configmaps-b4e72975-c7f6-4e28-9847-374bc1ec0180 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:55:32.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-272" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":243,"skipped":3907,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:55:32.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on node default medium May 30 00:55:32.179: INFO: Waiting up to 5m0s for pod "pod-a7c45544-ba00-410c-8bee-1ebac81c5108" in namespace "emptydir-2862" to be "Succeeded or Failed" May 30 00:55:32.191: INFO: Pod "pod-a7c45544-ba00-410c-8bee-1ebac81c5108": Phase="Pending", Reason="", readiness=false. Elapsed: 11.905874ms May 30 00:55:34.194: INFO: Pod "pod-a7c45544-ba00-410c-8bee-1ebac81c5108": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015591048s May 30 00:55:36.198: INFO: Pod "pod-a7c45544-ba00-410c-8bee-1ebac81c5108": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01879907s STEP: Saw pod success May 30 00:55:36.198: INFO: Pod "pod-a7c45544-ba00-410c-8bee-1ebac81c5108" satisfied condition "Succeeded or Failed" May 30 00:55:36.200: INFO: Trying to get logs from node latest-worker pod pod-a7c45544-ba00-410c-8bee-1ebac81c5108 container test-container: STEP: delete the pod May 30 00:55:36.228: INFO: Waiting for pod pod-a7c45544-ba00-410c-8bee-1ebac81c5108 to disappear May 30 00:55:36.278: INFO: Pod pod-a7c45544-ba00-410c-8bee-1ebac81c5108 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:55:36.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2862" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":244,"skipped":3936,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:55:36.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 30 00:55:36.351: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 30 00:55:39.283: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1930 create -f -' May 30 00:55:42.620: INFO: stderr: "" May 30 00:55:42.620: INFO: stdout: "e2e-test-crd-publish-openapi-9213-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 30 00:55:42.620: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1930 delete e2e-test-crd-publish-openapi-9213-crds test-cr' May 30 00:55:42.750: INFO: stderr: "" May 30 00:55:42.750: INFO: stdout: "e2e-test-crd-publish-openapi-9213-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 30 00:55:42.750: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1930 apply -f -' May 30 00:55:43.014: INFO: stderr: "" May 30 00:55:43.014: INFO: stdout: "e2e-test-crd-publish-openapi-9213-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 30 00:55:43.014: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1930 delete e2e-test-crd-publish-openapi-9213-crds test-cr' May 30 00:55:43.120: INFO: stderr: "" May 30 00:55:43.120: INFO: stdout: "e2e-test-crd-publish-openapi-9213-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without 
validation schema May 30 00:55:43.120: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9213-crds' May 30 00:55:43.399: INFO: stderr: "" May 30 00:55:43.399: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9213-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:55:45.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1930" for this suite. • [SLOW TEST:9.018 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":288,"completed":245,"skipped":3952,"failed":0} [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:55:45.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-809 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-809 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-809 May 30 00:55:45.428: INFO: Found 0 stateful pods, waiting for 1 May 30 00:55:55.433: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 30 00:55:55.437: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-809 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 30 00:55:55.737: INFO: stderr: "I0530 00:55:55.575637 3267 log.go:172] (0xc000abd340) (0xc000aa2320) Create stream\nI0530 00:55:55.575706 3267 log.go:172] (0xc000abd340) (0xc000aa2320) Stream added, broadcasting: 1\nI0530 00:55:55.586862 3267 log.go:172] (0xc000abd340) Reply frame received for 1\nI0530 
00:55:55.586925 3267 log.go:172] (0xc000abd340) (0xc00073cdc0) Create stream\nI0530 00:55:55.586945 3267 log.go:172] (0xc000abd340) (0xc00073cdc0) Stream added, broadcasting: 3\nI0530 00:55:55.588639 3267 log.go:172] (0xc000abd340) Reply frame received for 3\nI0530 00:55:55.588683 3267 log.go:172] (0xc000abd340) (0xc000582140) Create stream\nI0530 00:55:55.588695 3267 log.go:172] (0xc000abd340) (0xc000582140) Stream added, broadcasting: 5\nI0530 00:55:55.590944 3267 log.go:172] (0xc000abd340) Reply frame received for 5\nI0530 00:55:55.647178 3267 log.go:172] (0xc000abd340) Data frame received for 5\nI0530 00:55:55.647196 3267 log.go:172] (0xc000582140) (5) Data frame handling\nI0530 00:55:55.647207 3267 log.go:172] (0xc000582140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0530 00:55:55.728323 3267 log.go:172] (0xc000abd340) Data frame received for 3\nI0530 00:55:55.728380 3267 log.go:172] (0xc00073cdc0) (3) Data frame handling\nI0530 00:55:55.728399 3267 log.go:172] (0xc00073cdc0) (3) Data frame sent\nI0530 00:55:55.728444 3267 log.go:172] (0xc000abd340) Data frame received for 5\nI0530 00:55:55.728459 3267 log.go:172] (0xc000582140) (5) Data frame handling\nI0530 00:55:55.729015 3267 log.go:172] (0xc000abd340) Data frame received for 3\nI0530 00:55:55.729039 3267 log.go:172] (0xc00073cdc0) (3) Data frame handling\nI0530 00:55:55.730657 3267 log.go:172] (0xc000abd340) Data frame received for 1\nI0530 00:55:55.730689 3267 log.go:172] (0xc000aa2320) (1) Data frame handling\nI0530 00:55:55.730710 3267 log.go:172] (0xc000aa2320) (1) Data frame sent\nI0530 00:55:55.730737 3267 log.go:172] (0xc000abd340) (0xc000aa2320) Stream removed, broadcasting: 1\nI0530 00:55:55.730770 3267 log.go:172] (0xc000abd340) Go away received\nI0530 00:55:55.731375 3267 log.go:172] (0xc000abd340) (0xc000aa2320) Stream removed, broadcasting: 1\nI0530 00:55:55.731396 3267 log.go:172] (0xc000abd340) (0xc00073cdc0) Stream removed, broadcasting: 3\nI0530 00:55:55.731407 3267 log.go:172] (0xc000abd340) (0xc000582140) Stream removed, broadcasting: 5\n" May 30 00:55:55.737: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 30 00:55:55.737: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 30 00:55:55.741: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 30 00:56:05.746: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 30 00:56:05.746: INFO: Waiting for statefulset status.replicas updated to 0 May 30 00:56:05.818: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999412s May 30 00:56:06.824: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.938239112s May 30 00:56:07.828: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.932058905s May 30 00:56:08.832: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.927892498s May 30 00:56:09.856: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.923504895s May 30 00:56:10.884: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.900307797s May 30 00:56:11.887: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.872526327s May 30 00:56:12.892: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.869193719s May 30 00:56:13.896: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.864143376s May 30 00:56:14.939: 
INFO: Verifying statefulset ss doesn't scale past 1 for another 859.838307ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-809 May 30 00:56:15.944: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-809 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 30 00:56:16.175: INFO: stderr: "I0530 00:56:16.092033 3287 log.go:172] (0xc000b71130) (0xc0006b65a0) Create stream\nI0530 00:56:16.092097 3287 log.go:172] (0xc000b71130) (0xc0006b65a0) Stream added, broadcasting: 1\nI0530 00:56:16.096226 3287 log.go:172] (0xc000b71130) Reply frame received for 1\nI0530 00:56:16.096279 3287 log.go:172] (0xc000b71130) (0xc0000ebae0) Create stream\nI0530 00:56:16.096295 3287 log.go:172] (0xc000b71130) (0xc0000ebae0) Stream added, broadcasting: 3\nI0530 00:56:16.097396 3287 log.go:172] (0xc000b71130) Reply frame received for 3\nI0530 00:56:16.097439 3287 log.go:172] (0xc000b71130) (0xc00069f4a0) Create stream\nI0530 00:56:16.097467 3287 log.go:172] (0xc000b71130) (0xc00069f4a0) Stream added, broadcasting: 5\nI0530 00:56:16.098394 3287 log.go:172] (0xc000b71130) Reply frame received for 5\nI0530 00:56:16.168967 3287 log.go:172] (0xc000b71130) Data frame received for 3\nI0530 00:56:16.169002 3287 log.go:172] (0xc0000ebae0) (3) Data frame handling\nI0530 00:56:16.169010 3287 log.go:172] (0xc0000ebae0) (3) Data frame sent\nI0530 00:56:16.169029 3287 log.go:172] (0xc000b71130) Data frame received for 5\nI0530 00:56:16.169061 3287 log.go:172] (0xc00069f4a0) (5) Data frame handling\nI0530 00:56:16.169071 3287 log.go:172] (0xc00069f4a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0530 00:56:16.169086 3287 log.go:172] (0xc000b71130) Data frame received for 5\nI0530 00:56:16.169256 3287 log.go:172] (0xc00069f4a0) (5) Data frame handling\nI0530 00:56:16.169279 3287 log.go:172] (0xc000b71130) Data frame received for 3\nI0530 00:56:16.169290 3287 log.go:172] (0xc0000ebae0) (3) Data frame handling\nI0530 00:56:16.170319 3287 log.go:172] (0xc000b71130) Data frame received for 1\nI0530 00:56:16.170336 3287 log.go:172] (0xc0006b65a0) (1) Data frame handling\nI0530 00:56:16.170354 3287 log.go:172] (0xc0006b65a0) (1) Data frame sent\nI0530 00:56:16.170371 3287 log.go:172] (0xc000b71130) (0xc0006b65a0) Stream removed, broadcasting: 1\nI0530 00:56:16.170390 3287 log.go:172] (0xc000b71130) Go away received\nI0530 00:56:16.170645 3287 log.go:172] (0xc000b71130) (0xc0006b65a0) Stream removed, broadcasting: 1\nI0530 00:56:16.170659 3287 log.go:172] (0xc000b71130) (0xc0000ebae0) Stream removed, broadcasting: 3\nI0530 00:56:16.170666 3287 log.go:172] (0xc000b71130) (0xc00069f4a0) Stream removed, broadcasting: 5\n" May 30 00:56:16.175: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 30 00:56:16.175: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 30 00:56:16.182: INFO: Found 1 stateful pods, waiting for 3 May 30 00:56:26.213: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 30 00:56:26.213: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 30 00:56:26.213: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: 
Scale down will halt with unhealthy stateful pod May 30 00:56:26.230: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-809 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 30 00:56:26.492: INFO: stderr: "I0530 00:56:26.375813 3307 log.go:172] (0xc0006749a0) (0xc000593900) Create stream\nI0530 00:56:26.375885 3307 log.go:172] (0xc0006749a0) (0xc000593900) Stream added, broadcasting: 1\nI0530 00:56:26.379353 3307 log.go:172] (0xc0006749a0) Reply frame received for 1\nI0530 00:56:26.379405 3307 log.go:172] (0xc0006749a0) (0xc0006ae500) Create stream\nI0530 00:56:26.379419 3307 log.go:172] (0xc0006749a0) (0xc0006ae500) Stream added, broadcasting: 3\nI0530 00:56:26.380510 3307 log.go:172] (0xc0006749a0) Reply frame received for 3\nI0530 00:56:26.380541 3307 log.go:172] (0xc0006749a0) (0xc000327180) Create stream\nI0530 00:56:26.380554 3307 log.go:172] (0xc0006749a0) (0xc000327180) Stream added, broadcasting: 5\nI0530 00:56:26.381866 3307 log.go:172] (0xc0006749a0) Reply frame received for 5\nI0530 00:56:26.486437 3307 log.go:172] (0xc0006749a0) Data frame received for 5\nI0530 00:56:26.486671 3307 log.go:172] (0xc000327180) (5) Data frame handling\nI0530 00:56:26.486710 3307 log.go:172] (0xc000327180) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0530 00:56:26.486743 3307 log.go:172] (0xc0006749a0) Data frame received for 3\nI0530 00:56:26.486762 3307 log.go:172] (0xc0006ae500) (3) Data frame handling\nI0530 00:56:26.486780 3307 log.go:172] (0xc0006ae500) (3) Data frame sent\nI0530 00:56:26.486804 3307 log.go:172] (0xc0006749a0) Data frame received for 3\nI0530 00:56:26.486831 3307 log.go:172] (0xc0006749a0) Data frame received for 5\nI0530 00:56:26.486856 3307 log.go:172] (0xc000327180) (5) Data frame handling\nI0530 00:56:26.486883 3307 log.go:172] (0xc0006ae500) (3) Data frame handling\nI0530 00:56:26.488537 3307 log.go:172] (0xc0006749a0) Data frame received for 1\nI0530 00:56:26.488557 3307 log.go:172] (0xc000593900) (1) Data frame handling\nI0530 00:56:26.488567 3307 log.go:172] (0xc000593900) (1) Data frame sent\nI0530 00:56:26.488583 3307 log.go:172] (0xc0006749a0) (0xc000593900) Stream removed, broadcasting: 1\nI0530 00:56:26.488632 3307 log.go:172] (0xc0006749a0) Go away received\nI0530 00:56:26.488893 3307 log.go:172] (0xc0006749a0) (0xc000593900) Stream removed, broadcasting: 1\nI0530 00:56:26.488907 3307 log.go:172] (0xc0006749a0) (0xc0006ae500) Stream removed, broadcasting: 3\nI0530 00:56:26.488913 3307 log.go:172] (0xc0006749a0) (0xc000327180) Stream removed, broadcasting: 5\n" May 30 00:56:26.492: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 30 00:56:26.492: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 30 00:56:26.493: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-809 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 30 00:56:26.726: INFO: stderr: "I0530 00:56:26.622832 3328 log.go:172] (0xc000986000) (0xc0005388c0) Create stream\nI0530 00:56:26.622919 3328 log.go:172] (0xc000986000) (0xc0005388c0) Stream added, broadcasting: 1\nI0530 00:56:26.626349 3328 log.go:172] (0xc000986000) Reply frame received for 1\nI0530 00:56:26.626395 3328 log.go:172] (0xc000986000) (0xc000538f00) 
Create stream\nI0530 00:56:26.626411 3328 log.go:172] (0xc000986000) (0xc000538f00) Stream added, broadcasting: 3\nI0530 00:56:26.627313 3328 log.go:172] (0xc000986000) Reply frame received for 3\nI0530 00:56:26.627358 3328 log.go:172] (0xc000986000) (0xc00031f180) Create stream\nI0530 00:56:26.627372 3328 log.go:172] (0xc000986000) (0xc00031f180) Stream added, broadcasting: 5\nI0530 00:56:26.628214 3328 log.go:172] (0xc000986000) Reply frame received for 5\nI0530 00:56:26.692704 3328 log.go:172] (0xc000986000) Data frame received for 5\nI0530 00:56:26.692731 3328 log.go:172] (0xc00031f180) (5) Data frame handling\nI0530 00:56:26.692747 3328 log.go:172] (0xc00031f180) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0530 00:56:26.717577 3328 log.go:172] (0xc000986000) Data frame received for 3\nI0530 00:56:26.717616 3328 log.go:172] (0xc000538f00) (3) Data frame handling\nI0530 00:56:26.717652 3328 log.go:172] (0xc000538f00) (3) Data frame sent\nI0530 00:56:26.717907 3328 log.go:172] (0xc000986000) Data frame received for 5\nI0530 00:56:26.717933 3328 log.go:172] (0xc00031f180) (5) Data frame handling\nI0530 00:56:26.717951 3328 log.go:172] (0xc000986000) Data frame received for 3\nI0530 00:56:26.717965 3328 log.go:172] (0xc000538f00) (3) Data frame handling\nI0530 00:56:26.719605 3328 log.go:172] (0xc000986000) Data frame received for 1\nI0530 00:56:26.719638 3328 log.go:172] (0xc0005388c0) (1) Data frame handling\nI0530 00:56:26.719677 3328 log.go:172] (0xc0005388c0) (1) Data frame sent\nI0530 00:56:26.719710 3328 log.go:172] (0xc000986000) (0xc0005388c0) Stream removed, broadcasting: 1\nI0530 00:56:26.719757 3328 log.go:172] (0xc000986000) Go away received\nI0530 00:56:26.720236 3328 log.go:172] (0xc000986000) (0xc0005388c0) Stream removed, broadcasting: 1\nI0530 00:56:26.720261 3328 log.go:172] (0xc000986000) (0xc000538f00) Stream removed, broadcasting: 3\nI0530 00:56:26.720274 3328 log.go:172] (0xc000986000) (0xc00031f180) Stream removed, broadcasting: 5\n" May 30 00:56:26.726: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 30 00:56:26.726: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 30 00:56:26.726: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-809 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 30 00:56:26.995: INFO: stderr: "I0530 00:56:26.858146 3349 log.go:172] (0xc0009e0fd0) (0xc000afc3c0) Create stream\nI0530 00:56:26.858215 3349 log.go:172] (0xc0009e0fd0) (0xc000afc3c0) Stream added, broadcasting: 1\nI0530 00:56:26.863365 3349 log.go:172] (0xc0009e0fd0) Reply frame received for 1\nI0530 00:56:26.863513 3349 log.go:172] (0xc0009e0fd0) (0xc00056c5a0) Create stream\nI0530 00:56:26.863539 3349 log.go:172] (0xc0009e0fd0) (0xc00056c5a0) Stream added, broadcasting: 3\nI0530 00:56:26.864551 3349 log.go:172] (0xc0009e0fd0) Reply frame received for 3\nI0530 00:56:26.864580 3349 log.go:172] (0xc0009e0fd0) (0xc000474dc0) Create stream\nI0530 00:56:26.864594 3349 log.go:172] (0xc0009e0fd0) (0xc000474dc0) Stream added, broadcasting: 5\nI0530 00:56:26.865982 3349 log.go:172] (0xc0009e0fd0) Reply frame received for 5\nI0530 00:56:26.935029 3349 log.go:172] (0xc0009e0fd0) Data frame received for 5\nI0530 00:56:26.935053 3349 log.go:172] (0xc000474dc0) (5) Data frame handling\nI0530 00:56:26.935066 3349 
log.go:172] (0xc000474dc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0530 00:56:26.985896 3349 log.go:172] (0xc0009e0fd0) Data frame received for 3\nI0530 00:56:26.985941 3349 log.go:172] (0xc00056c5a0) (3) Data frame handling\nI0530 00:56:26.985979 3349 log.go:172] (0xc00056c5a0) (3) Data frame sent\nI0530 00:56:26.985997 3349 log.go:172] (0xc0009e0fd0) Data frame received for 3\nI0530 00:56:26.986015 3349 log.go:172] (0xc00056c5a0) (3) Data frame handling\nI0530 00:56:26.986038 3349 log.go:172] (0xc0009e0fd0) Data frame received for 5\nI0530 00:56:26.986056 3349 log.go:172] (0xc000474dc0) (5) Data frame handling\nI0530 00:56:26.988166 3349 log.go:172] (0xc0009e0fd0) Data frame received for 1\nI0530 00:56:26.988185 3349 log.go:172] (0xc000afc3c0) (1) Data frame handling\nI0530 00:56:26.988198 3349 log.go:172] (0xc000afc3c0) (1) Data frame sent\nI0530 00:56:26.988206 3349 log.go:172] (0xc0009e0fd0) (0xc000afc3c0) Stream removed, broadcasting: 1\nI0530 00:56:26.988223 3349 log.go:172] (0xc0009e0fd0) Go away received\nI0530 00:56:26.988673 3349 log.go:172] (0xc0009e0fd0) (0xc000afc3c0) Stream removed, broadcasting: 1\nI0530 00:56:26.988714 3349 log.go:172] (0xc0009e0fd0) (0xc00056c5a0) Stream removed, broadcasting: 3\nI0530 00:56:26.988750 3349 log.go:172] (0xc0009e0fd0) (0xc000474dc0) Stream removed, broadcasting: 5\n" May 30 00:56:26.995: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 30 00:56:26.995: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 30 00:56:26.995: INFO: Waiting for statefulset status.replicas updated to 0 May 30 00:56:27.003: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 30 00:56:37.012: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 30 00:56:37.012: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 30 00:56:37.012: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 30 00:56:37.032: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999564s May 30 00:56:38.040: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.987749714s May 30 00:56:39.046: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.979285707s May 30 00:56:40.055: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.97405496s May 30 00:56:41.060: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.964578315s May 30 00:56:42.065: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.959302882s May 30 00:56:43.070: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.954603753s May 30 00:56:44.075: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.949887275s May 30 00:56:45.080: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.945027702s May 30 00:56:46.085: INFO: Verifying statefulset ss doesn't scale past 3 for another 939.669822ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-809 May 30 00:56:47.091: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-809 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 30 00:56:47.339: INFO: stderr: "I0530 00:56:47.230746 3371 
log.go:172] (0xc000c22c60) (0xc000b5a5a0) Create stream\nI0530 00:56:47.230807 3371 log.go:172] (0xc000c22c60) (0xc000b5a5a0) Stream added, broadcasting: 1\nI0530 00:56:47.235700 3371 log.go:172] (0xc000c22c60) Reply frame received for 1\nI0530 00:56:47.235737 3371 log.go:172] (0xc000c22c60) (0xc00084af00) Create stream\nI0530 00:56:47.235747 3371 log.go:172] (0xc000c22c60) (0xc00084af00) Stream added, broadcasting: 3\nI0530 00:56:47.236492 3371 log.go:172] (0xc000c22c60) Reply frame received for 3\nI0530 00:56:47.236525 3371 log.go:172] (0xc000c22c60) (0xc00062a280) Create stream\nI0530 00:56:47.236538 3371 log.go:172] (0xc000c22c60) (0xc00062a280) Stream added, broadcasting: 5\nI0530 00:56:47.237584 3371 log.go:172] (0xc000c22c60) Reply frame received for 5\nI0530 00:56:47.330137 3371 log.go:172] (0xc000c22c60) Data frame received for 5\nI0530 00:56:47.330177 3371 log.go:172] (0xc00062a280) (5) Data frame handling\nI0530 00:56:47.330186 3371 log.go:172] (0xc00062a280) (5) Data frame sent\nI0530 00:56:47.330191 3371 log.go:172] (0xc000c22c60) Data frame received for 5\nI0530 00:56:47.330196 3371 log.go:172] (0xc00062a280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0530 00:56:47.330211 3371 log.go:172] (0xc000c22c60) Data frame received for 3\nI0530 00:56:47.330216 3371 log.go:172] (0xc00084af00) (3) Data frame handling\nI0530 00:56:47.330225 3371 log.go:172] (0xc00084af00) (3) Data frame sent\nI0530 00:56:47.330236 3371 log.go:172] (0xc000c22c60) Data frame received for 3\nI0530 00:56:47.330242 3371 log.go:172] (0xc00084af00) (3) Data frame handling\nI0530 00:56:47.331390 3371 log.go:172] (0xc000c22c60) Data frame received for 1\nI0530 00:56:47.331400 3371 log.go:172] (0xc000b5a5a0) (1) Data frame handling\nI0530 00:56:47.331406 3371 log.go:172] (0xc000b5a5a0) (1) Data frame sent\nI0530 00:56:47.331631 3371 log.go:172] (0xc000c22c60) (0xc000b5a5a0) Stream removed, broadcasting: 1\nI0530 00:56:47.331731 3371 log.go:172] (0xc000c22c60) Go away received\nI0530 00:56:47.331925 3371 log.go:172] (0xc000c22c60) (0xc000b5a5a0) Stream removed, broadcasting: 1\nI0530 00:56:47.331941 3371 log.go:172] (0xc000c22c60) (0xc00084af00) Stream removed, broadcasting: 3\nI0530 00:56:47.331950 3371 log.go:172] (0xc000c22c60) (0xc00062a280) Stream removed, broadcasting: 5\n" May 30 00:56:47.339: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 30 00:56:47.339: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 30 00:56:47.339: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-809 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 30 00:56:47.559: INFO: stderr: "I0530 00:56:47.463017 3392 log.go:172] (0xc0009c56b0) (0xc000b94460) Create stream\nI0530 00:56:47.463088 3392 log.go:172] (0xc0009c56b0) (0xc000b94460) Stream added, broadcasting: 1\nI0530 00:56:47.466804 3392 log.go:172] (0xc0009c56b0) Reply frame received for 1\nI0530 00:56:47.466844 3392 log.go:172] (0xc0009c56b0) (0xc00084ef00) Create stream\nI0530 00:56:47.466855 3392 log.go:172] (0xc0009c56b0) (0xc00084ef00) Stream added, broadcasting: 3\nI0530 00:56:47.467522 3392 log.go:172] (0xc0009c56b0) Reply frame received for 3\nI0530 00:56:47.467560 3392 log.go:172] (0xc0009c56b0) (0xc000510280) Create stream\nI0530 00:56:47.467576 3392 log.go:172] (0xc0009c56b0) (0xc000510280) 
Stream added, broadcasting: 5\nI0530 00:56:47.468285 3392 log.go:172] (0xc0009c56b0) Reply frame received for 5\nI0530 00:56:47.554715 3392 log.go:172] (0xc0009c56b0) Data frame received for 5\nI0530 00:56:47.554738 3392 log.go:172] (0xc000510280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0530 00:56:47.554757 3392 log.go:172] (0xc0009c56b0) Data frame received for 3\nI0530 00:56:47.554790 3392 log.go:172] (0xc00084ef00) (3) Data frame handling\nI0530 00:56:47.554803 3392 log.go:172] (0xc00084ef00) (3) Data frame sent\nI0530 00:56:47.554810 3392 log.go:172] (0xc0009c56b0) Data frame received for 3\nI0530 00:56:47.554818 3392 log.go:172] (0xc00084ef00) (3) Data frame handling\nI0530 00:56:47.554843 3392 log.go:172] (0xc000510280) (5) Data frame sent\nI0530 00:56:47.554850 3392 log.go:172] (0xc0009c56b0) Data frame received for 5\nI0530 00:56:47.554855 3392 log.go:172] (0xc000510280) (5) Data frame handling\nI0530 00:56:47.555945 3392 log.go:172] (0xc0009c56b0) Data frame received for 1\nI0530 00:56:47.555959 3392 log.go:172] (0xc000b94460) (1) Data frame handling\nI0530 00:56:47.555967 3392 log.go:172] (0xc000b94460) (1) Data frame sent\nI0530 00:56:47.555976 3392 log.go:172] (0xc0009c56b0) (0xc000b94460) Stream removed, broadcasting: 1\nI0530 00:56:47.555993 3392 log.go:172] (0xc0009c56b0) Go away received\nI0530 00:56:47.556405 3392 log.go:172] (0xc0009c56b0) (0xc000b94460) Stream removed, broadcasting: 1\nI0530 00:56:47.556424 3392 log.go:172] (0xc0009c56b0) (0xc00084ef00) Stream removed, broadcasting: 3\nI0530 00:56:47.556434 3392 log.go:172] (0xc0009c56b0) (0xc000510280) Stream removed, broadcasting: 5\n" May 30 00:56:47.560: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 30 00:56:47.560: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 30 00:56:47.560: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-809 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 30 00:56:47.780: INFO: stderr: "I0530 00:56:47.694047 3414 log.go:172] (0xc0000e0370) (0xc000452820) Create stream\nI0530 00:56:47.694146 3414 log.go:172] (0xc0000e0370) (0xc000452820) Stream added, broadcasting: 1\nI0530 00:56:47.695917 3414 log.go:172] (0xc0000e0370) Reply frame received for 1\nI0530 00:56:47.695942 3414 log.go:172] (0xc0000e0370) (0xc000260500) Create stream\nI0530 00:56:47.695951 3414 log.go:172] (0xc0000e0370) (0xc000260500) Stream added, broadcasting: 3\nI0530 00:56:47.697281 3414 log.go:172] (0xc0000e0370) Reply frame received for 3\nI0530 00:56:47.697341 3414 log.go:172] (0xc0000e0370) (0xc00015dea0) Create stream\nI0530 00:56:47.697365 3414 log.go:172] (0xc0000e0370) (0xc00015dea0) Stream added, broadcasting: 5\nI0530 00:56:47.698443 3414 log.go:172] (0xc0000e0370) Reply frame received for 5\nI0530 00:56:47.771562 3414 log.go:172] (0xc0000e0370) Data frame received for 3\nI0530 00:56:47.771624 3414 log.go:172] (0xc000260500) (3) Data frame handling\nI0530 00:56:47.771648 3414 log.go:172] (0xc000260500) (3) Data frame sent\nI0530 00:56:47.771676 3414 log.go:172] (0xc0000e0370) Data frame received for 3\nI0530 00:56:47.771689 3414 log.go:172] (0xc000260500) (3) Data frame handling\nI0530 00:56:47.771720 3414 log.go:172] (0xc0000e0370) Data frame received for 5\nI0530 00:56:47.771754 3414 log.go:172] (0xc00015dea0) (5) Data frame 
handling\nI0530 00:56:47.771776 3414 log.go:172] (0xc00015dea0) (5) Data frame sent\nI0530 00:56:47.771795 3414 log.go:172] (0xc0000e0370) Data frame received for 5\nI0530 00:56:47.771810 3414 log.go:172] (0xc00015dea0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0530 00:56:47.773643 3414 log.go:172] (0xc0000e0370) Data frame received for 1\nI0530 00:56:47.773692 3414 log.go:172] (0xc000452820) (1) Data frame handling\nI0530 00:56:47.773727 3414 log.go:172] (0xc000452820) (1) Data frame sent\nI0530 00:56:47.773767 3414 log.go:172] (0xc0000e0370) (0xc000452820) Stream removed, broadcasting: 1\nI0530 00:56:47.773802 3414 log.go:172] (0xc0000e0370) Go away received\nI0530 00:56:47.774292 3414 log.go:172] (0xc0000e0370) (0xc000452820) Stream removed, broadcasting: 1\nI0530 00:56:47.774321 3414 log.go:172] (0xc0000e0370) (0xc000260500) Stream removed, broadcasting: 3\nI0530 00:56:47.774334 3414 log.go:172] (0xc0000e0370) (0xc00015dea0) Stream removed, broadcasting: 5\n" May 30 00:56:47.780: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 30 00:56:47.780: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 30 00:56:47.780: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 30 00:57:07.801: INFO: Deleting all statefulset in ns statefulset-809 May 30 00:57:07.804: INFO: Scaling statefulset ss to 0 May 30 00:57:07.814: INFO: Waiting for statefulset status.replicas updated to 0 May 30 00:57:07.817: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:57:07.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-809" for this suite. 
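The scale gating recorded above is driven entirely by toggling pod readiness: each `mv` moves index.html out of (or back into) httpd's web root, so the pod's readiness probe (evidently an HTTP check against the served index) starts failing or passing, the pod flips to Ready=false or Ready=true, and the StatefulSet controller halts ordered scaling while any pod is unready. A minimal sketch of that toggle, assuming kubectl is on PATH and reusing the namespace and pod names from this run; the helper name setReady is illustrative and not part of the e2e framework:

// Hypothetical helper (not the e2e framework's API): flip a stateful pod's
// readiness by moving the file behind httpd's readiness probe, mirroring the
// `kubectl exec ... mv` commands recorded above.
package main

import (
	"fmt"
	"os/exec"
)

func setReady(ns, pod string, ready bool) error {
	// Restoring index.html lets the readiness probe pass again.
	cmd := "mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true"
	if !ready {
		// Hiding index.html makes the probe fail, so the pod goes Ready=false.
		cmd = "mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true"
	}
	out, err := exec.Command("kubectl", "exec", "-n", ns, pod,
		"--", "/bin/sh", "-x", "-c", cmd).CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	// With ss-0 unready, the controller will not continue ordered
	// scale-up or scale-down past the unhealthy pod, which is exactly
	// the behavior this test verifies.
	if err := setReady("statefulset-809", "ss-0", false); err != nil {
		fmt.Println("kubectl exec failed:", err)
	}
}

Calling setReady("statefulset-809", "ss-0", false) reproduces the Ready=false state the log waits on before verifying that ss does not scale past its current replica count.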
• [SLOW TEST:82.517 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":288,"completed":246,"skipped":3952,"failed":0} SSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:57:07.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 30 00:57:07.957: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 30 00:57:12.964: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 30 00:57:12.964: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 30 00:57:14.968: INFO: Creating deployment "test-rollover-deployment" May 30 00:57:15.007: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 30 00:57:17.196: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 30 00:57:17.203: INFO: Ensure that both replica sets have 1 created replica May 30 00:57:17.209: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 30 00:57:17.217: INFO: Updating deployment test-rollover-deployment May 30 00:57:17.217: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 30 00:57:19.253: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 30 00:57:19.259: INFO: Make sure deployment "test-rollover-deployment" is complete May 30 00:57:19.265: INFO: all replica sets need to contain the pod-template-hash label May 30 00:57:19.265: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726397035, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726397035, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726397037, 
loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726397035, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 30 00:57:21.274: INFO: all replica sets need to contain the pod-template-hash label May 30 00:57:21.274: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726397035, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726397035, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726397040, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726397035, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 30 00:57:23.274: INFO: all replica sets need to contain the pod-template-hash label May 30 00:57:23.274: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726397035, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726397035, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726397040, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726397035, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 30 00:57:25.272: INFO: all replica sets need to contain the pod-template-hash label May 30 00:57:25.272: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726397035, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726397035, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726397040, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726397035, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 30 00:57:27.274: INFO: all replica sets need to contain the pod-template-hash label May 30 00:57:27.274: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726397035, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726397035, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726397040, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726397035, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 30 00:57:29.275: INFO: all replica sets need to contain the pod-template-hash label May 30 00:57:29.275: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726397035, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726397035, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726397040, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726397035, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 30 00:57:31.566: INFO: May 30 00:57:31.566: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 30 00:57:31.574: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-3134 /apis/apps/v1/namespaces/deployment-3134/deployments/test-rollover-deployment 49f78bab-7705-42f2-a4f6-27974c4d1e35 8754163 2 2020-05-30 00:57:14 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-30 00:57:17 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-30 00:57:31 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003f86098 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-30 00:57:15 +0000 UTC,LastTransitionTime:2020-05-30 00:57:15 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-7c4fd9c879" has successfully progressed.,LastUpdateTime:2020-05-30 00:57:31 +0000 UTC,LastTransitionTime:2020-05-30 00:57:15 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 30 00:57:31.577: INFO: New ReplicaSet "test-rollover-deployment-7c4fd9c879" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-7c4fd9c879 deployment-3134 /apis/apps/v1/namespaces/deployment-3134/replicasets/test-rollover-deployment-7c4fd9c879 9e668ec2-16bd-4fbe-a6b3-8316552f25bc 8754152 2 2020-05-30 00:57:17 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 49f78bab-7705-42f2-a4f6-27974c4d1e35 0xc003f86817 0xc003f86818}] [] [{kube-controller-manager Update apps/v1 2020-05-30 00:57:31 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49f78bab-7705-42f2-a4f6-27974c4d1e35\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 7c4fd9c879,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003f868d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 30 00:57:31.577: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 30 00:57:31.577: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-3134 /apis/apps/v1/namespaces/deployment-3134/replicasets/test-rollover-controller 5e598d20-055e-4c4c-8a11-d545b38d2401 8754162 2 2020-05-30 00:57:07 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 49f78bab-7705-42f2-a4f6-27974c4d1e35 0xc003f8656f 0xc003f86580}] [] [{e2e.test Update apps/v1 2020-05-30 00:57:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-30 00:57:31 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49f78bab-7705-42f2-a4f6-27974c4d1e35\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003f86648 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 30 00:57:31.577: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5 deployment-3134 /apis/apps/v1/namespaces/deployment-3134/replicasets/test-rollover-deployment-5686c4cfd5 cbd7d7fe-d33b-481a-af88-f568e5c49dbd 8754103 2 2020-05-30 00:57:15 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 49f78bab-7705-42f2-a4f6-27974c4d1e35 0xc003f866d7 0xc003f866d8}] [] [{kube-controller-manager Update apps/v1 2020-05-30 00:57:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49f78bab-7705-42f2-a4f6-27974c4d1e35\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003f86778 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 30 00:57:31.581: INFO: Pod "test-rollover-deployment-7c4fd9c879-7kttx" is available: &Pod{ObjectMeta:{test-rollover-deployment-7c4fd9c879-7kttx test-rollover-deployment-7c4fd9c879- deployment-3134 /api/v1/namespaces/deployment-3134/pods/test-rollover-deployment-7c4fd9c879-7kttx 7ee7c1ec-3f70-43a5-ac8f-a27aff45cc60 8754119 0 2020-05-30 00:57:17 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [{apps/v1 ReplicaSet test-rollover-deployment-7c4fd9c879 9e668ec2-16bd-4fbe-a6b3-8316552f25bc 0xc003fdd017 0xc003fdd018}] [] [{kube-controller-manager Update v1 2020-05-30 00:57:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9e668ec2-16bd-4fbe-a6b3-8316552f25bc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-30 00:57:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.215\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mgmzf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mgmzf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mgmzf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:57:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-30 00:57:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:57:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-30 00:57:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.215,StartTime:2020-05-30 00:57:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-30 00:57:20 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://19a787374c11e9c458d4fe296535e7dbb05229ecd8c1b1cbf802060b3102c346,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.215,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:57:31.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3134" for this suite. • [SLOW TEST:23.748 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":288,"completed":247,"skipped":3958,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:57:31.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7270.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7270.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7270.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7270.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7270.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7270.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7270.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7270.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7270.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7270.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7270.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 132.45.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.45.132_udp@PTR;check="$$(dig +tcp +noall +answer +search 132.45.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.45.132_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7270.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7270.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7270.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7270.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7270.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7270.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7270.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7270.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7270.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7270.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7270.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 132.45.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.45.132_udp@PTR;check="$$(dig +tcp +noall +answer +search 132.45.100.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.100.45.132_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 30 00:57:38.192: INFO: Unable to read wheezy_udp@dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:38.215: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:38.221: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:38.226: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:38.320: INFO: Unable to read jessie_udp@dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:38.322: INFO: Unable to read jessie_tcp@dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:38.325: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:38.327: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:38.352: INFO: Lookups using dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd failed for: [wheezy_udp@dns-test-service.dns-7270.svc.cluster.local wheezy_tcp@dns-test-service.dns-7270.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local jessie_udp@dns-test-service.dns-7270.svc.cluster.local jessie_tcp@dns-test-service.dns-7270.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local] May 30 00:57:43.358: INFO: Unable to read wheezy_udp@dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:43.362: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods 
dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:43.366: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:43.369: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:43.391: INFO: Unable to read jessie_udp@dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:43.394: INFO: Unable to read jessie_tcp@dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:43.397: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:43.401: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:43.422: INFO: Lookups using dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd failed for: [wheezy_udp@dns-test-service.dns-7270.svc.cluster.local wheezy_tcp@dns-test-service.dns-7270.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local jessie_udp@dns-test-service.dns-7270.svc.cluster.local jessie_tcp@dns-test-service.dns-7270.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local] May 30 00:57:48.356: INFO: Unable to read wheezy_udp@dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:48.358: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:48.361: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:48.365: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:48.384: INFO: Unable to read jessie_udp@dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the 
server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:48.387: INFO: Unable to read jessie_tcp@dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:48.389: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:48.392: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:48.465: INFO: Lookups using dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd failed for: [wheezy_udp@dns-test-service.dns-7270.svc.cluster.local wheezy_tcp@dns-test-service.dns-7270.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local jessie_udp@dns-test-service.dns-7270.svc.cluster.local jessie_tcp@dns-test-service.dns-7270.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local] May 30 00:57:53.358: INFO: Unable to read wheezy_udp@dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:53.362: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:53.365: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:53.368: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:53.391: INFO: Unable to read jessie_udp@dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:53.394: INFO: Unable to read jessie_tcp@dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:53.396: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:53.398: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local from pod 
dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:53.413: INFO: Lookups using dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd failed for: [wheezy_udp@dns-test-service.dns-7270.svc.cluster.local wheezy_tcp@dns-test-service.dns-7270.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local jessie_udp@dns-test-service.dns-7270.svc.cluster.local jessie_tcp@dns-test-service.dns-7270.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local] May 30 00:57:58.358: INFO: Unable to read wheezy_udp@dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:58.362: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:58.366: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:58.369: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:58.392: INFO: Unable to read jessie_udp@dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:58.395: INFO: Unable to read jessie_tcp@dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:58.398: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:58.402: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:57:58.421: INFO: Lookups using dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd failed for: [wheezy_udp@dns-test-service.dns-7270.svc.cluster.local wheezy_tcp@dns-test-service.dns-7270.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local jessie_udp@dns-test-service.dns-7270.svc.cluster.local jessie_tcp@dns-test-service.dns-7270.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local] May 30 
00:58:03.358: INFO: Unable to read wheezy_udp@dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:58:03.363: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:58:03.367: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:58:03.370: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:58:03.387: INFO: Unable to read jessie_udp@dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:58:03.390: INFO: Unable to read jessie_tcp@dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:58:03.392: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:58:03.395: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local from pod dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd: the server could not find the requested resource (get pods dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd) May 30 00:58:03.412: INFO: Lookups using dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd failed for: [wheezy_udp@dns-test-service.dns-7270.svc.cluster.local wheezy_tcp@dns-test-service.dns-7270.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local jessie_udp@dns-test-service.dns-7270.svc.cluster.local jessie_tcp@dns-test-service.dns-7270.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7270.svc.cluster.local] May 30 00:58:08.420: INFO: DNS probes using dns-7270/dns-test-f02b764a-6477-4408-87f5-aa49cc3bb0cd succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:58:09.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7270" for this suite. 
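The repeated "Unable to read ... the server could not find the requested resource" entries above are the framework polling the probe pod's /results files, one round roughly every five seconds; each round retries every expected record until the in-pod dig loop has written its OK marker, which is why the run ends with "DNS probes ... succeeded" at 00:58:08. A minimal stand-alone sketch of that in-pod check follows; SERVICE and NAMESPACE are hypothetical placeholders, not names from this run:

#!/bin/sh
# Probe one service A record over UDP and TCP, the same pattern the
# wheezy/jessie scripts above use. SERVICE and NAMESPACE are placeholders.
SERVICE=my-service
NAMESPACE=my-namespace
FQDN="${SERVICE}.${NAMESPACE}.svc.cluster.local"
for transport in +notcp +tcp; do
  # +noall +answer prints only the answer section; a non-empty result
  # means the name resolved over this transport.
  answer="$(dig "$transport" +noall +answer +search "$FQDN" A)"
  if [ -n "$answer" ]; then
    echo "OK   $transport $FQDN"
  else
    echo "FAIL $transport $FQDN" >&2
  fi
done

The suite's own scripts additionally probe SRV (_http._tcp) and PTR records, loop up to 600 times, and write one marker file per lookup under /results.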
• [SLOW TEST:37.658 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":288,"completed":248,"skipped":3974,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:58:09.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 30 00:58:09.350: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-828' May 30 00:58:09.696: INFO: stderr: "" May 30 00:58:09.696: INFO: stdout: "replicationcontroller/agnhost-master created\n" May 30 00:58:09.696: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-828' May 30 00:58:10.019: INFO: stderr: "" May 30 00:58:10.019: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 30 00:58:11.317: INFO: Selector matched 1 pods for map[app:agnhost] May 30 00:58:11.318: INFO: Found 0 / 1 May 30 00:58:12.023: INFO: Selector matched 1 pods for map[app:agnhost] May 30 00:58:12.023: INFO: Found 0 / 1 May 30 00:58:13.056: INFO: Selector matched 1 pods for map[app:agnhost] May 30 00:58:13.056: INFO: Found 1 / 1 May 30 00:58:13.056: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 30 00:58:13.059: INFO: Selector matched 1 pods for map[app:agnhost] May 30 00:58:13.060: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 30 00:58:13.060: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe pod agnhost-master-qhdsk --namespace=kubectl-828' May 30 00:58:13.188: INFO: stderr: "" May 30 00:58:13.188: INFO: stdout: "Name: agnhost-master-qhdsk\nNamespace: kubectl-828\nPriority: 0\nNode: latest-worker/172.17.0.13\nStart Time: Sat, 30 May 2020 00:58:09 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.236\nIPs:\n IP: 10.244.1.236\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://5ea816800db338f88b06e2865b79d96a109a5a58b72662ea74676f5b474e56f0\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sat, 30 May 2020 00:58:12 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-nhdxt (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-nhdxt:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-nhdxt\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-828/agnhost-master-qhdsk to latest-worker\n Normal Pulled 2s kubelet, latest-worker Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\" already present on machine\n Normal Created 1s kubelet, latest-worker Created container agnhost-master\n Normal Started 1s kubelet, latest-worker Started container agnhost-master\n" May 30 00:58:13.188: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-828' May 30 00:58:13.309: INFO: stderr: "" May 30 00:58:13.309: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-828\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-qhdsk\n" May 30 00:58:13.309: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-828' May 30 00:58:13.445: INFO: stderr: "" May 30 00:58:13.445: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-828\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.102.184.216\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.236:6379\nSession Affinity: None\nEvents: \n" May 30 00:58:13.449: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe node 
latest-control-plane' May 30 00:58:13.614: INFO: stderr: "" May 30 00:58:13.614: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 29 Apr 2020 09:53:29 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Sat, 30 May 2020 00:58:09 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sat, 30 May 2020 00:53:41 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 30 May 2020 00:53:41 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 30 May 2020 00:53:41 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 30 May 2020 00:53:41 +0000 Wed, 29 Apr 2020 09:54:06 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3939cf129c9d4d6e85e611ab996d9137\n System UUID: 2573ae1d-4849-412e-9a34-432f95556990\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.18.2\n Kube-Proxy Version: v1.18.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-66bff467f8-8n5vh 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 30d\n kube-system coredns-66bff467f8-qr7l5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 30d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 30d\n kube-system kindnet-8x7pf 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 30d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 30d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 30d\n kube-system kube-proxy-h8mhz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 30d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 30d\n local-path-storage local-path-provisioner-bd4bb6b75-bmf2h 0 (0%) 0 (0%) 0 (0%) 0 (0%) 30d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" May 30 00:58:13.615: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config describe namespace kubectl-828' May 30 00:58:13.718: INFO: stderr: "" May 30 00:58:13.718: INFO: stdout: "Name: kubectl-828\nLabels: e2e-framework=kubectl\n e2e-run=3941a5c5-b09d-49c3-a9d9-6b626e530a9f\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:58:13.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-828" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":288,"completed":249,"skipped":4012,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:58:13.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:58:20.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1351" for this suite. STEP: Destroying namespace "nsdeletetest-1500" for this suite. May 30 00:58:20.150: INFO: Namespace nsdeletetest-1500 was already deleted STEP: Destroying namespace "nsdeletetest-1221" for this suite. • [SLOW TEST:6.428 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":288,"completed":250,"skipped":4039,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:58:20.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:58:27.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8740" for this suite. • [SLOW TEST:7.077 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":288,"completed":251,"skipped":4051,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:58:27.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 30 00:58:28.271: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 30 00:58:30.281: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726397108, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726397108, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726397108, loc:(*time.Location)(0x7c342a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726397108, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 30 00:58:33.347: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:58:33.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1413" for this suite. STEP: Destroying namespace "webhook-1413-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.834 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":288,"completed":252,"skipped":4061,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:58:34.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs May 30 00:58:34.206: INFO: Waiting up to 5m0s for pod "pod-15d43fed-6fbd-4896-8f6a-73f53eea64c8" in namespace "emptydir-5006" to be "Succeeded or Failed" May 30 00:58:34.228: INFO: Pod "pod-15d43fed-6fbd-4896-8f6a-73f53eea64c8": Phase="Pending", Reason="", readiness=false. Elapsed: 22.235335ms May 30 00:58:36.232: INFO: Pod "pod-15d43fed-6fbd-4896-8f6a-73f53eea64c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026292084s May 30 00:58:38.236: INFO: Pod "pod-15d43fed-6fbd-4896-8f6a-73f53eea64c8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.03058183s STEP: Saw pod success May 30 00:58:38.236: INFO: Pod "pod-15d43fed-6fbd-4896-8f6a-73f53eea64c8" satisfied condition "Succeeded or Failed" May 30 00:58:38.240: INFO: Trying to get logs from node latest-worker2 pod pod-15d43fed-6fbd-4896-8f6a-73f53eea64c8 container test-container: STEP: delete the pod May 30 00:58:38.280: INFO: Waiting for pod pod-15d43fed-6fbd-4896-8f6a-73f53eea64c8 to disappear May 30 00:58:38.291: INFO: Pod pod-15d43fed-6fbd-4896-8f6a-73f53eea64c8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:58:38.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5006" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":253,"skipped":4076,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:58:38.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command May 30 00:58:38.684: INFO: Waiting up to 5m0s for pod "client-containers-290e8ad7-9907-433c-bd38-c7350239568a" in namespace "containers-1385" to be "Succeeded or Failed" May 30 00:58:38.717: INFO: Pod "client-containers-290e8ad7-9907-433c-bd38-c7350239568a": Phase="Pending", Reason="", readiness=false. Elapsed: 32.65804ms May 30 00:58:40.722: INFO: Pod "client-containers-290e8ad7-9907-433c-bd38-c7350239568a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037260418s May 30 00:58:42.726: INFO: Pod "client-containers-290e8ad7-9907-433c-bd38-c7350239568a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041911608s STEP: Saw pod success May 30 00:58:42.726: INFO: Pod "client-containers-290e8ad7-9907-433c-bd38-c7350239568a" satisfied condition "Succeeded or Failed" May 30 00:58:42.730: INFO: Trying to get logs from node latest-worker pod client-containers-290e8ad7-9907-433c-bd38-c7350239568a container test-container: STEP: delete the pod May 30 00:58:42.779: INFO: Waiting for pod client-containers-290e8ad7-9907-433c-bd38-c7350239568a to disappear May 30 00:58:42.792: INFO: Pod client-containers-290e8ad7-9907-433c-bd38-c7350239568a no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:58:42.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1385" for this suite. 
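The override-command test above boils down to: create a pod whose container command replaces the image's ENTRYPOINT, wait for the pod to reach "Succeeded or Failed", then fetch the container log. A minimal sketch under assumed names; the pod name and image here are hypothetical, not the ones the suite uses:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: entrypoint-override-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["echo"]                     # replaces the image ENTRYPOINT
    args: ["overridden entrypoint ran"]
EOF
# Poll the pod phase the way the framework's wait loop does.
until phase="$(kubectl get pod entrypoint-override-demo -o jsonpath='{.status.phase}')" && \
      { [ "$phase" = "Succeeded" ] || [ "$phase" = "Failed" ]; }; do
  sleep 2
done
kubectl logs entrypoint-override-demo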
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":288,"completed":254,"skipped":4117,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:58:42.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 30 00:58:42.900: INFO: Waiting up to 5m0s for pod "downwardapi-volume-941c9da4-9980-45f8-831c-e2946c22d84a" in namespace "projected-5930" to be "Succeeded or Failed" May 30 00:58:42.919: INFO: Pod "downwardapi-volume-941c9da4-9980-45f8-831c-e2946c22d84a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.829353ms May 30 00:58:44.939: INFO: Pod "downwardapi-volume-941c9da4-9980-45f8-831c-e2946c22d84a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038645425s May 30 00:58:46.943: INFO: Pod "downwardapi-volume-941c9da4-9980-45f8-831c-e2946c22d84a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043041209s STEP: Saw pod success May 30 00:58:46.943: INFO: Pod "downwardapi-volume-941c9da4-9980-45f8-831c-e2946c22d84a" satisfied condition "Succeeded or Failed" May 30 00:58:46.947: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-941c9da4-9980-45f8-831c-e2946c22d84a container client-container: STEP: delete the pod May 30 00:58:47.035: INFO: Waiting for pod downwardapi-volume-941c9da4-9980-45f8-831c-e2946c22d84a to disappear May 30 00:58:47.037: INFO: Pod downwardapi-volume-941c9da4-9980-45f8-831c-e2946c22d84a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:58:47.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5930" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":255,"skipped":4119,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:58:47.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8798.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-8798.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8798.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8798.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8798.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-8798.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8798.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-8798.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8798.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8798.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-8798.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8798.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-8798.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8798.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-8798.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8798.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-8798.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8798.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 30 00:58:53.216: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8798.svc.cluster.local from pod dns-8798/dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309: the server could not find the requested resource (get pods dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309) May 30 00:58:53.220: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8798.svc.cluster.local from pod dns-8798/dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309: the server could not find the requested resource (get pods dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309) May 30 00:58:53.222: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8798.svc.cluster.local from pod dns-8798/dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309: the server could not find the requested resource (get pods dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309) May 30 00:58:53.225: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8798.svc.cluster.local from pod dns-8798/dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309: the server could not find the requested resource (get pods dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309) May 30 00:58:53.232: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8798.svc.cluster.local from pod dns-8798/dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309: the server could not find the requested resource (get pods dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309) May 30 00:58:53.234: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8798.svc.cluster.local from pod dns-8798/dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309: the server could not find the requested resource (get pods dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309) May 30 00:58:53.236: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8798.svc.cluster.local from pod dns-8798/dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309: the server could not find the requested resource (get pods dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309) May 30 00:58:53.238: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8798.svc.cluster.local from pod dns-8798/dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309: the server could not find the requested resource (get pods dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309) May 30 00:58:53.243: INFO: Lookups using dns-8798/dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8798.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8798.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8798.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8798.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8798.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8798.svc.cluster.local jessie_udp@dns-test-service-2.dns-8798.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8798.svc.cluster.local] May 30 00:58:58.258: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8798.svc.cluster.local from pod dns-8798/dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309: the server could not find the requested resource (get pods 
dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309) May 30 00:58:58.261: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8798.svc.cluster.local from pod dns-8798/dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309: the server could not find the requested resource (get pods dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309) May 30 00:58:58.275: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8798.svc.cluster.local from pod dns-8798/dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309: the server could not find the requested resource (get pods dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309) May 30 00:58:58.277: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8798.svc.cluster.local from pod dns-8798/dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309: the server could not find the requested resource (get pods dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309) May 30 00:58:58.283: INFO: Lookups using dns-8798/dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309 failed for: [wheezy_udp@dns-test-service-2.dns-8798.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8798.svc.cluster.local jessie_udp@dns-test-service-2.dns-8798.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8798.svc.cluster.local] May 30 00:59:03.256: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8798.svc.cluster.local from pod dns-8798/dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309: the server could not find the requested resource (get pods dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309) May 30 00:59:03.259: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8798.svc.cluster.local from pod dns-8798/dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309: the server could not find the requested resource (get pods dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309) May 30 00:59:03.273: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8798.svc.cluster.local from pod dns-8798/dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309: the server could not find the requested resource (get pods dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309) May 30 00:59:03.276: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8798.svc.cluster.local from pod dns-8798/dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309: the server could not find the requested resource (get pods dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309) May 30 00:59:03.283: INFO: Lookups using dns-8798/dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309 failed for: [wheezy_udp@dns-test-service-2.dns-8798.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8798.svc.cluster.local jessie_udp@dns-test-service-2.dns-8798.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8798.svc.cluster.local] May 30 00:59:08.253: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8798.svc.cluster.local from pod dns-8798/dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309: the server could not find the requested resource (get pods dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309) May 30 00:59:08.256: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8798.svc.cluster.local from pod dns-8798/dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309: the server could not find the requested resource (get pods dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309) May 30 00:59:08.273: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8798.svc.cluster.local from pod dns-8798/dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309: the server could not find the requested resource (get pods dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309) May 30 00:59:08.276: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8798.svc.cluster.local from pod 
dns-8798/dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309: the server could not find the requested resource (get pods dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309) May 30 00:59:08.282: INFO: Lookups using dns-8798/dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309 failed for: [wheezy_udp@dns-test-service-2.dns-8798.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8798.svc.cluster.local jessie_udp@dns-test-service-2.dns-8798.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8798.svc.cluster.local] May 30 00:59:13.255: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8798.svc.cluster.local from pod dns-8798/dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309: the server could not find the requested resource (get pods dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309) May 30 00:59:13.258: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8798.svc.cluster.local from pod dns-8798/dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309: the server could not find the requested resource (get pods dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309) May 30 00:59:13.272: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8798.svc.cluster.local from pod dns-8798/dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309: the server could not find the requested resource (get pods dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309) May 30 00:59:13.275: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8798.svc.cluster.local from pod dns-8798/dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309: the server could not find the requested resource (get pods dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309) May 30 00:59:13.282: INFO: Lookups using dns-8798/dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309 failed for: [wheezy_udp@dns-test-service-2.dns-8798.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8798.svc.cluster.local jessie_udp@dns-test-service-2.dns-8798.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8798.svc.cluster.local] May 30 00:59:18.257: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8798.svc.cluster.local from pod dns-8798/dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309: the server could not find the requested resource (get pods dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309) May 30 00:59:18.260: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8798.svc.cluster.local from pod dns-8798/dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309: the server could not find the requested resource (get pods dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309) May 30 00:59:18.272: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8798.svc.cluster.local from pod dns-8798/dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309: the server could not find the requested resource (get pods dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309) May 30 00:59:18.275: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8798.svc.cluster.local from pod dns-8798/dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309: the server could not find the requested resource (get pods dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309) May 30 00:59:18.280: INFO: Lookups using dns-8798/dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309 failed for: [wheezy_udp@dns-test-service-2.dns-8798.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8798.svc.cluster.local jessie_udp@dns-test-service-2.dns-8798.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8798.svc.cluster.local] May 30 00:59:23.277: INFO: DNS probes using dns-8798/dns-test-ca8964ef-b59b-4ab4-8740-468e590ad309 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:59:23.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8798" for this suite. • [SLOW TEST:36.979 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":288,"completed":256,"skipped":4155,"failed":0} SSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:59:24.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-586 STEP: creating service affinity-nodeport-transition in namespace services-586 STEP: creating replication controller affinity-nodeport-transition in namespace services-586 I0530 00:59:24.293711 7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-586, replica count: 3 I0530 00:59:27.344201 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0530 00:59:30.344476 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 30 00:59:30.354: INFO: Creating new exec pod May 30 00:59:35.391: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-586 execpod-affinityzpddk -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' May 30 00:59:35.602: INFO: stderr: "I0530 00:59:35.533300 3582 log.go:172] (0xc000608d10) (0xc000b36320) Create stream\nI0530 00:59:35.533362 3582 log.go:172] (0xc000608d10) (0xc000b36320) Stream added, broadcasting: 1\nI0530 00:59:35.537582 3582 log.go:172] (0xc000608d10) Reply frame received for 1\nI0530 00:59:35.537628 3582 log.go:172] (0xc000608d10) (0xc00083a640) Create stream\nI0530 00:59:35.537653 3582 log.go:172] (0xc000608d10) (0xc00083a640) Stream added, broadcasting: 3\nI0530 00:59:35.538571 3582 log.go:172] (0xc000608d10) Reply frame received for 3\nI0530 00:59:35.538601 3582 log.go:172] (0xc000608d10) (0xc0006be5a0) Create stream\nI0530 00:59:35.538610 3582 log.go:172] (0xc000608d10) (0xc0006be5a0) Stream added, broadcasting: 5\nI0530 00:59:35.539509 3582 log.go:172] (0xc000608d10) Reply frame received for 5\nI0530 00:59:35.591564 3582 log.go:172] 
(0xc000608d10) Data frame received for 5\nI0530 00:59:35.591594 3582 log.go:172] (0xc0006be5a0) (5) Data frame handling\nI0530 00:59:35.591615 3582 log.go:172] (0xc0006be5a0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nI0530 00:59:35.592113 3582 log.go:172] (0xc000608d10) Data frame received for 5\nI0530 00:59:35.592160 3582 log.go:172] (0xc0006be5a0) (5) Data frame handling\nI0530 00:59:35.592196 3582 log.go:172] (0xc0006be5a0) (5) Data frame sent\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0530 00:59:35.592685 3582 log.go:172] (0xc000608d10) Data frame received for 5\nI0530 00:59:35.592720 3582 log.go:172] (0xc0006be5a0) (5) Data frame handling\nI0530 00:59:35.592849 3582 log.go:172] (0xc000608d10) Data frame received for 3\nI0530 00:59:35.592879 3582 log.go:172] (0xc00083a640) (3) Data frame handling\nI0530 00:59:35.594942 3582 log.go:172] (0xc000608d10) Data frame received for 1\nI0530 00:59:35.594988 3582 log.go:172] (0xc000b36320) (1) Data frame handling\nI0530 00:59:35.595027 3582 log.go:172] (0xc000b36320) (1) Data frame sent\nI0530 00:59:35.595078 3582 log.go:172] (0xc000608d10) (0xc000b36320) Stream removed, broadcasting: 1\nI0530 00:59:35.595123 3582 log.go:172] (0xc000608d10) Go away received\nI0530 00:59:35.595498 3582 log.go:172] (0xc000608d10) (0xc000b36320) Stream removed, broadcasting: 1\nI0530 00:59:35.595530 3582 log.go:172] (0xc000608d10) (0xc00083a640) Stream removed, broadcasting: 3\nI0530 00:59:35.595545 3582 log.go:172] (0xc000608d10) (0xc0006be5a0) Stream removed, broadcasting: 5\n" May 30 00:59:35.602: INFO: stdout: "" May 30 00:59:35.603: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-586 execpod-affinityzpddk -- /bin/sh -x -c nc -zv -t -w 2 10.98.95.58 80' May 30 00:59:35.831: INFO: stderr: "I0530 00:59:35.735407 3602 log.go:172] (0xc000ad68f0) (0xc0002da5a0) Create stream\nI0530 00:59:35.735471 3602 log.go:172] (0xc000ad68f0) (0xc0002da5a0) Stream added, broadcasting: 1\nI0530 00:59:35.738193 3602 log.go:172] (0xc000ad68f0) Reply frame received for 1\nI0530 00:59:35.738235 3602 log.go:172] (0xc000ad68f0) (0xc0000dcf00) Create stream\nI0530 00:59:35.738247 3602 log.go:172] (0xc000ad68f0) (0xc0000dcf00) Stream added, broadcasting: 3\nI0530 00:59:35.739161 3602 log.go:172] (0xc000ad68f0) Reply frame received for 3\nI0530 00:59:35.739192 3602 log.go:172] (0xc000ad68f0) (0xc0001397c0) Create stream\nI0530 00:59:35.739200 3602 log.go:172] (0xc000ad68f0) (0xc0001397c0) Stream added, broadcasting: 5\nI0530 00:59:35.740124 3602 log.go:172] (0xc000ad68f0) Reply frame received for 5\nI0530 00:59:35.818070 3602 log.go:172] (0xc000ad68f0) Data frame received for 3\nI0530 00:59:35.818130 3602 log.go:172] (0xc0000dcf00) (3) Data frame handling\nI0530 00:59:35.818162 3602 log.go:172] (0xc000ad68f0) Data frame received for 5\nI0530 00:59:35.818177 3602 log.go:172] (0xc0001397c0) (5) Data frame handling\nI0530 00:59:35.818188 3602 log.go:172] (0xc0001397c0) (5) Data frame sent\n+ nc -zv -t -w 2 10.98.95.58 80\nConnection to 10.98.95.58 80 port [tcp/http] succeeded!\nI0530 00:59:35.818328 3602 log.go:172] (0xc000ad68f0) Data frame received for 5\nI0530 00:59:35.818363 3602 log.go:172] (0xc0001397c0) (5) Data frame handling\nI0530 00:59:35.821957 3602 log.go:172] (0xc000ad68f0) Data frame received for 1\nI0530 00:59:35.821995 3602 log.go:172] (0xc0002da5a0) (1) Data frame handling\nI0530 00:59:35.822017 3602 log.go:172] (0xc0002da5a0) (1) 
Data frame sent\nI0530 00:59:35.823719 3602 log.go:172] (0xc000ad68f0) (0xc0002da5a0) Stream removed, broadcasting: 1\nI0530 00:59:35.824156 3602 log.go:172] (0xc000ad68f0) (0xc0002da5a0) Stream removed, broadcasting: 1\nI0530 00:59:35.824179 3602 log.go:172] (0xc000ad68f0) (0xc0000dcf00) Stream removed, broadcasting: 3\nI0530 00:59:35.824371 3602 log.go:172] (0xc000ad68f0) (0xc0001397c0) Stream removed, broadcasting: 5\n" May 30 00:59:35.831: INFO: stdout: "" May 30 00:59:35.831: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-586 execpod-affinityzpddk -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31375' May 30 00:59:36.052: INFO: stderr: "I0530 00:59:35.972362 3622 log.go:172] (0xc000a78a50) (0xc00062af00) Create stream\nI0530 00:59:35.972428 3622 log.go:172] (0xc000a78a50) (0xc00062af00) Stream added, broadcasting: 1\nI0530 00:59:35.978106 3622 log.go:172] (0xc000a78a50) Reply frame received for 1\nI0530 00:59:35.978155 3622 log.go:172] (0xc000a78a50) (0xc0005cb040) Create stream\nI0530 00:59:35.978169 3622 log.go:172] (0xc000a78a50) (0xc0005cb040) Stream added, broadcasting: 3\nI0530 00:59:35.979250 3622 log.go:172] (0xc000a78a50) Reply frame received for 3\nI0530 00:59:35.979431 3622 log.go:172] (0xc000a78a50) (0xc000520280) Create stream\nI0530 00:59:35.979447 3622 log.go:172] (0xc000a78a50) (0xc000520280) Stream added, broadcasting: 5\nI0530 00:59:35.980427 3622 log.go:172] (0xc000a78a50) Reply frame received for 5\nI0530 00:59:36.045053 3622 log.go:172] (0xc000a78a50) Data frame received for 5\nI0530 00:59:36.045095 3622 log.go:172] (0xc000520280) (5) Data frame handling\nI0530 00:59:36.045338 3622 log.go:172] (0xc000520280) (5) Data frame sent\nI0530 00:59:36.045382 3622 log.go:172] (0xc000a78a50) Data frame received for 5\n+ nc -zv -t -w 2 172.17.0.13 31375\nI0530 00:59:36.045401 3622 log.go:172] (0xc000520280) (5) Data frame handling\nI0530 00:59:36.045480 3622 log.go:172] (0xc000520280) (5) Data frame sent\nI0530 00:59:36.045504 3622 log.go:172] (0xc000a78a50) Data frame received for 5\nI0530 00:59:36.045523 3622 log.go:172] (0xc000520280) (5) Data frame handling\nConnection to 172.17.0.13 31375 port [tcp/31375] succeeded!\nI0530 00:59:36.045753 3622 log.go:172] (0xc000a78a50) Data frame received for 3\nI0530 00:59:36.045771 3622 log.go:172] (0xc0005cb040) (3) Data frame handling\nI0530 00:59:36.047146 3622 log.go:172] (0xc000a78a50) Data frame received for 1\nI0530 00:59:36.047174 3622 log.go:172] (0xc00062af00) (1) Data frame handling\nI0530 00:59:36.047185 3622 log.go:172] (0xc00062af00) (1) Data frame sent\nI0530 00:59:36.047196 3622 log.go:172] (0xc000a78a50) (0xc00062af00) Stream removed, broadcasting: 1\nI0530 00:59:36.047210 3622 log.go:172] (0xc000a78a50) Go away received\nI0530 00:59:36.047687 3622 log.go:172] (0xc000a78a50) (0xc00062af00) Stream removed, broadcasting: 1\nI0530 00:59:36.047713 3622 log.go:172] (0xc000a78a50) (0xc0005cb040) Stream removed, broadcasting: 3\nI0530 00:59:36.047725 3622 log.go:172] (0xc000a78a50) (0xc000520280) Stream removed, broadcasting: 5\n" May 30 00:59:36.052: INFO: stdout: "" May 30 00:59:36.052: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-586 execpod-affinityzpddk -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31375' May 30 00:59:36.294: INFO: stderr: "I0530 00:59:36.198506 3643 log.go:172] (0xc000b774a0) (0xc000862e60) Create stream\nI0530 00:59:36.198590 3643 
log.go:172] (0xc000b774a0) (0xc000862e60) Stream added, broadcasting: 1\nI0530 00:59:36.201863 3643 log.go:172] (0xc000b774a0) Reply frame received for 1\nI0530 00:59:36.202029 3643 log.go:172] (0xc000b774a0) (0xc000b661e0) Create stream\nI0530 00:59:36.202116 3643 log.go:172] (0xc000b774a0) (0xc000b661e0) Stream added, broadcasting: 3\nI0530 00:59:36.204093 3643 log.go:172] (0xc000b774a0) Reply frame received for 3\nI0530 00:59:36.204123 3643 log.go:172] (0xc000b774a0) (0xc00023a640) Create stream\nI0530 00:59:36.204132 3643 log.go:172] (0xc000b774a0) (0xc00023a640) Stream added, broadcasting: 5\nI0530 00:59:36.205411 3643 log.go:172] (0xc000b774a0) Reply frame received for 5\nI0530 00:59:36.285574 3643 log.go:172] (0xc000b774a0) Data frame received for 5\nI0530 00:59:36.285626 3643 log.go:172] (0xc00023a640) (5) Data frame handling\nI0530 00:59:36.285649 3643 log.go:172] (0xc00023a640) (5) Data frame sent\nI0530 00:59:36.285665 3643 log.go:172] (0xc000b774a0) Data frame received for 5\nI0530 00:59:36.285681 3643 log.go:172] (0xc00023a640) (5) Data frame handling\nI0530 00:59:36.285714 3643 log.go:172] (0xc000b774a0) Data frame received for 3\nI0530 00:59:36.285734 3643 log.go:172] (0xc000b661e0) (3) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31375\nConnection to 172.17.0.12 31375 port [tcp/31375] succeeded!\nI0530 00:59:36.286785 3643 log.go:172] (0xc000b774a0) Data frame received for 1\nI0530 00:59:36.286813 3643 log.go:172] (0xc000862e60) (1) Data frame handling\nI0530 00:59:36.286846 3643 log.go:172] (0xc000862e60) (1) Data frame sent\nI0530 00:59:36.286873 3643 log.go:172] (0xc000b774a0) (0xc000862e60) Stream removed, broadcasting: 1\nI0530 00:59:36.286901 3643 log.go:172] (0xc000b774a0) Go away received\nI0530 00:59:36.287348 3643 log.go:172] (0xc000b774a0) (0xc000862e60) Stream removed, broadcasting: 1\nI0530 00:59:36.287371 3643 log.go:172] (0xc000b774a0) (0xc000b661e0) Stream removed, broadcasting: 3\nI0530 00:59:36.287394 3643 log.go:172] (0xc000b774a0) (0xc00023a640) Stream removed, broadcasting: 5\n" May 30 00:59:36.294: INFO: stdout: "" May 30 00:59:36.301: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-586 execpod-affinityzpddk -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31375/ ; done' May 30 00:59:36.619: INFO: stderr: "I0530 00:59:36.459791 3666 log.go:172] (0xc000be7080) (0xc0001be820) Create stream\nI0530 00:59:36.459860 3666 log.go:172] (0xc000be7080) (0xc0001be820) Stream added, broadcasting: 1\nI0530 00:59:36.463270 3666 log.go:172] (0xc000be7080) Reply frame received for 1\nI0530 00:59:36.463334 3666 log.go:172] (0xc000be7080) (0xc00032c000) Create stream\nI0530 00:59:36.463364 3666 log.go:172] (0xc000be7080) (0xc00032c000) Stream added, broadcasting: 3\nI0530 00:59:36.464444 3666 log.go:172] (0xc000be7080) Reply frame received for 3\nI0530 00:59:36.464498 3666 log.go:172] (0xc000be7080) (0xc00032c780) Create stream\nI0530 00:59:36.464514 3666 log.go:172] (0xc000be7080) (0xc00032c780) Stream added, broadcasting: 5\nI0530 00:59:36.466372 3666 log.go:172] (0xc000be7080) Reply frame received for 5\nI0530 00:59:36.522489 3666 log.go:172] (0xc000be7080) Data frame received for 5\nI0530 00:59:36.522536 3666 log.go:172] (0xc00032c780) (5) Data frame handling\nI0530 00:59:36.522555 3666 log.go:172] (0xc00032c780) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31375/\nI0530 
00:59:36.522575 3666 log.go:172] (0xc000be7080) Data frame received for 3\nI0530 00:59:36.522586 3666 log.go:172] (0xc00032c000) (3) Data frame handling\nI0530 00:59:36.522607 3666 log.go:172] (0xc00032c000) (3) Data frame sent\nI0530 00:59:36.528643 3666 log.go:172] (0xc000be7080) Data frame received for 3\nI0530 00:59:36.528671 3666 log.go:172] (0xc00032c000) (3) Data frame handling\nI0530 00:59:36.528699 3666 log.go:172] (0xc00032c000) (3) Data frame sent\nI0530 00:59:36.529577 3666 log.go:172] (0xc000be7080) Data frame received for 5\nI0530 00:59:36.529597 3666 log.go:172] (0xc00032c780) (5) Data frame handling\nI0530 00:59:36.529612 3666 log.go:172] (0xc00032c780) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31375/\nI0530 00:59:36.529950 3666 log.go:172] (0xc000be7080) Data frame received for 3\nI0530 00:59:36.529978 3666 log.go:172] (0xc00032c000) (3) Data frame handling\nI0530 00:59:36.530001 3666 log.go:172] (0xc00032c000) (3) Data frame sent\nI0530 00:59:36.537339 3666 log.go:172] (0xc000be7080) Data frame received for 3\nI0530 00:59:36.537374 3666 log.go:172] (0xc00032c000) (3) Data frame handling\nI0530 00:59:36.537407 3666 log.go:172] (0xc00032c000) (3) Data frame sent\nI0530 00:59:36.538003 3666 log.go:172] (0xc000be7080) Data frame received for 3\nI0530 00:59:36.538040 3666 log.go:172] (0xc00032c000) (3) Data frame handling\nI0530 00:59:36.538053 3666 log.go:172] (0xc00032c000) (3) Data frame sent\nI0530 00:59:36.538072 3666 log.go:172] (0xc000be7080) Data frame received for 5\nI0530 00:59:36.538089 3666 log.go:172] (0xc00032c780) (5) Data frame handling\nI0530 00:59:36.538100 3666 log.go:172] (0xc00032c780) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31375/\nI0530 00:59:36.542983 3666 log.go:172] (0xc000be7080) Data frame received for 3\nI0530 00:59:36.543018 3666 log.go:172] (0xc00032c000) (3) Data frame handling\nI0530 00:59:36.543049 3666 log.go:172] (0xc00032c000) (3) Data frame sent\nI0530 00:59:36.543486 3666 log.go:172] (0xc000be7080) Data frame received for 3\nI0530 00:59:36.543516 3666 log.go:172] (0xc00032c000) (3) Data frame handling\nI0530 00:59:36.543532 3666 log.go:172] (0xc00032c000) (3) Data frame sent\nI0530 00:59:36.543555 3666 log.go:172] (0xc000be7080) Data frame received for 5\nI0530 00:59:36.543582 3666 log.go:172] (0xc00032c780) (5) Data frame handling\nI0530 00:59:36.543608 3666 log.go:172] (0xc00032c780) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31375/\nI0530 00:59:36.548808 3666 log.go:172] (0xc000be7080) Data frame received for 3\nI0530 00:59:36.548854 3666 log.go:172] (0xc00032c000) (3) Data frame handling\nI0530 00:59:36.548883 3666 log.go:172] (0xc00032c000) (3) Data frame sent\nI0530 00:59:36.549645 3666 log.go:172] (0xc000be7080) Data frame received for 3\nI0530 00:59:36.549666 3666 log.go:172] (0xc000be7080) Data frame received for 5\nI0530 00:59:36.549695 3666 log.go:172] (0xc00032c780) (5) Data frame handling\nI0530 00:59:36.549722 3666 log.go:172] (0xc00032c780) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31375/\nI0530 00:59:36.549749 3666 log.go:172] (0xc00032c000) (3) Data frame handling\nI0530 00:59:36.549765 3666 log.go:172] (0xc00032c000) (3) Data frame sent\nI0530 00:59:36.556147 3666 log.go:172] (0xc000be7080) Data frame received for 3\nI0530 00:59:36.556170 3666 log.go:172] (0xc00032c000) (3) Data frame handling\nI0530 00:59:36.556182 3666 log.go:172] (0xc00032c000) (3) Data frame 
sent\nI0530 00:59:36.556831 3666 log.go:172] (0xc000be7080) Data frame received for 3\nI0530 00:59:36.556859 3666 log.go:172] (0xc00032c000) (3) Data frame handling\nI0530 00:59:36.556875 3666 log.go:172] (0xc00032c000) (3) Data frame sent\nI0530 00:59:36.556910 3666 log.go:172] (0xc000be7080) Data frame received for 5\nI0530 00:59:36.556926 3666 log.go:172] (0xc00032c780) (5) Data frame handling\nI0530 00:59:36.556953 3666 log.go:172] (0xc00032c780) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31375/\nI0530 00:59:36.561003 3666 log.go:172] (0xc000be7080) Data frame received for 3\nI0530 00:59:36.561021 3666 log.go:172] (0xc00032c000) (3) Data frame handling\nI0530 00:59:36.561034 3666 log.go:172] (0xc00032c000) (3) Data frame sent\nI0530 00:59:36.562045 3666 log.go:172] (0xc000be7080) Data frame received for 3\nI0530 00:59:36.562065 3666 log.go:172] (0xc00032c000) (3) Data frame handling\nI0530 00:59:36.562076 3666 log.go:172] (0xc00032c000) (3) Data frame sent\nI0530 00:59:36.562090 3666 log.go:172] (0xc000be7080) Data frame received for 5\nI0530 00:59:36.562098 3666 log.go:172] (0xc00032c780) (5) Data frame handling\nI0530 00:59:36.562106 3666 log.go:172] (0xc00032c780) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31375/\nI0530 00:59:36.566306 3666 log.go:172] (0xc000be7080) Data frame received for 3\nI0530 00:59:36.566320 3666 log.go:172] (0xc00032c000) (3) Data frame handling\nI0530 00:59:36.566328 3666 log.go:172] (0xc00032c000) (3) Data frame sent\nI0530 00:59:36.566983 3666 log.go:172] (0xc000be7080) Data frame received for 5\nI0530 00:59:36.566993 3666 log.go:172] (0xc00032c780) (5) Data frame handling\nI0530 00:59:36.566999 3666 log.go:172] (0xc00032c780) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31375/\nI0530 00:59:36.567085 3666 log.go:172] (0xc000be7080) Data frame received for 3\nI0530 00:59:36.567117 3666 log.go:172] (0xc00032c000) (3) Data frame handling\nI0530 00:59:36.567137 3666 log.go:172] (0xc00032c000) (3) Data frame sent\nI0530 00:59:36.572702 3666 log.go:172] (0xc000be7080) Data frame received for 3\nI0530 00:59:36.572722 3666 log.go:172] (0xc00032c000) (3) Data frame handling\nI0530 00:59:36.572736 3666 log.go:172] (0xc00032c000) (3) Data frame sent\nI0530 00:59:36.573291 3666 log.go:172] (0xc000be7080) Data frame received for 5\nI0530 00:59:36.573318 3666 log.go:172] (0xc00032c780) (5) Data frame handling\nI0530 00:59:36.573335 3666 log.go:172] (0xc00032c780) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31375/\nI0530 00:59:36.573439 3666 log.go:172] (0xc000be7080) Data frame received for 3\nI0530 00:59:36.573455 3666 log.go:172] (0xc00032c000) (3) Data frame handling\nI0530 00:59:36.573469 3666 log.go:172] (0xc00032c000) (3) Data frame sent\nI0530 00:59:36.576844 3666 log.go:172] (0xc000be7080) Data frame received for 3\nI0530 00:59:36.576868 3666 log.go:172] (0xc00032c000) (3) Data frame handling\nI0530 00:59:36.576883 3666 log.go:172] (0xc00032c000) (3) Data frame sent\nI0530 00:59:36.577576 3666 log.go:172] (0xc000be7080) Data frame received for 3\nI0530 00:59:36.577597 3666 log.go:172] (0xc00032c000) (3) Data frame handling\nI0530 00:59:36.577604 3666 log.go:172] (0xc00032c000) (3) Data frame sent\nI0530 00:59:36.577617 3666 log.go:172] (0xc000be7080) Data frame received for 5\nI0530 00:59:36.577623 3666 log.go:172] (0xc00032c780) (5) Data frame handling\nI0530 00:59:36.577630 3666 log.go:172] (0xc00032c780) (5) 
Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31375/\nI0530 00:59:36.582015 3666 log.go:172] (0xc000be7080) Data frame received for 3\nI0530 00:59:36.582039 3666 log.go:172] (0xc00032c000) (3) Data frame handling\nI0530 00:59:36.582055 3666 log.go:172] (0xc00032c000) (3) Data frame sent\nI0530 00:59:36.582495 3666 log.go:172] (0xc000be7080) Data frame received for 3\nI0530 00:59:36.582510 3666 log.go:172] (0xc00032c000) (3) Data frame handling\nI0530 00:59:36.582521 3666 log.go:172] (0xc00032c000) (3) Data frame sent\nI0530 00:59:36.582542 3666 log.go:172] (0xc000be7080) Data frame received for 5\nI0530 00:59:36.582552 3666 log.go:172] (0xc00032c780) (5) Data frame handling\nI0530 00:59:36.582565 3666 log.go:172] (0xc00032c780) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31375/\nI0530 00:59:36.586239 3666 log.go:172] (0xc000be7080) Data frame received for 3\nI0530 00:59:36.586255 3666 log.go:172] (0xc00032c000) (3) Data frame handling\nI0530 00:59:36.586271 3666 log.go:172] (0xc00032c000) (3) Data frame sent\nI0530 00:59:36.586540 3666 log.go:172] (0xc000be7080) Data frame received for 5\nI0530 00:59:36.586560 3666 log.go:172] (0xc00032c780) (5) Data frame handling\nI0530 00:59:36.586570 3666 log.go:172] (0xc00032c780) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31375/\nI0530 00:59:36.586669 3666 log.go:172] (0xc000be7080) Data frame received for 3\nI0530 00:59:36.586691 3666 log.go:172] (0xc00032c000) (3) Data frame handling\nI0530 00:59:36.586708 3666 log.go:172] (0xc00032c000) (3) Data frame sent\nI0530 00:59:36.591303 3666 log.go:172] (0xc000be7080) Data frame received for 3\nI0530 00:59:36.591321 3666 log.go:172] (0xc00032c000) (3) Data frame handling\nI0530 00:59:36.591343 3666 log.go:172] (0xc00032c000) (3) Data frame sent\nI0530 00:59:36.591809 3666 log.go:172] (0xc000be7080) Data frame received for 3\nI0530 00:59:36.591819 3666 log.go:172] (0xc00032c000) (3) Data frame handling\nI0530 00:59:36.591835 3666 log.go:172] (0xc000be7080) Data frame received for 5\nI0530 00:59:36.591858 3666 log.go:172] (0xc00032c780) (5) Data frame handling\nI0530 00:59:36.591875 3666 log.go:172] (0xc00032c780) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31375/\nI0530 00:59:36.591894 3666 log.go:172] (0xc00032c000) (3) Data frame sent\nI0530 00:59:36.595443 3666 log.go:172] (0xc000be7080) Data frame received for 3\nI0530 00:59:36.595467 3666 log.go:172] (0xc00032c000) (3) Data frame handling\nI0530 00:59:36.595491 3666 log.go:172] (0xc00032c000) (3) Data frame sent\nI0530 00:59:36.595838 3666 log.go:172] (0xc000be7080) Data frame received for 3\nI0530 00:59:36.595864 3666 log.go:172] (0xc00032c000) (3) Data frame handling\nI0530 00:59:36.595871 3666 log.go:172] (0xc00032c000) (3) Data frame sent\nI0530 00:59:36.595895 3666 log.go:172] (0xc000be7080) Data frame received for 5\nI0530 00:59:36.595917 3666 log.go:172] (0xc00032c780) (5) Data frame handling\nI0530 00:59:36.595935 3666 log.go:172] (0xc00032c780) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31375/\nI0530 00:59:36.599437 3666 log.go:172] (0xc000be7080) Data frame received for 3\nI0530 00:59:36.599453 3666 log.go:172] (0xc00032c000) (3) Data frame handling\nI0530 00:59:36.599465 3666 log.go:172] (0xc00032c000) (3) Data frame sent\nI0530 00:59:36.600341 3666 log.go:172] (0xc000be7080) Data frame received for 3\nI0530 00:59:36.600360 3666 log.go:172] (0xc00032c000) (3) 
Data frame handling\nI0530 00:59:36.600368 3666 log.go:172] (0xc00032c000) (3) Data frame sent\nI0530 00:59:36.600388 3666 log.go:172] (0xc000be7080) Data frame received for 5\nI0530 00:59:36.600406 3666 log.go:172] (0xc00032c780) (5) Data frame handling\nI0530 00:59:36.600414 3666 log.go:172] (0xc00032c780) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31375/\nI0530 00:59:36.605731 3666 log.go:172] (0xc000be7080) Data frame received for 3\nI0530 00:59:36.605756 3666 log.go:172] (0xc00032c000) (3) Data frame handling\nI0530 00:59:36.605782 3666 log.go:172] (0xc00032c000) (3) Data frame sent\nI0530 00:59:36.606810 3666 log.go:172] (0xc000be7080) Data frame received for 3\nI0530 00:59:36.606837 3666 log.go:172] (0xc000be7080) Data frame received for 5\nI0530 00:59:36.606863 3666 log.go:172] (0xc00032c780) (5) Data frame handling\nI0530 00:59:36.606883 3666 log.go:172] (0xc00032c780) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31375/\nI0530 00:59:36.606913 3666 log.go:172] (0xc00032c000) (3) Data frame handling\nI0530 00:59:36.606934 3666 log.go:172] (0xc00032c000) (3) Data frame sent\nI0530 00:59:36.610313 3666 log.go:172] (0xc000be7080) Data frame received for 3\nI0530 00:59:36.610341 3666 log.go:172] (0xc00032c000) (3) Data frame handling\nI0530 00:59:36.610360 3666 log.go:172] (0xc00032c000) (3) Data frame sent\nI0530 00:59:36.611029 3666 log.go:172] (0xc000be7080) Data frame received for 3\nI0530 00:59:36.611051 3666 log.go:172] (0xc00032c000) (3) Data frame handling\nI0530 00:59:36.611092 3666 log.go:172] (0xc000be7080) Data frame received for 5\nI0530 00:59:36.611120 3666 log.go:172] (0xc00032c780) (5) Data frame handling\nI0530 00:59:36.612781 3666 log.go:172] (0xc000be7080) Data frame received for 1\nI0530 00:59:36.612812 3666 log.go:172] (0xc0001be820) (1) Data frame handling\nI0530 00:59:36.612842 3666 log.go:172] (0xc0001be820) (1) Data frame sent\nI0530 00:59:36.612866 3666 log.go:172] (0xc000be7080) (0xc0001be820) Stream removed, broadcasting: 1\nI0530 00:59:36.612888 3666 log.go:172] (0xc000be7080) Go away received\nI0530 00:59:36.613449 3666 log.go:172] (0xc000be7080) (0xc0001be820) Stream removed, broadcasting: 1\nI0530 00:59:36.613472 3666 log.go:172] (0xc000be7080) (0xc00032c000) Stream removed, broadcasting: 3\nI0530 00:59:36.613494 3666 log.go:172] (0xc000be7080) (0xc00032c780) Stream removed, broadcasting: 5\n" May 30 00:59:36.620: INFO: stdout: "\naffinity-nodeport-transition-lzbhz\naffinity-nodeport-transition-7wlzk\naffinity-nodeport-transition-lzbhz\naffinity-nodeport-transition-lzbhz\naffinity-nodeport-transition-tnlml\naffinity-nodeport-transition-lzbhz\naffinity-nodeport-transition-7wlzk\naffinity-nodeport-transition-lzbhz\naffinity-nodeport-transition-lzbhz\naffinity-nodeport-transition-7wlzk\naffinity-nodeport-transition-lzbhz\naffinity-nodeport-transition-tnlml\naffinity-nodeport-transition-lzbhz\naffinity-nodeport-transition-lzbhz\naffinity-nodeport-transition-lzbhz\naffinity-nodeport-transition-tnlml" May 30 00:59:36.620: INFO: Received response from host: May 30 00:59:36.620: INFO: Received response from host: affinity-nodeport-transition-lzbhz May 30 00:59:36.620: INFO: Received response from host: affinity-nodeport-transition-7wlzk May 30 00:59:36.620: INFO: Received response from host: affinity-nodeport-transition-lzbhz May 30 00:59:36.620: INFO: Received response from host: affinity-nodeport-transition-lzbhz May 30 00:59:36.620: INFO: Received response from host: 
affinity-nodeport-transition-tnlml May 30 00:59:36.620: INFO: Received response from host: affinity-nodeport-transition-lzbhz May 30 00:59:36.620: INFO: Received response from host: affinity-nodeport-transition-7wlzk May 30 00:59:36.620: INFO: Received response from host: affinity-nodeport-transition-lzbhz May 30 00:59:36.620: INFO: Received response from host: affinity-nodeport-transition-lzbhz May 30 00:59:36.620: INFO: Received response from host: affinity-nodeport-transition-7wlzk May 30 00:59:36.620: INFO: Received response from host: affinity-nodeport-transition-lzbhz May 30 00:59:36.620: INFO: Received response from host: affinity-nodeport-transition-tnlml May 30 00:59:36.620: INFO: Received response from host: affinity-nodeport-transition-lzbhz May 30 00:59:36.620: INFO: Received response from host: affinity-nodeport-transition-lzbhz May 30 00:59:36.620: INFO: Received response from host: affinity-nodeport-transition-lzbhz May 30 00:59:36.620: INFO: Received response from host: affinity-nodeport-transition-tnlml May 30 00:59:36.628: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-586 execpod-affinityzpddk -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31375/ ; done' May 30 00:59:36.958: INFO: stderr: "I0530 00:59:36.785879 3687 log.go:172] (0xc000bf2000) (0xc0005201e0) Create stream\nI0530 00:59:36.785951 3687 log.go:172] (0xc000bf2000) (0xc0005201e0) Stream added, broadcasting: 1\nI0530 00:59:36.788110 3687 log.go:172] (0xc000bf2000) Reply frame received for 1\nI0530 00:59:36.788148 3687 log.go:172] (0xc000bf2000) (0xc000521180) Create stream\nI0530 00:59:36.788155 3687 log.go:172] (0xc000bf2000) (0xc000521180) Stream added, broadcasting: 3\nI0530 00:59:36.789009 3687 log.go:172] (0xc000bf2000) Reply frame received for 3\nI0530 00:59:36.789049 3687 log.go:172] (0xc000bf2000) (0xc000482d20) Create stream\nI0530 00:59:36.789069 3687 log.go:172] (0xc000bf2000) (0xc000482d20) Stream added, broadcasting: 5\nI0530 00:59:36.790050 3687 log.go:172] (0xc000bf2000) Reply frame received for 5\nI0530 00:59:36.870155 3687 log.go:172] (0xc000bf2000) Data frame received for 5\nI0530 00:59:36.870189 3687 log.go:172] (0xc000482d20) (5) Data frame handling\nI0530 00:59:36.870200 3687 log.go:172] (0xc000482d20) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31375/\nI0530 00:59:36.870214 3687 log.go:172] (0xc000bf2000) Data frame received for 3\nI0530 00:59:36.870220 3687 log.go:172] (0xc000521180) (3) Data frame handling\nI0530 00:59:36.870228 3687 log.go:172] (0xc000521180) (3) Data frame sent\nI0530 00:59:36.872774 3687 log.go:172] (0xc000bf2000) Data frame received for 3\nI0530 00:59:36.872886 3687 log.go:172] (0xc000521180) (3) Data frame handling\nI0530 00:59:36.872936 3687 log.go:172] (0xc000521180) (3) Data frame sent\nI0530 00:59:36.873074 3687 log.go:172] (0xc000bf2000) Data frame received for 5\nI0530 00:59:36.873323 3687 log.go:172] (0xc000482d20) (5) Data frame handling\nI0530 00:59:36.873360 3687 log.go:172] (0xc000482d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31375/\nI0530 00:59:36.873387 3687 log.go:172] (0xc000bf2000) Data frame received for 3\nI0530 00:59:36.873427 3687 log.go:172] (0xc000521180) (3) Data frame handling\nI0530 00:59:36.873448 3687 log.go:172] (0xc000521180) (3) Data frame sent\nI0530 00:59:36.876364 3687 log.go:172] (0xc000bf2000) Data frame 
received for 3\nI0530 00:59:36.876389 3687 log.go:172] (0xc000521180) (3) Data frame handling\nI0530 00:59:36.876408 3687 log.go:172] (0xc000521180) (3) Data frame sent\nI0530 00:59:36.876731 3687 log.go:172] (0xc000bf2000) Data frame received for 3\nI0530 00:59:36.876748 3687 log.go:172] (0xc000521180) (3) Data frame handling\nI0530 00:59:36.876759 3687 log.go:172] (0xc000521180) (3) Data frame sent\nI0530 00:59:36.876805 3687 log.go:172] (0xc000bf2000) Data frame received for 5\nI0530 00:59:36.876837 3687 log.go:172] (0xc000482d20) (5) Data frame handling\nI0530 00:59:36.876858 3687 log.go:172] (0xc000482d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31375/\nI0530 00:59:36.883564 3687 log.go:172] (0xc000bf2000) Data frame received for 3\nI0530 00:59:36.883580 3687 log.go:172] (0xc000521180) (3) Data frame handling\nI0530 00:59:36.883610 3687 log.go:172] (0xc000521180) (3) Data frame sent\nI0530 00:59:36.884185 3687 log.go:172] (0xc000bf2000) Data frame received for 3\nI0530 00:59:36.884237 3687 log.go:172] (0xc000521180) (3) Data frame handling\nI0530 00:59:36.884252 3687 log.go:172] (0xc000521180) (3) Data frame sent\nI0530 00:59:36.884268 3687 log.go:172] (0xc000bf2000) Data frame received for 5\nI0530 00:59:36.884278 3687 log.go:172] (0xc000482d20) (5) Data frame handling\nI0530 00:59:36.884288 3687 log.go:172] (0xc000482d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31375/\nI0530 00:59:36.888913 3687 log.go:172] (0xc000bf2000) Data frame received for 3\nI0530 00:59:36.888938 3687 log.go:172] (0xc000521180) (3) Data frame handling\nI0530 00:59:36.888959 3687 log.go:172] (0xc000521180) (3) Data frame sent\nI0530 00:59:36.889512 3687 log.go:172] (0xc000bf2000) Data frame received for 5\nI0530 00:59:36.889537 3687 log.go:172] (0xc000482d20) (5) Data frame handling\nI0530 00:59:36.889554 3687 log.go:172] (0xc000482d20) (5) Data frame sent\n+ echo\nI0530 00:59:36.889669 3687 log.go:172] (0xc000bf2000) Data frame received for 3\nI0530 00:59:36.889679 3687 log.go:172] (0xc000521180) (3) Data frame handling\nI0530 00:59:36.889686 3687 log.go:172] (0xc000521180) (3) Data frame sent\nI0530 00:59:36.889713 3687 log.go:172] (0xc000bf2000) Data frame received for 5\nI0530 00:59:36.889749 3687 log.go:172] (0xc000482d20) (5) Data frame handling\nI0530 00:59:36.889774 3687 log.go:172] (0xc000482d20) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31375/\nI0530 00:59:36.898824 3687 log.go:172] (0xc000bf2000) Data frame received for 3\nI0530 00:59:36.898841 3687 log.go:172] (0xc000521180) (3) Data frame handling\nI0530 00:59:36.898849 3687 log.go:172] (0xc000521180) (3) Data frame sent\nI0530 00:59:36.899585 3687 log.go:172] (0xc000bf2000) Data frame received for 5\nI0530 00:59:36.899607 3687 log.go:172] (0xc000482d20) (5) Data frame handling\nI0530 00:59:36.899624 3687 log.go:172] (0xc000482d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31375/\nI0530 00:59:36.899741 3687 log.go:172] (0xc000bf2000) Data frame received for 3\nI0530 00:59:36.899775 3687 log.go:172] (0xc000521180) (3) Data frame handling\nI0530 00:59:36.899795 3687 log.go:172] (0xc000521180) (3) Data frame sent\nI0530 00:59:36.903703 3687 log.go:172] (0xc000bf2000) Data frame received for 3\nI0530 00:59:36.903722 3687 log.go:172] (0xc000521180) (3) Data frame handling\nI0530 00:59:36.903743 3687 log.go:172] (0xc000521180) (3) Data frame sent\nI0530 00:59:36.903844 3687 log.go:172] (0xc000bf2000) 
Data frame received for 5\nI0530 00:59:36.903862 3687 log.go:172] (0xc000482d20) (5) Data frame handling\nI0530 00:59:36.903870 3687 log.go:172] (0xc000482d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31375/\nI0530 00:59:36.903879 3687 log.go:172] (0xc000bf2000) Data frame received for 3\nI0530 00:59:36.903884 3687 log.go:172] (0xc000521180) (3) Data frame handling\nI0530 00:59:36.903889 3687 log.go:172] (0xc000521180) (3) Data frame sent\nI0530 00:59:36.907876 3687 log.go:172] (0xc000bf2000) Data frame received for 3\nI0530 00:59:36.907903 3687 log.go:172] (0xc000521180) (3) Data frame handling\nI0530 00:59:36.907922 3687 log.go:172] (0xc000521180) (3) Data frame sent\nI0530 00:59:36.908148 3687 log.go:172] (0xc000bf2000) Data frame received for 5\nI0530 00:59:36.908159 3687 log.go:172] (0xc000482d20) (5) Data frame handling\nI0530 00:59:36.908172 3687 log.go:172] (0xc000482d20) (5) Data frame sent\nI0530 00:59:36.908181 3687 log.go:172] (0xc000bf2000) Data frame received for 5\nI0530 00:59:36.908188 3687 log.go:172] (0xc000482d20) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31375/\nI0530 00:59:36.908203 3687 log.go:172] (0xc000482d20) (5) Data frame sent\nI0530 00:59:36.908240 3687 log.go:172] (0xc000bf2000) Data frame received for 3\nI0530 00:59:36.908255 3687 log.go:172] (0xc000521180) (3) Data frame handling\nI0530 00:59:36.908273 3687 log.go:172] (0xc000521180) (3) Data frame sent\nI0530 00:59:36.912795 3687 log.go:172] (0xc000bf2000) Data frame received for 3\nI0530 00:59:36.912812 3687 log.go:172] (0xc000521180) (3) Data frame handling\nI0530 00:59:36.912825 3687 log.go:172] (0xc000521180) (3) Data frame sent\nI0530 00:59:36.913551 3687 log.go:172] (0xc000bf2000) Data frame received for 3\nI0530 00:59:36.913563 3687 log.go:172] (0xc000521180) (3) Data frame handling\nI0530 00:59:36.913570 3687 log.go:172] (0xc000521180) (3) Data frame sent\nI0530 00:59:36.913585 3687 log.go:172] (0xc000bf2000) Data frame received for 5\nI0530 00:59:36.913599 3687 log.go:172] (0xc000482d20) (5) Data frame handling\nI0530 00:59:36.913609 3687 log.go:172] (0xc000482d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31375/\nI0530 00:59:36.917316 3687 log.go:172] (0xc000bf2000) Data frame received for 3\nI0530 00:59:36.917350 3687 log.go:172] (0xc000521180) (3) Data frame handling\nI0530 00:59:36.917368 3687 log.go:172] (0xc000521180) (3) Data frame sent\nI0530 00:59:36.917624 3687 log.go:172] (0xc000bf2000) Data frame received for 3\nI0530 00:59:36.917645 3687 log.go:172] (0xc000bf2000) Data frame received for 5\nI0530 00:59:36.917677 3687 log.go:172] (0xc000482d20) (5) Data frame handling\nI0530 00:59:36.917694 3687 log.go:172] (0xc000482d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31375/\nI0530 00:59:36.917709 3687 log.go:172] (0xc000521180) (3) Data frame handling\nI0530 00:59:36.917721 3687 log.go:172] (0xc000521180) (3) Data frame sent\nI0530 00:59:36.920816 3687 log.go:172] (0xc000bf2000) Data frame received for 3\nI0530 00:59:36.920838 3687 log.go:172] (0xc000521180) (3) Data frame handling\nI0530 00:59:36.920856 3687 log.go:172] (0xc000521180) (3) Data frame sent\nI0530 00:59:36.921300 3687 log.go:172] (0xc000bf2000) Data frame received for 3\nI0530 00:59:36.921325 3687 log.go:172] (0xc000521180) (3) Data frame handling\nI0530 00:59:36.921334 3687 log.go:172] (0xc000521180) (3) Data frame sent\nI0530 00:59:36.921346 3687 log.go:172] 
(0xc000bf2000) Data frame received for 5\nI0530 00:59:36.921354 3687 log.go:172] (0xc000482d20) (5) Data frame handling\nI0530 00:59:36.921360 3687 log.go:172] (0xc000482d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31375/\nI0530 00:59:36.926365 3687 log.go:172] (0xc000bf2000) Data frame received for 3\nI0530 00:59:36.926383 3687 log.go:172] (0xc000521180) (3) Data frame handling\nI0530 00:59:36.926397 3687 log.go:172] (0xc000521180) (3) Data frame sent\nI0530 00:59:36.927055 3687 log.go:172] (0xc000bf2000) Data frame received for 3\nI0530 00:59:36.927082 3687 log.go:172] (0xc000521180) (3) Data frame handling\nI0530 00:59:36.927094 3687 log.go:172] (0xc000521180) (3) Data frame sent\nI0530 00:59:36.927109 3687 log.go:172] (0xc000bf2000) Data frame received for 5\nI0530 00:59:36.927124 3687 log.go:172] (0xc000482d20) (5) Data frame handling\nI0530 00:59:36.927133 3687 log.go:172] (0xc000482d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31375/\nI0530 00:59:36.930759 3687 log.go:172] (0xc000bf2000) Data frame received for 3\nI0530 00:59:36.930781 3687 log.go:172] (0xc000521180) (3) Data frame handling\nI0530 00:59:36.930799 3687 log.go:172] (0xc000521180) (3) Data frame sent\nI0530 00:59:36.931139 3687 log.go:172] (0xc000bf2000) Data frame received for 3\nI0530 00:59:36.931162 3687 log.go:172] (0xc000521180) (3) Data frame handling\nI0530 00:59:36.931174 3687 log.go:172] (0xc000521180) (3) Data frame sent\nI0530 00:59:36.931194 3687 log.go:172] (0xc000bf2000) Data frame received for 5\nI0530 00:59:36.931204 3687 log.go:172] (0xc000482d20) (5) Data frame handling\nI0530 00:59:36.931213 3687 log.go:172] (0xc000482d20) (5) Data frame sent\nI0530 00:59:36.931222 3687 log.go:172] (0xc000bf2000) Data frame received for 5\nI0530 00:59:36.931239 3687 log.go:172] (0xc000482d20) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31375/\nI0530 00:59:36.931259 3687 log.go:172] (0xc000482d20) (5) Data frame sent\nI0530 00:59:36.936625 3687 log.go:172] (0xc000bf2000) Data frame received for 3\nI0530 00:59:36.936644 3687 log.go:172] (0xc000521180) (3) Data frame handling\nI0530 00:59:36.936665 3687 log.go:172] (0xc000521180) (3) Data frame sent\nI0530 00:59:36.937241 3687 log.go:172] (0xc000bf2000) Data frame received for 5\nI0530 00:59:36.937261 3687 log.go:172] (0xc000482d20) (5) Data frame handling\nI0530 00:59:36.937269 3687 log.go:172] (0xc000482d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0530 00:59:36.937486 3687 log.go:172] (0xc000bf2000) Data frame received for 3\nI0530 00:59:36.937497 3687 log.go:172] (0xc000521180) (3) Data frame handling\nI0530 00:59:36.937504 3687 log.go:172] (0xc000521180) (3) Data frame sent\nI0530 00:59:36.937543 3687 log.go:172] (0xc000bf2000) Data frame received for 5\nI0530 00:59:36.937569 3687 log.go:172] (0xc000482d20) (5) Data frame handling\nI0530 00:59:36.937599 3687 log.go:172] (0xc000482d20) (5) Data frame sent\n http://172.17.0.13:31375/\nI0530 00:59:36.940851 3687 log.go:172] (0xc000bf2000) Data frame received for 3\nI0530 00:59:36.940864 3687 log.go:172] (0xc000521180) (3) Data frame handling\nI0530 00:59:36.940881 3687 log.go:172] (0xc000521180) (3) Data frame sent\nI0530 00:59:36.941808 3687 log.go:172] (0xc000bf2000) Data frame received for 5\nI0530 00:59:36.941838 3687 log.go:172] (0xc000bf2000) Data frame received for 3\nI0530 00:59:36.941861 3687 log.go:172] (0xc000521180) (3) Data frame handling\nI0530 00:59:36.941874 
3687 log.go:172] (0xc000521180) (3) Data frame sent\nI0530 00:59:36.941892 3687 log.go:172] (0xc000482d20) (5) Data frame handling\nI0530 00:59:36.941901 3687 log.go:172] (0xc000482d20) (5) Data frame sent\nI0530 00:59:36.941917 3687 log.go:172] (0xc000bf2000) Data frame received for 5\nI0530 00:59:36.941925 3687 log.go:172] (0xc000482d20) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31375/\nI0530 00:59:36.941944 3687 log.go:172] (0xc000482d20) (5) Data frame sent\nI0530 00:59:36.946278 3687 log.go:172] (0xc000bf2000) Data frame received for 3\nI0530 00:59:36.946313 3687 log.go:172] (0xc000521180) (3) Data frame handling\nI0530 00:59:36.946341 3687 log.go:172] (0xc000521180) (3) Data frame sent\nI0530 00:59:36.946828 3687 log.go:172] (0xc000bf2000) Data frame received for 3\nI0530 00:59:36.946862 3687 log.go:172] (0xc000521180) (3) Data frame handling\nI0530 00:59:36.946878 3687 log.go:172] (0xc000521180) (3) Data frame sent\nI0530 00:59:36.946898 3687 log.go:172] (0xc000bf2000) Data frame received for 5\nI0530 00:59:36.946907 3687 log.go:172] (0xc000482d20) (5) Data frame handling\nI0530 00:59:36.946921 3687 log.go:172] (0xc000482d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31375/I0530 00:59:36.946941 3687 log.go:172] (0xc000bf2000) Data frame received for 5\nI0530 00:59:36.946960 3687 log.go:172] (0xc000482d20) (5) Data frame handling\nI0530 00:59:36.946976 3687 log.go:172] (0xc000482d20) (5) Data frame sent\n\nI0530 00:59:36.950356 3687 log.go:172] (0xc000bf2000) Data frame received for 3\nI0530 00:59:36.950386 3687 log.go:172] (0xc000521180) (3) Data frame handling\nI0530 00:59:36.950397 3687 log.go:172] (0xc000521180) (3) Data frame sent\nI0530 00:59:36.950847 3687 log.go:172] (0xc000bf2000) Data frame received for 3\nI0530 00:59:36.950857 3687 log.go:172] (0xc000521180) (3) Data frame handling\nI0530 00:59:36.951186 3687 log.go:172] (0xc000bf2000) Data frame received for 5\nI0530 00:59:36.951200 3687 log.go:172] (0xc000482d20) (5) Data frame handling\nI0530 00:59:36.953964 3687 log.go:172] (0xc000bf2000) Data frame received for 1\nI0530 00:59:36.953996 3687 log.go:172] (0xc0005201e0) (1) Data frame handling\nI0530 00:59:36.954015 3687 log.go:172] (0xc0005201e0) (1) Data frame sent\nI0530 00:59:36.954030 3687 log.go:172] (0xc000bf2000) (0xc0005201e0) Stream removed, broadcasting: 1\nI0530 00:59:36.954048 3687 log.go:172] (0xc000bf2000) Go away received\nI0530 00:59:36.954527 3687 log.go:172] (0xc000bf2000) (0xc0005201e0) Stream removed, broadcasting: 1\nI0530 00:59:36.954558 3687 log.go:172] (0xc000bf2000) (0xc000521180) Stream removed, broadcasting: 3\nI0530 00:59:36.954576 3687 log.go:172] (0xc000bf2000) (0xc000482d20) Stream removed, broadcasting: 5\n" May 30 00:59:36.959: INFO: stdout: "\naffinity-nodeport-transition-lzbhz\naffinity-nodeport-transition-lzbhz\naffinity-nodeport-transition-lzbhz\naffinity-nodeport-transition-lzbhz\naffinity-nodeport-transition-lzbhz\naffinity-nodeport-transition-lzbhz\naffinity-nodeport-transition-lzbhz\naffinity-nodeport-transition-lzbhz\naffinity-nodeport-transition-lzbhz\naffinity-nodeport-transition-lzbhz\naffinity-nodeport-transition-lzbhz\naffinity-nodeport-transition-lzbhz\naffinity-nodeport-transition-lzbhz\naffinity-nodeport-transition-lzbhz\naffinity-nodeport-transition-lzbhz\naffinity-nodeport-transition-lzbhz" May 30 00:59:36.959: INFO: Received response from host: May 30 00:59:36.959: INFO: Received response from host: affinity-nodeport-transition-lzbhz May 
30 00:59:36.959: INFO: Received response from host: affinity-nodeport-transition-lzbhz May 30 00:59:36.959: INFO: Received response from host: affinity-nodeport-transition-lzbhz May 30 00:59:36.959: INFO: Received response from host: affinity-nodeport-transition-lzbhz May 30 00:59:36.959: INFO: Received response from host: affinity-nodeport-transition-lzbhz May 30 00:59:36.959: INFO: Received response from host: affinity-nodeport-transition-lzbhz May 30 00:59:36.959: INFO: Received response from host: affinity-nodeport-transition-lzbhz May 30 00:59:36.959: INFO: Received response from host: affinity-nodeport-transition-lzbhz May 30 00:59:36.959: INFO: Received response from host: affinity-nodeport-transition-lzbhz May 30 00:59:36.959: INFO: Received response from host: affinity-nodeport-transition-lzbhz May 30 00:59:36.959: INFO: Received response from host: affinity-nodeport-transition-lzbhz May 30 00:59:36.959: INFO: Received response from host: affinity-nodeport-transition-lzbhz May 30 00:59:36.959: INFO: Received response from host: affinity-nodeport-transition-lzbhz May 30 00:59:36.959: INFO: Received response from host: affinity-nodeport-transition-lzbhz May 30 00:59:36.959: INFO: Received response from host: affinity-nodeport-transition-lzbhz May 30 00:59:36.959: INFO: Received response from host: affinity-nodeport-transition-lzbhz May 30 00:59:36.959: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-586, will wait for the garbage collector to delete the pods May 30 00:59:37.104: INFO: Deleting ReplicationController affinity-nodeport-transition took: 5.996793ms May 30 00:59:37.505: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 400.321964ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:59:54.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-586" for this suite. 
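------------------------------
[editor's note] The two curl loops above are the heart of the "switch session affinity" test: with affinity off, the 16 requests from one client spread across all three endpoints (lzbhz, 7wlzk, tnlml); after the service is flipped to ClientIP affinity, every reply names the same pod. A minimal sketch of that probe, runnable by hand under stated assumptions: the service name affinity-demo, namespace demo, and the NODE_IP/NODE_PORT variables are hypothetical stand-ins, and the backends are assumed to answer with their own pod name, as the test's do.

# Pin clients to one endpoint by client IP, then probe 16 times from one source.
kubectl -n demo patch service affinity-demo \
  -p '{"spec":{"sessionAffinity":"ClientIP"}}'
for i in $(seq 0 15); do
  echo
  curl -q -s --connect-timeout 2 "http://$NODE_IP:$NODE_PORT/"
done
# Switch affinity back off; replies should spread across endpoints again.
kubectl -n demo patch service affinity-demo \
  -p '{"spec":{"sessionAffinity":"None"}}'
------------------------------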
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:30.974 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":257,"skipped":4159,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:59:54.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-a9d847aa-e293-4b5f-a447-9c797cd524bc STEP: Creating a pod to test consume secrets May 30 00:59:55.133: INFO: Waiting up to 5m0s for pod "pod-secrets-ad5e8a26-4998-45be-9728-4406965c2fb6" in namespace "secrets-455" to be "Succeeded or Failed" May 30 00:59:55.221: INFO: Pod "pod-secrets-ad5e8a26-4998-45be-9728-4406965c2fb6": Phase="Pending", Reason="", readiness=false. Elapsed: 87.879936ms May 30 00:59:57.312: INFO: Pod "pod-secrets-ad5e8a26-4998-45be-9728-4406965c2fb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179186416s May 30 00:59:59.359: INFO: Pod "pod-secrets-ad5e8a26-4998-45be-9728-4406965c2fb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.225746389s STEP: Saw pod success May 30 00:59:59.359: INFO: Pod "pod-secrets-ad5e8a26-4998-45be-9728-4406965c2fb6" satisfied condition "Succeeded or Failed" May 30 00:59:59.362: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-ad5e8a26-4998-45be-9728-4406965c2fb6 container secret-volume-test: STEP: delete the pod May 30 00:59:59.404: INFO: Waiting for pod pod-secrets-ad5e8a26-4998-45be-9728-4406965c2fb6 to disappear May 30 00:59:59.421: INFO: Pod pod-secrets-ad5e8a26-4998-45be-9728-4406965c2fb6 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 00:59:59.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-455" for this suite. 
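------------------------------
[editor's note] The defaultMode test above mounts a secret volume and verifies the projected file carries the requested permission bits. A minimal sketch of the same shape, assuming hypothetical names (demo-secret, secret-mode-demo) and the stock busybox image; the suite's actual test pod differs.

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    # Print the mounted file's mode and contents, then exit (pod ends Succeeded).
    command: ["sh", "-c", "stat -L -c '%a' /etc/secret-volume/data-1; cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      defaultMode: 0400   # YAML reads the leading zero as octal: mode r--------
EOF
kubectl logs secret-mode-demo   # inspect once the pod has completed
------------------------------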
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":258,"skipped":4256,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 00:59:59.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 30 00:59:59.748: INFO: Waiting up to 5m0s for pod "downwardapi-volume-727d7ef7-40e5-41ab-927e-4dce28cc41c7" in namespace "projected-8341" to be "Succeeded or Failed" May 30 00:59:59.758: INFO: Pod "downwardapi-volume-727d7ef7-40e5-41ab-927e-4dce28cc41c7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.555194ms May 30 01:00:01.784: INFO: Pod "downwardapi-volume-727d7ef7-40e5-41ab-927e-4dce28cc41c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036592302s May 30 01:00:03.788: INFO: Pod "downwardapi-volume-727d7ef7-40e5-41ab-927e-4dce28cc41c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040157222s STEP: Saw pod success May 30 01:00:03.788: INFO: Pod "downwardapi-volume-727d7ef7-40e5-41ab-927e-4dce28cc41c7" satisfied condition "Succeeded or Failed" May 30 01:00:03.795: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-727d7ef7-40e5-41ab-927e-4dce28cc41c7 container client-container: STEP: delete the pod May 30 01:00:03.875: INFO: Waiting for pod downwardapi-volume-727d7ef7-40e5-41ab-927e-4dce28cc41c7 to disappear May 30 01:00:03.879: INFO: Pod downwardapi-volume-727d7ef7-40e5-41ab-927e-4dce28cc41c7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 01:00:03.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8341" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":288,"completed":259,"skipped":4272,"failed":0} SSSSSSSSSS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 01:00:03.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 01:00:04.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-9182" for this suite. •{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":288,"completed":260,"skipped":4282,"failed":0} SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 01:00:04.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 30 01:00:04.188: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 30 01:00:04.211: INFO: Waiting for terminating namespaces to be deleted... 
May 30 01:00:04.215: INFO: Logging pods the apiserver thinks are on node latest-worker before test May 30 01:00:04.221: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container status recorded) May 30 01:00:04.221: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 30 01:00:04.221: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container status recorded) May 30 01:00:04.221: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 30 01:00:04.221: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 30 01:00:04.221: INFO: Container kindnet-cni ready: true, restart count 2 May 30 01:00:04.221: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 30 01:00:04.221: INFO: Container kube-proxy ready: true, restart count 0 May 30 01:00:04.221: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test May 30 01:00:04.226: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container status recorded) May 30 01:00:04.226: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 30 01:00:04.226: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container status recorded) May 30 01:00:04.226: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 30 01:00:04.226: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 30 01:00:04.226: INFO: Container kindnet-cni ready: true, restart count 2 May 30 01:00:04.226: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 30 01:00:04.226: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-6192183d-a296-4030-ab0d-276fec6b7cef 90 STEP: Trying to create a pod (pod1) with hostPort 54321 and hostIP 127.0.0.1 and expect it to be scheduled STEP: Trying to create another pod (pod2) with hostPort 54321 but hostIP 127.0.0.2 on the node where pod1 resides and expect it to be scheduled STEP: Trying to create a third pod (pod3) with hostPort 54321, hostIP 127.0.0.2, but using the UDP protocol on the node where pod2 resides STEP: removing the label kubernetes.io/e2e-6192183d-a296-4030-ab0d-276fec6b7cef off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-6192183d-a296-4030-ab0d-276fec6b7cef [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 01:00:20.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-860" for this suite.
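------------------------------
[editor's note] The predicate exercised above treats a hostPort claim as the tuple (hostIP, hostPort, protocol): pod1 (127.0.0.1, 54321, TCP), pod2 (127.0.0.2, 54321, TCP) and pod3 (127.0.0.2, 54321, UDP) co-schedule on one node because no two tuples collide. A minimal sketch under stated assumptions: the node name latest-worker is taken from the log, the pause image is an arbitrary no-op stand-in, and nodeName pinning replaces the test's random-label nodeSelector.

for spec in "pod1 127.0.0.1 TCP" "pod2 127.0.0.2 TCP" "pod3 127.0.0.2 UDP"; do
  set -- $spec   # word-split into: name, hostIP, protocol
  kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: $1
spec:
  nodeName: latest-worker
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.2
    ports:
    - containerPort: 54321
      hostPort: 54321
      hostIP: $2
      protocol: $3
EOF
done
kubectl get pods -o wide   # all three should end up Running on latest-worker
------------------------------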
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.482 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":288,"completed":261,"skipped":4288,"failed":0} SSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 01:00:20.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods changes May 30 01:00:20.670: INFO: Pod name pod-release: Found 0 pods out of 1 May 30 01:00:25.674: INFO: Pod name pod-release: Found 1 pod out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 01:00:25.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4077" for this suite.
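------------------------------
[editor's note] "should release no longer matching pods": when a pod's labels stop matching its ReplicationController's selector, the controller releases the pod (drops its ownerReference, leaving it running as an orphan) and creates a replacement to restore the replica count. A sketch with hypothetical names:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release-demo
spec:
  replicas: 1
  selector:
    name: pod-release-demo
  template:
    metadata:
      labels:
        name: pod-release-demo
    spec:
      containers:
      - name: main
        image: nginx
EOF
# Once the pod exists, change the matched label so it falls out of the selector.
POD=$(kubectl get pods -l name=pod-release-demo -o name | head -n 1)
kubectl label --overwrite "$POD" name=released
kubectl get pods -L name   # the released pod keeps running; the RC starts a new one
------------------------------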
• [SLOW TEST:5.718 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":288,"completed":262,"skipped":4296,"failed":0} SSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 01:00:26.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-40c1ad71-1dff-436e-8fbe-aba2336e669f May 30 01:00:26.656: INFO: Pod name my-hostname-basic-40c1ad71-1dff-436e-8fbe-aba2336e669f: Found 0 pods out of 1 May 30 01:00:31.878: INFO: Pod name my-hostname-basic-40c1ad71-1dff-436e-8fbe-aba2336e669f: Found 1 pod out of 1 May 30 01:00:31.878: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-40c1ad71-1dff-436e-8fbe-aba2336e669f" are running May 30 01:00:31.947: INFO: Pod "my-hostname-basic-40c1ad71-1dff-436e-8fbe-aba2336e669f-95wn9" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-30 01:00:26 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-30 01:00:31 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-30 01:00:31 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-30 01:00:26 +0000 UTC Reason: Message:}]) May 30 01:00:31.948: INFO: Trying to dial the pod May 30 01:00:36.962: INFO: Controller my-hostname-basic-40c1ad71-1dff-436e-8fbe-aba2336e669f: Got expected result from replica 1 [my-hostname-basic-40c1ad71-1dff-436e-8fbe-aba2336e669f-95wn9]: "my-hostname-basic-40c1ad71-1dff-436e-8fbe-aba2336e669f-95wn9", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 01:00:36.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7206" for this suite.
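------------------------------
[editor's note] "should serve a basic image on each replica with a public image" creates an RC whose pods answer HTTP requests with their own pod name, then dials each replica and checks the reply. A sketch assuming a serve-hostname image (the agnhost tag here is illustrative; it listens on :9376) and hypothetical names:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: hostname-demo
spec:
  replicas: 2
  selector:
    app: hostname-demo
  template:
    metadata:
      labels:
        app: hostname-demo
    spec:
      containers:
      - name: main
        image: k8s.gcr.io/e2e-test-images/agnhost:2.21
        args: ["serve-hostname"]
EOF
# Dial every replica from inside the cluster; each should report its own name.
i=0
for ip in $(kubectl get pods -l app=hostname-demo -o jsonpath='{.items[*].status.podIP}'); do
  i=$((i+1))
  kubectl run "probe-$i" --rm -i --restart=Never --image=busybox -- \
    wget -qO- "http://$ip:9376/"
done
------------------------------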
• [SLOW TEST:10.683 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":263,"skipped":4304,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 01:00:36.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-3021 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-3021 STEP: creating replication controller externalsvc in namespace services-3021 I0530 01:00:37.245398 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-3021, replica count: 2 I0530 01:00:40.295818 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0530 01:00:43.296076 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 30 01:00:43.370: INFO: Creating new exec pod May 30 01:00:47.422: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3021 execpod2nqg6 -- /bin/sh -x -c nslookup nodeport-service' May 30 01:00:47.810: INFO: stderr: "I0530 01:00:47.557911 3707 log.go:172] (0xc000896fd0) (0xc000abe0a0) Create stream\nI0530 01:00:47.557981 3707 log.go:172] (0xc000896fd0) (0xc000abe0a0) Stream added, broadcasting: 1\nI0530 01:00:47.561479 3707 log.go:172] (0xc000896fd0) Reply frame received for 1\nI0530 01:00:47.561511 3707 log.go:172] (0xc000896fd0) (0xc00071e000) Create stream\nI0530 01:00:47.561523 3707 log.go:172] (0xc000896fd0) (0xc00071e000) Stream added, broadcasting: 3\nI0530 01:00:47.562392 3707 log.go:172] (0xc000896fd0) Reply frame received for 3\nI0530 01:00:47.562428 3707 log.go:172] (0xc000896fd0) (0xc00040ce60) Create stream\nI0530 01:00:47.562440 3707 log.go:172] (0xc000896fd0) (0xc00040ce60) Stream added, broadcasting: 5\nI0530 01:00:47.563146 3707 log.go:172] (0xc000896fd0) Reply frame received for 5\nI0530 01:00:47.639376 3707 log.go:172] (0xc000896fd0) Data frame received for 5\nI0530 01:00:47.639412 
3707 log.go:172] (0xc00040ce60) (5) Data frame handling\nI0530 01:00:47.639439 3707 log.go:172] (0xc00040ce60) (5) Data frame sent\n+ nslookup nodeport-service\nI0530 01:00:47.798202 3707 log.go:172] (0xc000896fd0) Data frame received for 3\nI0530 01:00:47.798232 3707 log.go:172] (0xc00071e000) (3) Data frame handling\nI0530 01:00:47.798254 3707 log.go:172] (0xc00071e000) (3) Data frame sent\nI0530 01:00:47.799890 3707 log.go:172] (0xc000896fd0) Data frame received for 3\nI0530 01:00:47.799909 3707 log.go:172] (0xc00071e000) (3) Data frame handling\nI0530 01:00:47.799917 3707 log.go:172] (0xc00071e000) (3) Data frame sent\nI0530 01:00:47.800664 3707 log.go:172] (0xc000896fd0) Data frame received for 5\nI0530 01:00:47.800681 3707 log.go:172] (0xc00040ce60) (5) Data frame handling\nI0530 01:00:47.800724 3707 log.go:172] (0xc000896fd0) Data frame received for 3\nI0530 01:00:47.800754 3707 log.go:172] (0xc00071e000) (3) Data frame handling\nI0530 01:00:47.802724 3707 log.go:172] (0xc000896fd0) Data frame received for 1\nI0530 01:00:47.802746 3707 log.go:172] (0xc000abe0a0) (1) Data frame handling\nI0530 01:00:47.802766 3707 log.go:172] (0xc000abe0a0) (1) Data frame sent\nI0530 01:00:47.802837 3707 log.go:172] (0xc000896fd0) (0xc000abe0a0) Stream removed, broadcasting: 1\nI0530 01:00:47.803034 3707 log.go:172] (0xc000896fd0) Go away received\nI0530 01:00:47.803131 3707 log.go:172] (0xc000896fd0) (0xc000abe0a0) Stream removed, broadcasting: 1\nI0530 01:00:47.803149 3707 log.go:172] (0xc000896fd0) (0xc00071e000) Stream removed, broadcasting: 3\nI0530 01:00:47.803155 3707 log.go:172] (0xc000896fd0) (0xc00040ce60) Stream removed, broadcasting: 5\n" May 30 01:00:47.810: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-3021.svc.cluster.local\tcanonical name = externalsvc.services-3021.svc.cluster.local.\nName:\texternalsvc.services-3021.svc.cluster.local\nAddress: 10.106.98.243\n\n" STEP: deleting ReplicationController externalsvc in namespace services-3021, will wait for the garbage collector to delete the pods May 30 01:00:47.871: INFO: Deleting ReplicationController externalsvc took: 7.12807ms May 30 01:00:47.971: INFO: Terminating ReplicationController externalsvc pods took: 100.249197ms May 30 01:00:55.539: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 01:00:55.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3021" for this suite. 
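Note: the type flip logged above is a single service update; the one subtlety is that an ExternalName service may not carry a cluster IP or node ports, so both must be cleared in the same write. A sketch using the names from this run:

package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	svcs := cs.CoreV1().Services("services-3021")

	svc, err := svcs.Get(ctx, "nodeport-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	svc.Spec.Type = v1.ServiceTypeExternalName
	svc.Spec.ExternalName = "externalsvc.services-3021.svc.cluster.local"
	svc.Spec.ClusterIP = "" // an ExternalName service has no cluster IP
	for i := range svc.Spec.Ports {
		svc.Spec.Ports[i].NodePort = 0 // ...and no node ports
	}
	if _, err := svcs.Update(ctx, svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}

After the update, cluster DNS serves nodeport-service as a CNAME to the external name, which is exactly what the nslookup output above shows.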
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:18.644 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":288,"completed":264,"skipped":4329,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 01:00:55.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-9356 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9356 to expose endpoints map[] May 30 01:00:55.844: INFO: Get endpoints failed (79.997085ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 30 01:00:56.848: INFO: successfully validated that service endpoint-test2 in namespace services-9356 exposes endpoints map[] (1.084110811s elapsed) STEP: Creating pod pod1 in namespace services-9356 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9356 to expose endpoints map[pod1:[80]] May 30 01:01:00.962: INFO: successfully validated that service endpoint-test2 in namespace services-9356 exposes endpoints map[pod1:[80]] (4.106562591s elapsed) STEP: Creating pod pod2 in namespace services-9356 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9356 to expose endpoints map[pod1:[80] pod2:[80]] May 30 01:01:05.078: INFO: successfully validated that service endpoint-test2 in namespace services-9356 exposes endpoints map[pod1:[80] pod2:[80]] (4.109896119s elapsed) STEP: Deleting pod pod1 in namespace services-9356 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9356 to expose endpoints map[pod2:[80]] May 30 01:01:05.288: INFO: successfully validated that service endpoint-test2 in namespace services-9356 exposes endpoints map[pod2:[80]] (205.292321ms elapsed) STEP: Deleting pod pod2 in namespace services-9356 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9356 to expose endpoints map[] May 30 01:01:05.522: INFO: successfully validated that service endpoint-test2 in namespace services-9356 exposes endpoints map[] (228.931034ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 01:01:05.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9356" for this suite. 
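Note: each "waiting up to 3m0s ... to expose endpoints" line above is a poll on the service's Endpoints object until the ready addresses match the expected pod-to-port map. A simplified version of that loop (counting address/port pairs rather than matching pod names), with the names from this run:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	want := 2 // e.g. map[pod1:[80] pod2:[80]]
	err = wait.PollImmediate(time.Second, 3*time.Minute, func() (bool, error) {
		ep, err := cs.CoreV1().Endpoints("services-9356").
			Get(context.TODO(), "endpoint-test2", metav1.GetOptions{})
		if err != nil {
			return false, nil // "not found" is tolerated briefly, as the log shows
		}
		got := 0
		for _, subset := range ep.Subsets {
			got += len(subset.Addresses) * len(subset.Ports)
		}
		return got == want, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("endpoints match")
}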
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:10.102 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":288,"completed":265,"skipped":4342,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 01:01:05.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 30 01:01:06.180: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 30 01:01:06.211: INFO: Number of nodes with available pods: 0 May 30 01:01:06.211: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
May 30 01:01:06.253: INFO: Number of nodes with available pods: 0 May 30 01:01:06.253: INFO: Node latest-worker is running more than one daemon pod May 30 01:01:07.258: INFO: Number of nodes with available pods: 0 May 30 01:01:07.258: INFO: Node latest-worker is running more than one daemon pod May 30 01:01:08.264: INFO: Number of nodes with available pods: 0 May 30 01:01:08.264: INFO: Node latest-worker is running more than one daemon pod May 30 01:01:09.258: INFO: Number of nodes with available pods: 0 May 30 01:01:09.258: INFO: Node latest-worker is running more than one daemon pod May 30 01:01:10.257: INFO: Number of nodes with available pods: 1 May 30 01:01:10.257: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 30 01:01:10.311: INFO: Number of nodes with available pods: 1 May 30 01:01:10.311: INFO: Number of running nodes: 0, number of available pods: 1 May 30 01:01:11.315: INFO: Number of nodes with available pods: 0 May 30 01:01:11.315: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 30 01:01:11.328: INFO: Number of nodes with available pods: 0 May 30 01:01:11.328: INFO: Node latest-worker is running more than one daemon pod May 30 01:01:12.334: INFO: Number of nodes with available pods: 0 May 30 01:01:12.334: INFO: Node latest-worker is running more than one daemon pod May 30 01:01:13.333: INFO: Number of nodes with available pods: 0 May 30 01:01:13.333: INFO: Node latest-worker is running more than one daemon pod May 30 01:01:14.334: INFO: Number of nodes with available pods: 0 May 30 01:01:14.334: INFO: Node latest-worker is running more than one daemon pod May 30 01:01:15.336: INFO: Number of nodes with available pods: 0 May 30 01:01:15.336: INFO: Node latest-worker is running more than one daemon pod May 30 01:01:16.334: INFO: Number of nodes with available pods: 0 May 30 01:01:16.334: INFO: Node latest-worker is running more than one daemon pod May 30 01:01:17.336: INFO: Number of nodes with available pods: 0 May 30 01:01:17.336: INFO: Node latest-worker is running more than one daemon pod May 30 01:01:18.332: INFO: Number of nodes with available pods: 0 May 30 01:01:18.332: INFO: Node latest-worker is running more than one daemon pod May 30 01:01:19.333: INFO: Number of nodes with available pods: 0 May 30 01:01:19.333: INFO: Node latest-worker is running more than one daemon pod May 30 01:01:20.333: INFO: Number of nodes with available pods: 0 May 30 01:01:20.333: INFO: Node latest-worker is running more than one daemon pod May 30 01:01:21.332: INFO: Number of nodes with available pods: 0 May 30 01:01:21.332: INFO: Node latest-worker is running more than one daemon pod May 30 01:01:22.333: INFO: Number of nodes with available pods: 0 May 30 01:01:22.333: INFO: Node latest-worker is running more than one daemon pod May 30 01:01:23.360: INFO: Number of nodes with available pods: 0 May 30 01:01:23.360: INFO: Node latest-worker is running more than one daemon pod May 30 01:01:24.333: INFO: Number of nodes with available pods: 0 May 30 01:01:24.333: INFO: Node latest-worker is running more than one daemon pod May 30 01:01:25.334: INFO: Number of nodes with available pods: 0 May 30 01:01:25.334: INFO: Node latest-worker is running more than one daemon pod May 30 01:01:26.462: INFO: Number of nodes with available pods: 0 May 30 01:01:26.462: INFO: Node latest-worker is running 
more than one daemon pod May 30 01:01:27.334: INFO: Number of nodes with available pods: 0 May 30 01:01:27.334: INFO: Node latest-worker is running more than one daemon pod May 30 01:01:28.333: INFO: Number of nodes with available pods: 0 May 30 01:01:28.333: INFO: Node latest-worker is running more than one daemon pod May 30 01:01:29.332: INFO: Number of nodes with available pods: 1 May 30 01:01:29.332: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1935, will wait for the garbage collector to delete the pods May 30 01:01:29.396: INFO: Deleting DaemonSet.extensions daemon-set took: 5.673396ms May 30 01:01:29.697: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.25282ms May 30 01:01:34.929: INFO: Number of nodes with available pods: 0 May 30 01:01:34.929: INFO: Number of running nodes: 0, number of available pods: 0 May 30 01:01:34.932: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1935/daemonsets","resourceVersion":"8755817"},"items":null} May 30 01:01:34.935: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1935/pods","resourceVersion":"8755817"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 01:01:34.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1935" for this suite. • [SLOW TEST:29.268 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":288,"completed":266,"skipped":4360,"failed":0} SSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 01:01:34.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-426 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-426 STEP: creating replication controller externalsvc in namespace services-426 I0530 01:01:35.238833 7 runners.go:190] Created 
replication controller with name: externalsvc, namespace: services-426, replica count: 2 I0530 01:01:38.289352 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0530 01:01:41.289618 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 30 01:01:41.339: INFO: Creating new exec pod May 30 01:01:45.395: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-426 execpodm7fz7 -- /bin/sh -x -c nslookup clusterip-service' May 30 01:01:45.601: INFO: stderr: "I0530 01:01:45.522943 3727 log.go:172] (0xc000a3a0b0) (0xc00054d220) Create stream\nI0530 01:01:45.523002 3727 log.go:172] (0xc000a3a0b0) (0xc00054d220) Stream added, broadcasting: 1\nI0530 01:01:45.525281 3727 log.go:172] (0xc000a3a0b0) Reply frame received for 1\nI0530 01:01:45.525330 3727 log.go:172] (0xc000a3a0b0) (0xc0004f0dc0) Create stream\nI0530 01:01:45.525341 3727 log.go:172] (0xc000a3a0b0) (0xc0004f0dc0) Stream added, broadcasting: 3\nI0530 01:01:45.526406 3727 log.go:172] (0xc000a3a0b0) Reply frame received for 3\nI0530 01:01:45.526424 3727 log.go:172] (0xc000a3a0b0) (0xc0000f2e60) Create stream\nI0530 01:01:45.526430 3727 log.go:172] (0xc000a3a0b0) (0xc0000f2e60) Stream added, broadcasting: 5\nI0530 01:01:45.527273 3727 log.go:172] (0xc000a3a0b0) Reply frame received for 5\nI0530 01:01:45.582552 3727 log.go:172] (0xc000a3a0b0) Data frame received for 5\nI0530 01:01:45.582593 3727 log.go:172] (0xc0000f2e60) (5) Data frame handling\nI0530 01:01:45.582627 3727 log.go:172] (0xc0000f2e60) (5) Data frame sent\n+ nslookup clusterip-service\nI0530 01:01:45.591086 3727 log.go:172] (0xc000a3a0b0) Data frame received for 3\nI0530 01:01:45.591126 3727 log.go:172] (0xc0004f0dc0) (3) Data frame handling\nI0530 01:01:45.591174 3727 log.go:172] (0xc0004f0dc0) (3) Data frame sent\nI0530 01:01:45.592364 3727 log.go:172] (0xc000a3a0b0) Data frame received for 3\nI0530 01:01:45.592388 3727 log.go:172] (0xc0004f0dc0) (3) Data frame handling\nI0530 01:01:45.592404 3727 log.go:172] (0xc0004f0dc0) (3) Data frame sent\nI0530 01:01:45.592953 3727 log.go:172] (0xc000a3a0b0) Data frame received for 5\nI0530 01:01:45.592976 3727 log.go:172] (0xc000a3a0b0) Data frame received for 3\nI0530 01:01:45.592998 3727 log.go:172] (0xc0004f0dc0) (3) Data frame handling\nI0530 01:01:45.593018 3727 log.go:172] (0xc0000f2e60) (5) Data frame handling\nI0530 01:01:45.594944 3727 log.go:172] (0xc000a3a0b0) Data frame received for 1\nI0530 01:01:45.594975 3727 log.go:172] (0xc00054d220) (1) Data frame handling\nI0530 01:01:45.595017 3727 log.go:172] (0xc00054d220) (1) Data frame sent\nI0530 01:01:45.595040 3727 log.go:172] (0xc000a3a0b0) (0xc00054d220) Stream removed, broadcasting: 1\nI0530 01:01:45.595066 3727 log.go:172] (0xc000a3a0b0) Go away received\nI0530 01:01:45.595586 3727 log.go:172] (0xc000a3a0b0) (0xc00054d220) Stream removed, broadcasting: 1\nI0530 01:01:45.595610 3727 log.go:172] (0xc000a3a0b0) (0xc0004f0dc0) Stream removed, broadcasting: 3\nI0530 01:01:45.595622 3727 log.go:172] (0xc000a3a0b0) (0xc0000f2e60) Stream removed, broadcasting: 5\n" May 30 01:01:45.601: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-426.svc.cluster.local\tcanonical name = 
externalsvc.services-426.svc.cluster.local.\nName:\texternalsvc.services-426.svc.cluster.local\nAddress: 10.106.94.51\n\n" STEP: deleting ReplicationController externalsvc in namespace services-426, will wait for the garbage collector to delete the pods May 30 01:01:45.662: INFO: Deleting ReplicationController externalsvc took: 7.757872ms May 30 01:01:46.062: INFO: Terminating ReplicationController externalsvc pods took: 400.196138ms May 30 01:01:55.329: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 01:01:55.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-426" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:20.411 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":288,"completed":267,"skipped":4366,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 01:01:55.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 30 01:01:55.511: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-235fcaf2-3b79-4823-8b00-3495e3f46bfc" in namespace "security-context-test-8544" to be "Succeeded or Failed" May 30 01:01:55.514: INFO: Pod "busybox-readonly-false-235fcaf2-3b79-4823-8b00-3495e3f46bfc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.62822ms May 30 01:01:57.518: INFO: Pod "busybox-readonly-false-235fcaf2-3b79-4823-8b00-3495e3f46bfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007302277s May 30 01:01:59.523: INFO: Pod "busybox-readonly-false-235fcaf2-3b79-4823-8b00-3495e3f46bfc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012240774s May 30 01:01:59.523: INFO: Pod "busybox-readonly-false-235fcaf2-3b79-4823-8b00-3495e3f46bfc" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 01:01:59.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8544" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":288,"completed":268,"skipped":4406,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 01:01:59.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 30 01:01:59.648: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 01:01:59.669: INFO: Number of nodes with available pods: 0 May 30 01:01:59.669: INFO: Node latest-worker is running more than one daemon pod May 30 01:02:00.673: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 01:02:00.677: INFO: Number of nodes with available pods: 0 May 30 01:02:00.677: INFO: Node latest-worker is running more than one daemon pod May 30 01:02:01.888: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 01:02:01.925: INFO: Number of nodes with available pods: 0 May 30 01:02:01.925: INFO: Node latest-worker is running more than one daemon pod May 30 01:02:02.674: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 01:02:02.678: INFO: Number of nodes with available pods: 0 May 30 01:02:02.678: INFO: Node latest-worker is running more than one daemon pod May 30 01:02:03.674: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 01:02:03.695: INFO: Number of nodes with available pods: 1 May 30 01:02:03.696: INFO: Node latest-worker is running more than one daemon pod May 30 01:02:04.708: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 01:02:04.712: INFO: Number of nodes with available pods: 2 May 30 01:02:04.712: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 30 01:02:04.808: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 01:02:04.821: INFO: Number of nodes with available pods: 1 May 30 01:02:04.821: INFO: Node latest-worker2 is running more than one daemon pod May 30 01:02:05.827: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 01:02:05.831: INFO: Number of nodes with available pods: 1 May 30 01:02:05.831: INFO: Node latest-worker2 is running more than one daemon pod May 30 01:02:06.826: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 01:02:06.830: INFO: Number of nodes with available pods: 1 May 30 01:02:06.830: INFO: Node latest-worker2 is running more than one daemon pod May 30 01:02:07.827: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 01:02:07.831: INFO: Number of nodes with available pods: 1 May 30 01:02:07.832: INFO: Node latest-worker2 is running more than one daemon pod May 30 01:02:08.828: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 01:02:08.833: INFO: Number of nodes with available pods: 1 May 30 01:02:08.833: INFO: Node latest-worker2 is running more than one daemon pod May 30 01:02:09.827: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 01:02:09.832: INFO: Number of nodes with available pods: 1 May 30 01:02:09.832: INFO: Node latest-worker2 is running more than one daemon pod May 30 01:02:10.826: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 01:02:10.830: INFO: Number of nodes with available pods: 1 May 30 01:02:10.830: INFO: Node latest-worker2 is running more than one daemon pod May 30 01:02:11.827: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 01:02:11.832: INFO: Number of nodes with available pods: 1 May 30 01:02:11.832: INFO: Node latest-worker2 is running more than one daemon pod May 30 01:02:12.826: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 01:02:12.831: INFO: Number of nodes with available pods: 1 May 30 01:02:12.831: INFO: Node latest-worker2 is running more than one daemon pod May 30 01:02:13.827: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node May 30 01:02:13.832: INFO: Number of nodes with available pods: 1 May 30 01:02:13.832: INFO: Node latest-worker2 is running more than one daemon pod May 30 01:02:14.827: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 01:02:14.832: INFO: Number of nodes with available pods: 1 May 30 01:02:14.832: INFO: Node latest-worker2 is running more than one daemon pod May 30 01:02:15.827: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 01:02:15.831: INFO: Number of nodes with available pods: 1 May 30 01:02:15.831: INFO: Node latest-worker2 is running more than one daemon pod May 30 01:02:16.827: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 01:02:16.832: INFO: Number of nodes with available pods: 1 May 30 01:02:16.832: INFO: Node latest-worker2 is running more than one daemon pod May 30 01:02:17.827: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 01:02:17.830: INFO: Number of nodes with available pods: 1 May 30 01:02:17.830: INFO: Node latest-worker2 is running more than one daemon pod May 30 01:02:18.826: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 30 01:02:18.831: INFO: Number of nodes with available pods: 2 May 30 01:02:18.831: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9949, will wait for the garbage collector to delete the pods May 30 01:02:18.894: INFO: Deleting DaemonSet.extensions daemon-set took: 7.220033ms May 30 01:02:19.294: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.327557ms May 30 01:02:25.297: INFO: Number of nodes with available pods: 0 May 30 01:02:25.297: INFO: Number of running nodes: 0, number of available pods: 0 May 30 01:02:25.300: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9949/daemonsets","resourceVersion":"8756162"},"items":null} May 30 01:02:25.302: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9949/pods","resourceVersion":"8756162"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 01:02:25.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9949" for this suite. 
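Note: the "Stop a daemon pod, check that the daemon pod is revived" step reduces to deleting one of the DaemonSet's pods and waiting for the controller to replace it on the same node, which is what the long poll above is doing. A sketch, assuming the label the suite puts on daemon pods (daemonset-name=daemon-set here is illustrative):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	pods := cs.CoreV1().Pods("daemonsets-9949")

	list, err := pods.List(ctx, metav1.ListOptions{LabelSelector: "daemonset-name=daemon-set"})
	if err != nil || len(list.Items) == 0 {
		panic("no daemon pods found")
	}
	victim := list.Items[0].Name
	if err := pods.Delete(ctx, victim, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	// The DaemonSet controller notices the node is missing its daemon pod and
	// schedules a replacement; the suite then polls per-node counts as logged above.
	fmt.Println("deleted", victim)
}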
• [SLOW TEST:25.788 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":288,"completed":269,"skipped":4431,"failed":0} [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 01:02:25.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 30 01:02:25.553: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"a8d74e39-c40f-4019-ba1d-ca4032cf00b3", Controller:(*bool)(0xc005b6b312), BlockOwnerDeletion:(*bool)(0xc005b6b313)}} May 30 01:02:25.582: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"75b701c4-d68f-4996-9c2e-bb75e96c8783", Controller:(*bool)(0xc0040e0cc2), BlockOwnerDeletion:(*bool)(0xc0040e0cc3)}} May 30 01:02:25.618: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"1445eff2-2646-4b5c-9f80-968960f783d8", Controller:(*bool)(0xc0040e0eaa), BlockOwnerDeletion:(*bool)(0xc0040e0eab)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 01:02:30.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1971" for this suite. 
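Note: the three OwnerReference dumps above form a deliberate cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2); the assertion is that the garbage collector keeps making progress rather than deadlocking on the circle. A sketch of how such references are built, mirroring the fields in the dumps (UIDs are server-assigned in reality; the literals here are placeholders):

package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ownerRef builds a controller reference to owner, with BlockOwnerDeletion set
// so foreground deletion would wait on the dependent — the same two *bool
// fields visible in the OwnerReference dumps above.
func ownerRef(owner *v1.Pod) metav1.OwnerReference {
	controller, block := true, true
	return metav1.OwnerReference{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               owner.Name,
		UID:                owner.UID,
		Controller:         &controller,
		BlockOwnerDeletion: &block,
	}
}

func main() {
	pod1 := &v1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "pod1", UID: "uid-1"}}
	pod2 := &v1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "pod2", UID: "uid-2"}}
	pod3 := &v1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "pod3", UID: "uid-3"}}
	// Close the circle, as the test does before deleting the pods.
	pod1.OwnerReferences = []metav1.OwnerReference{ownerRef(pod3)}
	pod2.OwnerReferences = []metav1.OwnerReference{ownerRef(pod1)}
	pod3.OwnerReferences = []metav1.OwnerReference{ownerRef(pod2)}
	_ = []*v1.Pod{pod1, pod2, pod3}
}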
• [SLOW TEST:5.378 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":288,"completed":270,"skipped":4431,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 01:02:30.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 01:02:30.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6642" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":288,"completed":271,"skipped":4451,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 01:02:30.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 30 01:02:31.083: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 30 01:02:31.103: INFO: Waiting for terminating namespaces to be deleted... 
May 30 01:02:31.106: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 30 01:02:31.109: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 30 01:02:31.110: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 30 01:02:31.110: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 30 01:02:31.110: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 30 01:02:31.110: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 30 01:02:31.110: INFO: Container kindnet-cni ready: true, restart count 2 May 30 01:02:31.110: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 30 01:02:31.110: INFO: Container kube-proxy ready: true, restart count 0 May 30 01:02:31.110: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 30 01:02:31.113: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 30 01:02:31.113: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 30 01:02:31.113: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) May 30 01:02:31.113: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 30 01:02:31.114: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 30 01:02:31.114: INFO: Container kindnet-cni ready: true, restart count 2 May 30 01:02:31.114: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 30 01:02:31.114: INFO: Container kube-proxy ready: true, restart count 0 May 30 01:02:31.114: INFO: pod-qos-class-de272f5d-c6f8-4e9e-a7e4-3c0fa6446af4 from pods-6642 started at 2020-05-30 01:02:30 +0000 UTC (1 container statuses recorded) May 30 01:02:31.114: INFO: Container agnhost ready: false, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-c253a587-4f58-496f-8bd9-0d9cab4383df 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-c253a587-4f58-496f-8bd9-0d9cab4383df off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-c253a587-4f58-496f-8bd9-0d9cab4383df [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 01:07:39.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6722" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.391 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":288,"completed":272,"skipped":4482,"failed":0} SSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 01:07:39.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-5085 STEP: creating service affinity-nodeport in namespace services-5085 STEP: creating replication controller affinity-nodeport in namespace services-5085 I0530 01:07:39.568032 7 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-5085, replica count: 3 I0530 01:07:42.618487 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0530 01:07:45.618706 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 30 01:07:45.626: INFO: Creating new exec pod May 30 01:07:50.648: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5085 execpod-affinity9cbt9 -- /bin/sh -x -c nc -zv -t 
-w 2 affinity-nodeport 80' May 30 01:07:53.592: INFO: stderr: "I0530 01:07:53.486202 3749 log.go:172] (0xc000ea00b0) (0xc0007657c0) Create stream\nI0530 01:07:53.486237 3749 log.go:172] (0xc000ea00b0) (0xc0007657c0) Stream added, broadcasting: 1\nI0530 01:07:53.488628 3749 log.go:172] (0xc000ea00b0) Reply frame received for 1\nI0530 01:07:53.488712 3749 log.go:172] (0xc000ea00b0) (0xc0007160a0) Create stream\nI0530 01:07:53.488763 3749 log.go:172] (0xc000ea00b0) (0xc0007160a0) Stream added, broadcasting: 3\nI0530 01:07:53.489907 3749 log.go:172] (0xc000ea00b0) Reply frame received for 3\nI0530 01:07:53.489950 3749 log.go:172] (0xc000ea00b0) (0xc000765860) Create stream\nI0530 01:07:53.489961 3749 log.go:172] (0xc000ea00b0) (0xc000765860) Stream added, broadcasting: 5\nI0530 01:07:53.490958 3749 log.go:172] (0xc000ea00b0) Reply frame received for 5\nI0530 01:07:53.583147 3749 log.go:172] (0xc000ea00b0) Data frame received for 3\nI0530 01:07:53.583196 3749 log.go:172] (0xc0007160a0) (3) Data frame handling\nI0530 01:07:53.583223 3749 log.go:172] (0xc000ea00b0) Data frame received for 5\nI0530 01:07:53.583234 3749 log.go:172] (0xc000765860) (5) Data frame handling\nI0530 01:07:53.583247 3749 log.go:172] (0xc000765860) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0530 01:07:53.583264 3749 log.go:172] (0xc000ea00b0) Data frame received for 5\nI0530 01:07:53.583291 3749 log.go:172] (0xc000765860) (5) Data frame handling\nI0530 01:07:53.584538 3749 log.go:172] (0xc000ea00b0) Data frame received for 1\nI0530 01:07:53.584552 3749 log.go:172] (0xc0007657c0) (1) Data frame handling\nI0530 01:07:53.584561 3749 log.go:172] (0xc0007657c0) (1) Data frame sent\nI0530 01:07:53.584574 3749 log.go:172] (0xc000ea00b0) (0xc0007657c0) Stream removed, broadcasting: 1\nI0530 01:07:53.584593 3749 log.go:172] (0xc000ea00b0) Go away received\nI0530 01:07:53.584994 3749 log.go:172] (0xc000ea00b0) (0xc0007657c0) Stream removed, broadcasting: 1\nI0530 01:07:53.585016 3749 log.go:172] (0xc000ea00b0) (0xc0007160a0) Stream removed, broadcasting: 3\nI0530 01:07:53.585027 3749 log.go:172] (0xc000ea00b0) (0xc000765860) Stream removed, broadcasting: 5\n" May 30 01:07:53.592: INFO: stdout: "" May 30 01:07:53.593: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5085 execpod-affinity9cbt9 -- /bin/sh -x -c nc -zv -t -w 2 10.96.96.58 80' May 30 01:07:53.814: INFO: stderr: "I0530 01:07:53.721663 3784 log.go:172] (0xc00003a4d0) (0xc000634be0) Create stream\nI0530 01:07:53.721731 3784 log.go:172] (0xc00003a4d0) (0xc000634be0) Stream added, broadcasting: 1\nI0530 01:07:53.723901 3784 log.go:172] (0xc00003a4d0) Reply frame received for 1\nI0530 01:07:53.723959 3784 log.go:172] (0xc00003a4d0) (0xc000588f00) Create stream\nI0530 01:07:53.723974 3784 log.go:172] (0xc00003a4d0) (0xc000588f00) Stream added, broadcasting: 3\nI0530 01:07:53.725091 3784 log.go:172] (0xc00003a4d0) Reply frame received for 3\nI0530 01:07:53.725222 3784 log.go:172] (0xc00003a4d0) (0xc000562320) Create stream\nI0530 01:07:53.725234 3784 log.go:172] (0xc00003a4d0) (0xc000562320) Stream added, broadcasting: 5\nI0530 01:07:53.726365 3784 log.go:172] (0xc00003a4d0) Reply frame received for 5\nI0530 01:07:53.807337 3784 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0530 01:07:53.807567 3784 log.go:172] (0xc000562320) (5) Data frame handling\nI0530 01:07:53.807673 3784 log.go:172] (0xc000562320) (5) 
Data frame sent\n+ nc -zv -t -w 2 10.96.96.58 80\nConnection to 10.96.96.58 80 port [tcp/http] succeeded!\nI0530 01:07:53.807713 3784 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0530 01:07:53.807728 3784 log.go:172] (0xc000562320) (5) Data frame handling\nI0530 01:07:53.807759 3784 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0530 01:07:53.807776 3784 log.go:172] (0xc000588f00) (3) Data frame handling\nI0530 01:07:53.809025 3784 log.go:172] (0xc00003a4d0) Data frame received for 1\nI0530 01:07:53.809056 3784 log.go:172] (0xc000634be0) (1) Data frame handling\nI0530 01:07:53.809071 3784 log.go:172] (0xc000634be0) (1) Data frame sent\nI0530 01:07:53.809088 3784 log.go:172] (0xc00003a4d0) (0xc000634be0) Stream removed, broadcasting: 1\nI0530 01:07:53.809434 3784 log.go:172] (0xc00003a4d0) Go away received\nI0530 01:07:53.809759 3784 log.go:172] (0xc00003a4d0) (0xc000634be0) Stream removed, broadcasting: 1\nI0530 01:07:53.809782 3784 log.go:172] (0xc00003a4d0) (0xc000588f00) Stream removed, broadcasting: 3\nI0530 01:07:53.809795 3784 log.go:172] (0xc00003a4d0) (0xc000562320) Stream removed, broadcasting: 5\n" May 30 01:07:53.814: INFO: stdout: "" May 30 01:07:53.815: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5085 execpod-affinity9cbt9 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31394' May 30 01:07:54.050: INFO: stderr: "I0530 01:07:53.955061 3804 log.go:172] (0xc0004ba160) (0xc000502dc0) Create stream\nI0530 01:07:53.955138 3804 log.go:172] (0xc0004ba160) (0xc000502dc0) Stream added, broadcasting: 1\nI0530 01:07:53.958412 3804 log.go:172] (0xc0004ba160) Reply frame received for 1\nI0530 01:07:53.958457 3804 log.go:172] (0xc0004ba160) (0xc000652280) Create stream\nI0530 01:07:53.958471 3804 log.go:172] (0xc0004ba160) (0xc000652280) Stream added, broadcasting: 3\nI0530 01:07:53.959597 3804 log.go:172] (0xc0004ba160) Reply frame received for 3\nI0530 01:07:53.959634 3804 log.go:172] (0xc0004ba160) (0xc000684fa0) Create stream\nI0530 01:07:53.959647 3804 log.go:172] (0xc0004ba160) (0xc000684fa0) Stream added, broadcasting: 5\nI0530 01:07:53.960640 3804 log.go:172] (0xc0004ba160) Reply frame received for 5\nI0530 01:07:54.042576 3804 log.go:172] (0xc0004ba160) Data frame received for 3\nI0530 01:07:54.042623 3804 log.go:172] (0xc000652280) (3) Data frame handling\nI0530 01:07:54.042649 3804 log.go:172] (0xc0004ba160) Data frame received for 5\nI0530 01:07:54.042660 3804 log.go:172] (0xc000684fa0) (5) Data frame handling\nI0530 01:07:54.042679 3804 log.go:172] (0xc000684fa0) (5) Data frame sent\nI0530 01:07:54.042694 3804 log.go:172] (0xc0004ba160) Data frame received for 5\nI0530 01:07:54.042705 3804 log.go:172] (0xc000684fa0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31394\nConnection to 172.17.0.13 31394 port [tcp/31394] succeeded!\nI0530 01:07:54.043929 3804 log.go:172] (0xc0004ba160) Data frame received for 1\nI0530 01:07:54.043950 3804 log.go:172] (0xc000502dc0) (1) Data frame handling\nI0530 01:07:54.043978 3804 log.go:172] (0xc000502dc0) (1) Data frame sent\nI0530 01:07:54.044000 3804 log.go:172] (0xc0004ba160) (0xc000502dc0) Stream removed, broadcasting: 1\nI0530 01:07:54.044022 3804 log.go:172] (0xc0004ba160) Go away received\nI0530 01:07:54.044462 3804 log.go:172] (0xc0004ba160) (0xc000502dc0) Stream removed, broadcasting: 1\nI0530 01:07:54.044509 3804 log.go:172] (0xc0004ba160) (0xc000652280) Stream removed, broadcasting: 3\nI0530 01:07:54.044533 3804 log.go:172] 
(0xc0004ba160) (0xc000684fa0) Stream removed, broadcasting: 5\n" May 30 01:07:54.050: INFO: stdout: "" May 30 01:07:54.050: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5085 execpod-affinity9cbt9 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31394' May 30 01:07:54.264: INFO: stderr: "I0530 01:07:54.182849 3825 log.go:172] (0xc00003a580) (0xc0004721e0) Create stream\nI0530 01:07:54.182922 3825 log.go:172] (0xc00003a580) (0xc0004721e0) Stream added, broadcasting: 1\nI0530 01:07:54.186293 3825 log.go:172] (0xc00003a580) Reply frame received for 1\nI0530 01:07:54.186331 3825 log.go:172] (0xc00003a580) (0xc000442dc0) Create stream\nI0530 01:07:54.186343 3825 log.go:172] (0xc00003a580) (0xc000442dc0) Stream added, broadcasting: 3\nI0530 01:07:54.187561 3825 log.go:172] (0xc00003a580) Reply frame received for 3\nI0530 01:07:54.187627 3825 log.go:172] (0xc00003a580) (0xc00023a0a0) Create stream\nI0530 01:07:54.187652 3825 log.go:172] (0xc00003a580) (0xc00023a0a0) Stream added, broadcasting: 5\nI0530 01:07:54.188787 3825 log.go:172] (0xc00003a580) Reply frame received for 5\nI0530 01:07:54.256364 3825 log.go:172] (0xc00003a580) Data frame received for 3\nI0530 01:07:54.256418 3825 log.go:172] (0xc000442dc0) (3) Data frame handling\nI0530 01:07:54.256443 3825 log.go:172] (0xc00003a580) Data frame received for 5\nI0530 01:07:54.256453 3825 log.go:172] (0xc00023a0a0) (5) Data frame handling\nI0530 01:07:54.256464 3825 log.go:172] (0xc00023a0a0) (5) Data frame sent\nI0530 01:07:54.256474 3825 log.go:172] (0xc00003a580) Data frame received for 5\nI0530 01:07:54.256483 3825 log.go:172] (0xc00023a0a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31394\nConnection to 172.17.0.12 31394 port [tcp/31394] succeeded!\nI0530 01:07:54.258382 3825 log.go:172] (0xc00003a580) Data frame received for 1\nI0530 01:07:54.258405 3825 log.go:172] (0xc0004721e0) (1) Data frame handling\nI0530 01:07:54.258414 3825 log.go:172] (0xc0004721e0) (1) Data frame sent\nI0530 01:07:54.258424 3825 log.go:172] (0xc00003a580) (0xc0004721e0) Stream removed, broadcasting: 1\nI0530 01:07:54.258435 3825 log.go:172] (0xc00003a580) Go away received\nI0530 01:07:54.258837 3825 log.go:172] (0xc00003a580) (0xc0004721e0) Stream removed, broadcasting: 1\nI0530 01:07:54.258861 3825 log.go:172] (0xc00003a580) (0xc000442dc0) Stream removed, broadcasting: 3\nI0530 01:07:54.258879 3825 log.go:172] (0xc00003a580) (0xc00023a0a0) Stream removed, broadcasting: 5\n" May 30 01:07:54.264: INFO: stdout: "" May 30 01:07:54.264: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5085 execpod-affinity9cbt9 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31394/ ; done' May 30 01:07:54.538: INFO: stderr: "I0530 01:07:54.388045 3847 log.go:172] (0xc0005268f0) (0xc0001ea280) Create stream\nI0530 01:07:54.388101 3847 log.go:172] (0xc0005268f0) (0xc0001ea280) Stream added, broadcasting: 1\nI0530 01:07:54.390964 3847 log.go:172] (0xc0005268f0) Reply frame received for 1\nI0530 01:07:54.390997 3847 log.go:172] (0xc0005268f0) (0xc00012adc0) Create stream\nI0530 01:07:54.391007 3847 log.go:172] (0xc0005268f0) (0xc00012adc0) Stream added, broadcasting: 3\nI0530 01:07:54.391857 3847 log.go:172] (0xc0005268f0) Reply frame received for 3\nI0530 01:07:54.391888 3847 log.go:172] (0xc0005268f0) (0xc0001eb220) Create stream\nI0530 01:07:54.391899 3847 
log.go:172] (0xc0005268f0) (0xc0001eb220) Stream added, broadcasting: 5\nI0530 01:07:54.392780 3847 log.go:172] (0xc0005268f0) Reply frame received for 5\nI0530 01:07:54.447830 3847 log.go:172] (0xc0005268f0) Data frame received for 3\nI0530 01:07:54.447878 3847 log.go:172] (0xc00012adc0) (3) Data frame handling\nI0530 01:07:54.447909 3847 log.go:172] (0xc00012adc0) (3) Data frame sent\nI0530 01:07:54.447966 3847 log.go:172] (0xc0005268f0) Data frame received for 5\nI0530 01:07:54.447996 3847 log.go:172] (0xc0001eb220) (5) Data frame handling\nI0530 01:07:54.448019 3847 log.go:172] (0xc0001eb220) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31394/\nI0530 01:07:54.455425 3847 log.go:172] (0xc0005268f0) Data frame received for 3\nI0530 01:07:54.455445 3847 log.go:172] (0xc00012adc0) (3) Data frame handling\nI0530 01:07:54.455477 3847 log.go:172] (0xc00012adc0) (3) Data frame sent\nI0530 01:07:54.456329 3847 log.go:172] (0xc0005268f0) Data frame received for 3\nI0530 01:07:54.456363 3847 log.go:172] (0xc00012adc0) (3) Data frame handling\nI0530 01:07:54.456379 3847 log.go:172] (0xc00012adc0) (3) Data frame sent\nI0530 01:07:54.456398 3847 log.go:172] (0xc0005268f0) Data frame received for 5\nI0530 01:07:54.456411 3847 log.go:172] (0xc0001eb220) (5) Data frame handling\nI0530 01:07:54.456423 3847 log.go:172] (0xc0001eb220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31394/\nI0530 01:07:54.462257 3847 log.go:172] (0xc0005268f0) Data frame received for 3\nI0530 01:07:54.462279 3847 log.go:172] (0xc00012adc0) (3) Data frame handling\nI0530 01:07:54.462299 3847 log.go:172] (0xc00012adc0) (3) Data frame sent\nI0530 01:07:54.463581 3847 log.go:172] (0xc0005268f0) Data frame received for 3\nI0530 01:07:54.463608 3847 log.go:172] (0xc00012adc0) (3) Data frame handling\nI0530 01:07:54.463623 3847 log.go:172] (0xc00012adc0) (3) Data frame sent\nI0530 01:07:54.463640 3847 log.go:172] (0xc0005268f0) Data frame received for 5\nI0530 01:07:54.463650 3847 log.go:172] (0xc0001eb220) (5) Data frame handling\nI0530 01:07:54.463661 3847 log.go:172] (0xc0001eb220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31394/\nI0530 01:07:54.471364 3847 log.go:172] (0xc0005268f0) Data frame received for 3\nI0530 01:07:54.471391 3847 log.go:172] (0xc00012adc0) (3) Data frame handling\nI0530 01:07:54.471409 3847 log.go:172] (0xc00012adc0) (3) Data frame sent\nI0530 01:07:54.471532 3847 log.go:172] (0xc0005268f0) Data frame received for 3\nI0530 01:07:54.471550 3847 log.go:172] (0xc00012adc0) (3) Data frame handling\nI0530 01:07:54.471558 3847 log.go:172] (0xc00012adc0) (3) Data frame sent\nI0530 01:07:54.471569 3847 log.go:172] (0xc0005268f0) Data frame received for 5\nI0530 01:07:54.471575 3847 log.go:172] (0xc0001eb220) (5) Data frame handling\nI0530 01:07:54.471582 3847 log.go:172] (0xc0001eb220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31394/\nI0530 01:07:54.476229 3847 log.go:172] (0xc0005268f0) Data frame received for 3\nI0530 01:07:54.476244 3847 log.go:172] (0xc00012adc0) (3) Data frame handling\nI0530 01:07:54.476256 3847 log.go:172] (0xc00012adc0) (3) Data frame sent\nI0530 01:07:54.476738 3847 log.go:172] (0xc0005268f0) Data frame received for 3\nI0530 01:07:54.476763 3847 log.go:172] (0xc00012adc0) (3) Data frame handling\nI0530 01:07:54.476775 3847 log.go:172] (0xc00012adc0) (3) Data frame sent\nI0530 01:07:54.476792 3847 log.go:172] (0xc0005268f0) Data 
frame received for 5\nI0530 01:07:54.476800 3847 log.go:172] (0xc0001eb220) (5) Data frame handling\nI0530 01:07:54.476809 3847 log.go:172] (0xc0001eb220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31394/\nI0530 01:07:54.480393 3847 log.go:172] (0xc0005268f0) Data frame received for 3\nI0530 01:07:54.480420 3847 log.go:172] (0xc00012adc0) (3) Data frame handling\nI0530 01:07:54.480440 3847 log.go:172] (0xc00012adc0) (3) Data frame sent\nI0530 01:07:54.480853 3847 log.go:172] (0xc0005268f0) Data frame received for 3\nI0530 01:07:54.480874 3847 log.go:172] (0xc00012adc0) (3) Data frame handling\nI0530 01:07:54.480884 3847 log.go:172] (0xc00012adc0) (3) Data frame sent\nI0530 01:07:54.480901 3847 log.go:172] (0xc0005268f0) Data frame received for 5\nI0530 01:07:54.480909 3847 log.go:172] (0xc0001eb220) (5) Data frame handling\nI0530 01:07:54.480917 3847 log.go:172] (0xc0001eb220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31394/\nI0530 01:07:54.485563 3847 log.go:172] (0xc0005268f0) Data frame received for 3\nI0530 01:07:54.485581 3847 log.go:172] (0xc00012adc0) (3) Data frame handling\nI0530 01:07:54.485590 3847 log.go:172] (0xc00012adc0) (3) Data frame sent\nI0530 01:07:54.485922 3847 log.go:172] (0xc0005268f0) Data frame received for 5\nI0530 01:07:54.485958 3847 log.go:172] (0xc0001eb220) (5) Data frame handling\nI0530 01:07:54.485979 3847 log.go:172] (0xc0001eb220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31394/\nI0530 01:07:54.486004 3847 log.go:172] (0xc0005268f0) Data frame received for 3\nI0530 01:07:54.486019 3847 log.go:172] (0xc00012adc0) (3) Data frame handling\nI0530 01:07:54.486042 3847 log.go:172] (0xc00012adc0) (3) Data frame sent\nI0530 01:07:54.489795 3847 log.go:172] (0xc0005268f0) Data frame received for 3\nI0530 01:07:54.489818 3847 log.go:172] (0xc00012adc0) (3) Data frame handling\nI0530 01:07:54.489836 3847 log.go:172] (0xc00012adc0) (3) Data frame sent\nI0530 01:07:54.490479 3847 log.go:172] (0xc0005268f0) Data frame received for 3\nI0530 01:07:54.490507 3847 log.go:172] (0xc00012adc0) (3) Data frame handling\nI0530 01:07:54.490519 3847 log.go:172] (0xc00012adc0) (3) Data frame sent\nI0530 01:07:54.490533 3847 log.go:172] (0xc0005268f0) Data frame received for 5\nI0530 01:07:54.490541 3847 log.go:172] (0xc0001eb220) (5) Data frame handling\nI0530 01:07:54.490548 3847 log.go:172] (0xc0001eb220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31394/\nI0530 01:07:54.494293 3847 log.go:172] (0xc0005268f0) Data frame received for 3\nI0530 01:07:54.494320 3847 log.go:172] (0xc00012adc0) (3) Data frame handling\nI0530 01:07:54.494341 3847 log.go:172] (0xc00012adc0) (3) Data frame sent\nI0530 01:07:54.494747 3847 log.go:172] (0xc0005268f0) Data frame received for 5\nI0530 01:07:54.494776 3847 log.go:172] (0xc0001eb220) (5) Data frame handling\nI0530 01:07:54.494788 3847 log.go:172] (0xc0001eb220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31394/\nI0530 01:07:54.494802 3847 log.go:172] (0xc0005268f0) Data frame received for 3\nI0530 01:07:54.494809 3847 log.go:172] (0xc00012adc0) (3) Data frame handling\nI0530 01:07:54.494816 3847 log.go:172] (0xc00012adc0) (3) Data frame sent\nI0530 01:07:54.498920 3847 log.go:172] (0xc0005268f0) Data frame received for 3\nI0530 01:07:54.498949 3847 log.go:172] (0xc00012adc0) (3) Data frame handling\nI0530 01:07:54.498973 3847 log.go:172] (0xc00012adc0) (3) 
Data frame sent\nI0530 01:07:54.499503 3847 log.go:172] (0xc0005268f0) Data frame received for 3\nI0530 01:07:54.499536 3847 log.go:172] (0xc0005268f0) Data frame received for 5\nI0530 01:07:54.499568 3847 log.go:172] (0xc0001eb220) (5) Data frame handling\nI0530 01:07:54.499592 3847 log.go:172] (0xc0001eb220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31394/\nI0530 01:07:54.499614 3847 log.go:172] (0xc00012adc0) (3) Data frame handling\nI0530 01:07:54.499635 3847 log.go:172] (0xc00012adc0) (3) Data frame sent\nI0530 01:07:54.503577 3847 log.go:172] (0xc0005268f0) Data frame received for 3\nI0530 01:07:54.503600 3847 log.go:172] (0xc00012adc0) (3) Data frame handling\nI0530 01:07:54.503626 3847 log.go:172] (0xc00012adc0) (3) Data frame sent\nI0530 01:07:54.503865 3847 log.go:172] (0xc0005268f0) Data frame received for 5\nI0530 01:07:54.503892 3847 log.go:172] (0xc0001eb220) (5) Data frame handling\nI0530 01:07:54.503913 3847 log.go:172] (0xc0001eb220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31394/\nI0530 01:07:54.504123 3847 log.go:172] (0xc0005268f0) Data frame received for 3\nI0530 01:07:54.504135 3847 log.go:172] (0xc00012adc0) (3) Data frame handling\nI0530 01:07:54.504144 3847 log.go:172] (0xc00012adc0) (3) Data frame sent\nI0530 01:07:54.507490 3847 log.go:172] (0xc0005268f0) Data frame received for 3\nI0530 01:07:54.507513 3847 log.go:172] (0xc00012adc0) (3) Data frame handling\nI0530 01:07:54.507539 3847 log.go:172] (0xc00012adc0) (3) Data frame sent\nI0530 01:07:54.507866 3847 log.go:172] (0xc0005268f0) Data frame received for 3\nI0530 01:07:54.507880 3847 log.go:172] (0xc00012adc0) (3) Data frame handling\nI0530 01:07:54.507887 3847 log.go:172] (0xc00012adc0) (3) Data frame sent\nI0530 01:07:54.507919 3847 log.go:172] (0xc0005268f0) Data frame received for 5\nI0530 01:07:54.507945 3847 log.go:172] (0xc0001eb220) (5) Data frame handling\nI0530 01:07:54.507972 3847 log.go:172] (0xc0001eb220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31394/\nI0530 01:07:54.511976 3847 log.go:172] (0xc0005268f0) Data frame received for 3\nI0530 01:07:54.512005 3847 log.go:172] (0xc00012adc0) (3) Data frame handling\nI0530 01:07:54.512024 3847 log.go:172] (0xc00012adc0) (3) Data frame sent\nI0530 01:07:54.512276 3847 log.go:172] (0xc0005268f0) Data frame received for 5\nI0530 01:07:54.512311 3847 log.go:172] (0xc0001eb220) (5) Data frame handling\nI0530 01:07:54.512333 3847 log.go:172] (0xc0001eb220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31394/\nI0530 01:07:54.512402 3847 log.go:172] (0xc0005268f0) Data frame received for 3\nI0530 01:07:54.512434 3847 log.go:172] (0xc00012adc0) (3) Data frame handling\nI0530 01:07:54.512461 3847 log.go:172] (0xc00012adc0) (3) Data frame sent\nI0530 01:07:54.516601 3847 log.go:172] (0xc0005268f0) Data frame received for 3\nI0530 01:07:54.516629 3847 log.go:172] (0xc00012adc0) (3) Data frame handling\nI0530 01:07:54.516657 3847 log.go:172] (0xc00012adc0) (3) Data frame sent\nI0530 01:07:54.516698 3847 log.go:172] (0xc0005268f0) Data frame received for 5\nI0530 01:07:54.516713 3847 log.go:172] (0xc0001eb220) (5) Data frame handling\nI0530 01:07:54.516724 3847 log.go:172] (0xc0001eb220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31394/\nI0530 01:07:54.516757 3847 log.go:172] (0xc0005268f0) Data frame received for 3\nI0530 01:07:54.516773 3847 log.go:172] (0xc00012adc0) (3) 
Data frame handling\nI0530 01:07:54.516787 3847 log.go:172] (0xc00012adc0) (3) Data frame sent\nI0530 01:07:54.521078 3847 log.go:172] (0xc0005268f0) Data frame received for 3\nI0530 01:07:54.521104 3847 log.go:172] (0xc00012adc0) (3) Data frame handling\nI0530 01:07:54.521310 3847 log.go:172] (0xc00012adc0) (3) Data frame sent\nI0530 01:07:54.521570 3847 log.go:172] (0xc0005268f0) Data frame received for 5\nI0530 01:07:54.521595 3847 log.go:172] (0xc0001eb220) (5) Data frame handling\nI0530 01:07:54.521609 3847 log.go:172] (0xc0001eb220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31394/\nI0530 01:07:54.521629 3847 log.go:172] (0xc0005268f0) Data frame received for 3\nI0530 01:07:54.521640 3847 log.go:172] (0xc00012adc0) (3) Data frame handling\nI0530 01:07:54.521650 3847 log.go:172] (0xc00012adc0) (3) Data frame sent\nI0530 01:07:54.525601 3847 log.go:172] (0xc0005268f0) Data frame received for 3\nI0530 01:07:54.525624 3847 log.go:172] (0xc00012adc0) (3) Data frame handling\nI0530 01:07:54.525643 3847 log.go:172] (0xc00012adc0) (3) Data frame sent\nI0530 01:07:54.525933 3847 log.go:172] (0xc0005268f0) Data frame received for 5\nI0530 01:07:54.525943 3847 log.go:172] (0xc0001eb220) (5) Data frame handling\nI0530 01:07:54.525949 3847 log.go:172] (0xc0001eb220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31394/\nI0530 01:07:54.526045 3847 log.go:172] (0xc0005268f0) Data frame received for 3\nI0530 01:07:54.526054 3847 log.go:172] (0xc00012adc0) (3) Data frame handling\nI0530 01:07:54.526061 3847 log.go:172] (0xc00012adc0) (3) Data frame sent\nI0530 01:07:54.530449 3847 log.go:172] (0xc0005268f0) Data frame received for 3\nI0530 01:07:54.530478 3847 log.go:172] (0xc00012adc0) (3) Data frame handling\nI0530 01:07:54.530508 3847 log.go:172] (0xc00012adc0) (3) Data frame sent\nI0530 01:07:54.531144 3847 log.go:172] (0xc0005268f0) Data frame received for 5\nI0530 01:07:54.531180 3847 log.go:172] (0xc0001eb220) (5) Data frame handling\nI0530 01:07:54.531303 3847 log.go:172] (0xc0005268f0) Data frame received for 3\nI0530 01:07:54.531329 3847 log.go:172] (0xc00012adc0) (3) Data frame handling\nI0530 01:07:54.532666 3847 log.go:172] (0xc0005268f0) Data frame received for 1\nI0530 01:07:54.532689 3847 log.go:172] (0xc0001ea280) (1) Data frame handling\nI0530 01:07:54.532706 3847 log.go:172] (0xc0001ea280) (1) Data frame sent\nI0530 01:07:54.532735 3847 log.go:172] (0xc0005268f0) (0xc0001ea280) Stream removed, broadcasting: 1\nI0530 01:07:54.532774 3847 log.go:172] (0xc0005268f0) Go away received\nI0530 01:07:54.533294 3847 log.go:172] (0xc0005268f0) (0xc0001ea280) Stream removed, broadcasting: 1\nI0530 01:07:54.533315 3847 log.go:172] (0xc0005268f0) (0xc00012adc0) Stream removed, broadcasting: 3\nI0530 01:07:54.533324 3847 log.go:172] (0xc0005268f0) (0xc0001eb220) Stream removed, broadcasting: 5\n" May 30 01:07:54.538: INFO: stdout: "\naffinity-nodeport-bmwht\naffinity-nodeport-bmwht\naffinity-nodeport-bmwht\naffinity-nodeport-bmwht\naffinity-nodeport-bmwht\naffinity-nodeport-bmwht\naffinity-nodeport-bmwht\naffinity-nodeport-bmwht\naffinity-nodeport-bmwht\naffinity-nodeport-bmwht\naffinity-nodeport-bmwht\naffinity-nodeport-bmwht\naffinity-nodeport-bmwht\naffinity-nodeport-bmwht\naffinity-nodeport-bmwht\naffinity-nodeport-bmwht" May 30 01:07:54.538: INFO: Received response from host: May 30 01:07:54.538: INFO: Received response from host: affinity-nodeport-bmwht May 30 01:07:54.538: INFO: Received response from host: 
affinity-nodeport-bmwht May 30 01:07:54.538: INFO: Received response from host: affinity-nodeport-bmwht May 30 01:07:54.538: INFO: Received response from host: affinity-nodeport-bmwht May 30 01:07:54.538: INFO: Received response from host: affinity-nodeport-bmwht May 30 01:07:54.538: INFO: Received response from host: affinity-nodeport-bmwht May 30 01:07:54.538: INFO: Received response from host: affinity-nodeport-bmwht May 30 01:07:54.538: INFO: Received response from host: affinity-nodeport-bmwht May 30 01:07:54.538: INFO: Received response from host: affinity-nodeport-bmwht May 30 01:07:54.538: INFO: Received response from host: affinity-nodeport-bmwht May 30 01:07:54.538: INFO: Received response from host: affinity-nodeport-bmwht May 30 01:07:54.538: INFO: Received response from host: affinity-nodeport-bmwht May 30 01:07:54.538: INFO: Received response from host: affinity-nodeport-bmwht May 30 01:07:54.538: INFO: Received response from host: affinity-nodeport-bmwht May 30 01:07:54.538: INFO: Received response from host: affinity-nodeport-bmwht May 30 01:07:54.538: INFO: Received response from host: affinity-nodeport-bmwht May 30 01:07:54.538: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-5085, will wait for the garbage collector to delete the pods May 30 01:07:54.665: INFO: Deleting ReplicationController affinity-nodeport took: 24.117329ms May 30 01:07:55.365: INFO: Terminating ReplicationController affinity-nodeport pods took: 700.236907ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 01:08:05.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5085" for this suite. 
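For readers reproducing this check by hand: the suite builds the Service programmatically, but a minimal manifest with the same behavior might look like the sketch below (names, namespace, and ports are illustrative, not necessarily the suite's actual values). With sessionAffinity: ClientIP set, kube-proxy pins each client IP to a single backend, which is why all sixteen curl attempts above returned the same pod name.

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: affinity-nodeport        # illustrative; this run used the same name in namespace services-5085
    spec:
      type: NodePort
      sessionAffinity: ClientIP      # pin each client IP to one backend pod
      selector:
        app: affinity-nodeport
      ports:
      - protocol: TCP
        port: 80
        targetPort: 9376             # illustrative backend port
    EOF
    # Then repeat the request loop against <nodeIP>:<nodePort>, as the test does
    # (172.17.0.13:31394 are the values from this particular run):
    for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31394/; done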
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:26.004 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":273,"skipped":4488,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 01:08:05.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 30 01:08:09.990: INFO: Successfully updated pod "annotationupdate703c72ce-037b-4c14-94e5-f8337ea09e0c" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 01:08:12.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1268" for this suite. 
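The annotation-update check above relies on the downwardAPI volume type: the kubelet projects pod metadata into a file and rewrites that file when the metadata changes. A minimal stand-alone reproduction, with a hypothetical pod name, image, and mount path, could be:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: annotationupdate-demo
      annotations:
        build: "one"
    spec:
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
    EOF
    # Changing the annotation should eventually be reflected in the mounted file,
    # which is the update the test waits to observe:
    kubectl annotate pod annotationupdate-demo build="two" --overwrite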
• [SLOW TEST:6.704 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":274,"skipped":4540,"failed":0} [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 01:08:12.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 30 01:08:12.134: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 01:08:13.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5545" for this suite. 
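The defaulting test exercises the default keyword of a structural CRD schema (apiextensions.k8s.io/v1): defaults are applied both to incoming create/update requests and to objects read back from storage, which is the pair of paths the test name refers to. A hypothetical minimal CRD demonstrating this (group, kind, and field names are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: widgets.example.com
    spec:
      group: example.com
      scope: Namespaced
      names:
        plural: widgets
        singular: widget
        kind: Widget
      versions:
      - name: v1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
            properties:
              spec:
                type: object
                properties:
                  replicas:
                    type: integer
                    default: 1   # filled in on requests and when serving undefaulted stored objects
    EOF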
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":288,"completed":275,"skipped":4540,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 01:08:13.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 30 01:08:13.483: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 30 01:08:16.430: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8986 create -f -' May 30 01:08:20.704: INFO: stderr: "" May 30 01:08:20.704: INFO: stdout: "e2e-test-crd-publish-openapi-4958-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 30 01:08:20.705: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8986 delete e2e-test-crd-publish-openapi-4958-crds test-cr' May 30 01:08:20.831: INFO: stderr: "" May 30 01:08:20.831: INFO: stdout: "e2e-test-crd-publish-openapi-4958-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" May 30 01:08:20.831: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8986 apply -f -' May 30 01:08:21.133: INFO: stderr: "" May 30 01:08:21.133: INFO: stdout: "e2e-test-crd-publish-openapi-4958-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 30 01:08:21.133: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8986 delete e2e-test-crd-publish-openapi-4958-crds test-cr' May 30 01:08:21.237: INFO: stderr: "" May 30 01:08:21.237: INFO: stdout: "e2e-test-crd-publish-openapi-4958-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 30 01:08:21.237: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4958-crds' May 30 01:08:21.486: INFO: stderr: "" May 30 01:08:21.486: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4958-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 01:08:23.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"crd-publish-openapi-8986" for this suite. • [SLOW TEST:10.063 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":288,"completed":276,"skipped":4563,"failed":0} SS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 01:08:23.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 30 01:08:23.533: INFO: Waiting up to 5m0s for pod "busybox-user-65534-9030d223-84cf-404b-a730-d8a74526f5dc" in namespace "security-context-test-6387" to be "Succeeded or Failed" May 30 01:08:23.551: INFO: Pod "busybox-user-65534-9030d223-84cf-404b-a730-d8a74526f5dc": Phase="Pending", Reason="", readiness=false. Elapsed: 17.885865ms May 30 01:08:25.657: INFO: Pod "busybox-user-65534-9030d223-84cf-404b-a730-d8a74526f5dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124202131s May 30 01:08:27.662: INFO: Pod "busybox-user-65534-9030d223-84cf-404b-a730-d8a74526f5dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.128792999s May 30 01:08:27.662: INFO: Pod "busybox-user-65534-9030d223-84cf-404b-a730-d8a74526f5dc" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 01:08:27.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6387" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":277,"skipped":4565,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 01:08:27.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-ad55234e-6004-451c-ab71-0783930b218c STEP: Creating secret with name s-test-opt-upd-61b077b1-4256-4a15-897e-58263d4f9cac STEP: Creating the pod STEP: Deleting secret s-test-opt-del-ad55234e-6004-451c-ab71-0783930b218c STEP: Updating secret s-test-opt-upd-61b077b1-4256-4a15-897e-58263d4f9cac STEP: Creating secret with name s-test-opt-create-ec5f3135-5965-44bf-be9b-52510099951b STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 01:08:38.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9985" for this suite. • [SLOW TEST:10.389 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":278,"skipped":4607,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 01:08:38.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-6550 STEP: creating a selector STEP: Creating the service pods in kubernetes May 30 01:08:38.149: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 30 01:08:38.275: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 30 01:08:40.279: INFO: The status of Pod netserver-0 is 
Pending, waiting for it to be Running (with Ready = true) May 30 01:08:42.280: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 30 01:08:44.303: INFO: The status of Pod netserver-0 is Running (Ready = false) May 30 01:08:46.280: INFO: The status of Pod netserver-0 is Running (Ready = false) May 30 01:08:48.280: INFO: The status of Pod netserver-0 is Running (Ready = false) May 30 01:08:50.280: INFO: The status of Pod netserver-0 is Running (Ready = false) May 30 01:08:52.280: INFO: The status of Pod netserver-0 is Running (Ready = false) May 30 01:08:54.280: INFO: The status of Pod netserver-0 is Running (Ready = false) May 30 01:08:56.279: INFO: The status of Pod netserver-0 is Running (Ready = false) May 30 01:08:58.280: INFO: The status of Pod netserver-0 is Running (Ready = false) May 30 01:09:00.280: INFO: The status of Pod netserver-0 is Running (Ready = true) May 30 01:09:00.286: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 30 01:09:04.349: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.9:8080/dial?request=hostname&protocol=http&host=10.244.1.8&port=8080&tries=1'] Namespace:pod-network-test-6550 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 01:09:04.349: INFO: >>> kubeConfig: /root/.kube/config I0530 01:09:04.386951 7 log.go:172] (0xc0027758c0) (0xc0010ec320) Create stream I0530 01:09:04.386986 7 log.go:172] (0xc0027758c0) (0xc0010ec320) Stream added, broadcasting: 1 I0530 01:09:04.388858 7 log.go:172] (0xc0027758c0) Reply frame received for 1 I0530 01:09:04.388918 7 log.go:172] (0xc0027758c0) (0xc000a568c0) Create stream I0530 01:09:04.388935 7 log.go:172] (0xc0027758c0) (0xc000a568c0) Stream added, broadcasting: 3 I0530 01:09:04.390184 7 log.go:172] (0xc0027758c0) Reply frame received for 3 I0530 01:09:04.390229 7 log.go:172] (0xc0027758c0) (0xc0011f1900) Create stream I0530 01:09:04.390242 7 log.go:172] (0xc0027758c0) (0xc0011f1900) Stream added, broadcasting: 5 I0530 01:09:04.391103 7 log.go:172] (0xc0027758c0) Reply frame received for 5 I0530 01:09:04.510489 7 log.go:172] (0xc0027758c0) Data frame received for 3 I0530 01:09:04.510520 7 log.go:172] (0xc000a568c0) (3) Data frame handling I0530 01:09:04.510539 7 log.go:172] (0xc000a568c0) (3) Data frame sent I0530 01:09:04.511152 7 log.go:172] (0xc0027758c0) Data frame received for 3 I0530 01:09:04.511199 7 log.go:172] (0xc000a568c0) (3) Data frame handling I0530 01:09:04.511233 7 log.go:172] (0xc0027758c0) Data frame received for 5 I0530 01:09:04.511255 7 log.go:172] (0xc0011f1900) (5) Data frame handling I0530 01:09:04.512775 7 log.go:172] (0xc0027758c0) Data frame received for 1 I0530 01:09:04.512810 7 log.go:172] (0xc0010ec320) (1) Data frame handling I0530 01:09:04.512832 7 log.go:172] (0xc0010ec320) (1) Data frame sent I0530 01:09:04.512866 7 log.go:172] (0xc0027758c0) (0xc0010ec320) Stream removed, broadcasting: 1 I0530 01:09:04.512902 7 log.go:172] (0xc0027758c0) Go away received I0530 01:09:04.512962 7 log.go:172] (0xc0027758c0) (0xc0010ec320) Stream removed, broadcasting: 1 I0530 01:09:04.512998 7 log.go:172] (0xc0027758c0) (0xc000a568c0) Stream removed, broadcasting: 3 I0530 01:09:04.513021 7 log.go:172] (0xc0027758c0) (0xc0011f1900) Stream removed, broadcasting: 5 May 30 01:09:04.513: INFO: Waiting for responses: map[] May 30 01:09:04.516: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://10.244.1.9:8080/dial?request=hostname&protocol=http&host=10.244.2.238&port=8080&tries=1'] Namespace:pod-network-test-6550 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 30 01:09:04.516: INFO: >>> kubeConfig: /root/.kube/config I0530 01:09:04.555384 7 log.go:172] (0xc002f6e0b0) (0xc000a577c0) Create stream I0530 01:09:04.555403 7 log.go:172] (0xc002f6e0b0) (0xc000a577c0) Stream added, broadcasting: 1 I0530 01:09:04.557435 7 log.go:172] (0xc002f6e0b0) Reply frame received for 1 I0530 01:09:04.557476 7 log.go:172] (0xc002f6e0b0) (0xc0010ec8c0) Create stream I0530 01:09:04.557492 7 log.go:172] (0xc002f6e0b0) (0xc0010ec8c0) Stream added, broadcasting: 3 I0530 01:09:04.558511 7 log.go:172] (0xc002f6e0b0) Reply frame received for 3 I0530 01:09:04.558562 7 log.go:172] (0xc002f6e0b0) (0xc0011f1ea0) Create stream I0530 01:09:04.558593 7 log.go:172] (0xc002f6e0b0) (0xc0011f1ea0) Stream added, broadcasting: 5 I0530 01:09:04.559371 7 log.go:172] (0xc002f6e0b0) Reply frame received for 5 I0530 01:09:04.633630 7 log.go:172] (0xc002f6e0b0) Data frame received for 3 I0530 01:09:04.633666 7 log.go:172] (0xc0010ec8c0) (3) Data frame handling I0530 01:09:04.633690 7 log.go:172] (0xc0010ec8c0) (3) Data frame sent I0530 01:09:04.634036 7 log.go:172] (0xc002f6e0b0) Data frame received for 3 I0530 01:09:04.634055 7 log.go:172] (0xc0010ec8c0) (3) Data frame handling I0530 01:09:04.634278 7 log.go:172] (0xc002f6e0b0) Data frame received for 5 I0530 01:09:04.634310 7 log.go:172] (0xc0011f1ea0) (5) Data frame handling I0530 01:09:04.635895 7 log.go:172] (0xc002f6e0b0) Data frame received for 1 I0530 01:09:04.635928 7 log.go:172] (0xc000a577c0) (1) Data frame handling I0530 01:09:04.635951 7 log.go:172] (0xc000a577c0) (1) Data frame sent I0530 01:09:04.635991 7 log.go:172] (0xc002f6e0b0) (0xc000a577c0) Stream removed, broadcasting: 1 I0530 01:09:04.636016 7 log.go:172] (0xc002f6e0b0) Go away received I0530 01:09:04.636141 7 log.go:172] (0xc002f6e0b0) (0xc000a577c0) Stream removed, broadcasting: 1 I0530 01:09:04.636178 7 log.go:172] (0xc002f6e0b0) (0xc0010ec8c0) Stream removed, broadcasting: 3 I0530 01:09:04.636202 7 log.go:172] (0xc002f6e0b0) (0xc0011f1ea0) Stream removed, broadcasting: 5 May 30 01:09:04.636: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 01:09:04.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6550" for this suite. 
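The two ExecWithOptions calls above are the whole of the connectivity check: a test-container-pod curls the netserver pod's /dial endpoint, which in turn dials the target pod over HTTP and reports which hostnames answered; "Waiting for responses: map[]" means every expected hostname was seen. Reproduced by hand against this run's pod IPs (which are, of course, specific to this cluster):

    # Pod names, namespace, and IPs below are the ones from this run; substitute your own.
    kubectl exec -n pod-network-test-6550 test-container-pod -- \
      /bin/sh -c "curl -g -q -s 'http://10.244.1.9:8080/dial?request=hostname&protocol=http&host=10.244.1.8&port=8080&tries=1'"
    # The endpoint answers with a small JSON body listing the hostnames it reached,
    # e.g. {"responses":["netserver-0"]} (format per the agnhost netexec helper).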
• [SLOW TEST:26.583 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":288,"completed":279,"skipped":4622,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 01:09:04.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 30 01:09:04.742: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 01:09:05.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2249" for this suite. 
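The create/delete CRD test is a lifecycle smoke test: register a definition, wait for the API server to establish it, then remove it. By hand it might look like the following (group and kind are illustrative; the root-level x-kubernetes-preserve-unknown-fields flag is the same schema shape the earlier "preserving unknown fields at the schema root" test published):

    kubectl create -f - <<'EOF'
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: crontabs.stable.example.com
    spec:
      group: stable.example.com
      scope: Namespaced
      names:
        plural: crontabs
        singular: crontab
        kind: CronTab
      versions:
      - name: v1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
            x-kubernetes-preserve-unknown-fields: true
    EOF
    kubectl wait --for condition=established --timeout=60s crd/crontabs.stable.example.com
    kubectl delete crd crontabs.stable.example.com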
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":288,"completed":280,"skipped":4625,"failed":0} SSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 01:09:05.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 01:09:05.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-4278" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":288,"completed":281,"skipped":4629,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 01:09:05.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test hostPath mode May 30 01:09:05.973: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5569" to be "Succeeded or Failed" May 30 01:09:06.013: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 40.424529ms May 30 01:09:08.017: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043936971s May 30 01:09:10.031: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058253518s May 30 01:09:12.164: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.191147508s STEP: Saw pod success May 30 01:09:12.164: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" May 30 01:09:12.167: INFO: Trying to get logs from node latest-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod May 30 01:09:12.291: INFO: Waiting for pod pod-host-path-test to disappear May 30 01:09:12.318: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 01:09:12.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-5569" for this suite. • [SLOW TEST:6.509 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":282,"skipped":4645,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 01:09:12.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 01:09:29.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2678" for this suite. • [SLOW TEST:17.135 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":288,"completed":283,"skipped":4652,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 01:09:29.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-k4wt STEP: Creating a pod to test atomic-volume-subpath May 30 01:09:29.623: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-k4wt" in namespace "subpath-2516" to be "Succeeded or Failed" May 30 01:09:29.630: INFO: Pod "pod-subpath-test-secret-k4wt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.862462ms May 30 01:09:31.634: INFO: Pod "pod-subpath-test-secret-k4wt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010588218s May 30 01:09:33.653: INFO: Pod "pod-subpath-test-secret-k4wt": Phase="Running", Reason="", readiness=true. Elapsed: 4.0297913s May 30 01:09:35.658: INFO: Pod "pod-subpath-test-secret-k4wt": Phase="Running", Reason="", readiness=true. Elapsed: 6.034625324s May 30 01:09:37.662: INFO: Pod "pod-subpath-test-secret-k4wt": Phase="Running", Reason="", readiness=true. Elapsed: 8.038525114s May 30 01:09:39.667: INFO: Pod "pod-subpath-test-secret-k4wt": Phase="Running", Reason="", readiness=true. Elapsed: 10.043122776s May 30 01:09:41.671: INFO: Pod "pod-subpath-test-secret-k4wt": Phase="Running", Reason="", readiness=true. Elapsed: 12.047804121s May 30 01:09:43.676: INFO: Pod "pod-subpath-test-secret-k4wt": Phase="Running", Reason="", readiness=true. Elapsed: 14.052336782s May 30 01:09:45.681: INFO: Pod "pod-subpath-test-secret-k4wt": Phase="Running", Reason="", readiness=true. Elapsed: 16.057127344s May 30 01:09:47.685: INFO: Pod "pod-subpath-test-secret-k4wt": Phase="Running", Reason="", readiness=true. Elapsed: 18.061666048s May 30 01:09:49.689: INFO: Pod "pod-subpath-test-secret-k4wt": Phase="Running", Reason="", readiness=true. Elapsed: 20.065967897s May 30 01:09:51.694: INFO: Pod "pod-subpath-test-secret-k4wt": Phase="Running", Reason="", readiness=true. Elapsed: 22.070897361s May 30 01:09:53.712: INFO: Pod "pod-subpath-test-secret-k4wt": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.088151733s STEP: Saw pod success May 30 01:09:53.712: INFO: Pod "pod-subpath-test-secret-k4wt" satisfied condition "Succeeded or Failed" May 30 01:09:53.742: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-secret-k4wt container test-container-subpath-secret-k4wt: STEP: delete the pod May 30 01:09:53.817: INFO: Waiting for pod pod-subpath-test-secret-k4wt to disappear May 30 01:09:53.828: INFO: Pod pod-subpath-test-secret-k4wt no longer exists STEP: Deleting pod pod-subpath-test-secret-k4wt May 30 01:09:53.828: INFO: Deleting pod "pod-subpath-test-secret-k4wt" in namespace "subpath-2516" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 01:09:53.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2516" for this suite. • [SLOW TEST:24.338 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":288,"completed":284,"skipped":4684,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 01:09:53.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium May 30 01:09:54.166: INFO: Waiting up to 5m0s for pod "pod-cdc8773f-849b-4b4b-8421-be296e10fff9" in namespace "emptydir-465" to be "Succeeded or Failed" May 30 01:09:54.176: INFO: Pod "pod-cdc8773f-849b-4b4b-8421-be296e10fff9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.026264ms May 30 01:09:56.181: INFO: Pod "pod-cdc8773f-849b-4b4b-8421-be296e10fff9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014726016s May 30 01:09:58.186: INFO: Pod "pod-cdc8773f-849b-4b4b-8421-be296e10fff9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019293633s STEP: Saw pod success May 30 01:09:58.186: INFO: Pod "pod-cdc8773f-849b-4b4b-8421-be296e10fff9" satisfied condition "Succeeded or Failed" May 30 01:09:58.189: INFO: Trying to get logs from node latest-worker pod pod-cdc8773f-849b-4b4b-8421-be296e10fff9 container test-container: STEP: delete the pod May 30 01:09:58.213: INFO: Waiting for pod pod-cdc8773f-849b-4b4b-8421-be296e10fff9 to disappear May 30 01:09:58.217: INFO: Pod pod-cdc8773f-849b-4b4b-8421-be296e10fff9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 01:09:58.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-465" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":285,"skipped":4694,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 01:09:58.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 30 01:09:58.376: INFO: Waiting up to 5m0s for pod "downward-api-ada94a10-0af2-4988-831b-c9ad66c4d5bc" in namespace "downward-api-3979" to be "Succeeded or Failed" May 30 01:09:58.410: INFO: Pod "downward-api-ada94a10-0af2-4988-831b-c9ad66c4d5bc": Phase="Pending", Reason="", readiness=false. Elapsed: 34.503513ms May 30 01:10:00.472: INFO: Pod "downward-api-ada94a10-0af2-4988-831b-c9ad66c4d5bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096452941s May 30 01:10:02.476: INFO: Pod "downward-api-ada94a10-0af2-4988-831b-c9ad66c4d5bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.100377887s STEP: Saw pod success May 30 01:10:02.476: INFO: Pod "downward-api-ada94a10-0af2-4988-831b-c9ad66c4d5bc" satisfied condition "Succeeded or Failed" May 30 01:10:02.479: INFO: Trying to get logs from node latest-worker2 pod downward-api-ada94a10-0af2-4988-831b-c9ad66c4d5bc container dapi-container: STEP: delete the pod May 30 01:10:02.502: INFO: Waiting for pod downward-api-ada94a10-0af2-4988-831b-c9ad66c4d5bc to disappear May 30 01:10:02.512: INFO: Pod downward-api-ada94a10-0af2-4988-831b-c9ad66c4d5bc no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 01:10:02.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3979" for this suite. 
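The host-IP test is the env-var flavor of the downward API: status.hostIP is injected through a fieldRef when the container starts (unlike volume projections, env vars are not updated afterwards). An illustrative pod, with hypothetical names:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-api-hostip-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
        env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP   # the IP of the node the pod was scheduled onto
    EOF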
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":288,"completed":286,"skipped":4709,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 01:10:02.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-9875c0a6-89cb-4fd6-b2fa-1bdc15a159cf STEP: Creating a pod to test consume configMaps May 30 01:10:02.652: INFO: Waiting up to 5m0s for pod "pod-configmaps-443565a9-4515-4e39-8d54-d49f56ce4d5d" in namespace "configmap-4465" to be "Succeeded or Failed" May 30 01:10:02.671: INFO: Pod "pod-configmaps-443565a9-4515-4e39-8d54-d49f56ce4d5d": Phase="Pending", Reason="", readiness=false. Elapsed: 18.882419ms May 30 01:10:04.711: INFO: Pod "pod-configmaps-443565a9-4515-4e39-8d54-d49f56ce4d5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05891323s May 30 01:10:06.729: INFO: Pod "pod-configmaps-443565a9-4515-4e39-8d54-d49f56ce4d5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076562604s STEP: Saw pod success May 30 01:10:06.729: INFO: Pod "pod-configmaps-443565a9-4515-4e39-8d54-d49f56ce4d5d" satisfied condition "Succeeded or Failed" May 30 01:10:06.732: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-443565a9-4515-4e39-8d54-d49f56ce4d5d container configmap-volume-test: STEP: delete the pod May 30 01:10:06.777: INFO: Waiting for pod pod-configmaps-443565a9-4515-4e39-8d54-d49f56ce4d5d to disappear May 30 01:10:06.788: INFO: Pod pod-configmaps-443565a9-4515-4e39-8d54-d49f56ce4d5d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 30 01:10:06.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4465" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":287,"skipped":4721,"failed":0} SS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 30 01:10:06.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-bqjl4 in namespace proxy-6727 I0530 01:10:07.088930 7 runners.go:190] Created replication controller with name: proxy-service-bqjl4, namespace: proxy-6727, replica count: 1 I0530 01:10:08.139438 7 runners.go:190] proxy-service-bqjl4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0530 01:10:09.139814 7 runners.go:190] proxy-service-bqjl4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0530 01:10:10.140089 7 runners.go:190] proxy-service-bqjl4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0530 01:10:11.140328 7 runners.go:190] proxy-service-bqjl4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0530 01:10:12.140572 7 runners.go:190] proxy-service-bqjl4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0530 01:10:13.140914 7 runners.go:190] proxy-service-bqjl4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0530 01:10:14.141328 7 runners.go:190] proxy-service-bqjl4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0530 01:10:15.141585 7 runners.go:190] proxy-service-bqjl4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0530 01:10:16.141915 7 runners.go:190] proxy-service-bqjl4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0530 01:10:17.142167 7 runners.go:190] proxy-service-bqjl4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0530 01:10:18.142479 7 runners.go:190] proxy-service-bqjl4 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 30 01:10:18.146: INFO: setup took 11.262649824s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 30 01:10:18.154: INFO: (0) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:162/proxy/: bar (200; 7.676152ms) May 30 01:10:18.155: INFO: (0) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:160/proxy/: foo (200; 7.863367ms) May 30 
01:10:18.159: INFO: (0) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:160/proxy/: foo (200; 11.756934ms) May 30 01:10:18.159: INFO: (0) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q/proxy/: test (200; 11.940382ms) May 30 01:10:18.163: INFO: (0) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:1080/proxy/: ... (200; 15.909492ms) May 30 01:10:18.163: INFO: (0) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:162/proxy/: bar (200; 15.989538ms) May 30 01:10:18.163: INFO: (0) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:1080/proxy/: test<... (200; 16.121791ms) May 30 01:10:18.163: INFO: (0) /api/v1/namespaces/proxy-6727/services/proxy-service-bqjl4:portname2/proxy/: bar (200; 16.246565ms) May 30 01:10:18.167: INFO: (0) /api/v1/namespaces/proxy-6727/services/http:proxy-service-bqjl4:portname1/proxy/: foo (200; 19.97068ms) May 30 01:10:18.167: INFO: (0) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:462/proxy/: tls qux (200; 20.527822ms) May 30 01:10:18.168: INFO: (0) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:460/proxy/: tls baz (200; 20.738336ms) May 30 01:10:18.168: INFO: (0) /api/v1/namespaces/proxy-6727/services/http:proxy-service-bqjl4:portname2/proxy/: bar (200; 20.831854ms) May 30 01:10:18.168: INFO: (0) /api/v1/namespaces/proxy-6727/services/proxy-service-bqjl4:portname1/proxy/: foo (200; 20.90281ms) May 30 01:10:18.168: INFO: (0) /api/v1/namespaces/proxy-6727/services/https:proxy-service-bqjl4:tlsportname2/proxy/: tls qux (200; 20.903388ms) May 30 01:10:18.168: INFO: (0) /api/v1/namespaces/proxy-6727/services/https:proxy-service-bqjl4:tlsportname1/proxy/: tls baz (200; 20.897665ms) May 30 01:10:18.171: INFO: (0) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:443/proxy/: test (200; 8.858265ms) May 30 01:10:18.180: INFO: (1) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:160/proxy/: foo (200; 8.920406ms) May 30 01:10:18.180: INFO: (1) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:160/proxy/: foo (200; 9.02051ms) May 30 01:10:18.180: INFO: (1) /api/v1/namespaces/proxy-6727/services/https:proxy-service-bqjl4:tlsportname2/proxy/: tls qux (200; 9.132265ms) May 30 01:10:18.181: INFO: (1) /api/v1/namespaces/proxy-6727/services/http:proxy-service-bqjl4:portname2/proxy/: bar (200; 9.576844ms) May 30 01:10:18.181: INFO: (1) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:460/proxy/: tls baz (200; 9.616737ms) May 30 01:10:18.181: INFO: (1) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:443/proxy/: test<... (200; 11.296717ms) May 30 01:10:18.183: INFO: (1) /api/v1/namespaces/proxy-6727/services/proxy-service-bqjl4:portname2/proxy/: bar (200; 11.389233ms) May 30 01:10:18.183: INFO: (1) /api/v1/namespaces/proxy-6727/services/http:proxy-service-bqjl4:portname1/proxy/: foo (200; 11.555717ms) May 30 01:10:18.183: INFO: (1) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:162/proxy/: bar (200; 11.602584ms) May 30 01:10:18.183: INFO: (1) /api/v1/namespaces/proxy-6727/services/proxy-service-bqjl4:portname1/proxy/: foo (200; 11.471153ms) May 30 01:10:18.183: INFO: (1) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:1080/proxy/: ... (200; 11.65271ms) May 30 01:10:18.186: INFO: (2) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:1080/proxy/: ... 
(200; 3.214487ms) May 30 01:10:18.187: INFO: (2) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:462/proxy/: tls qux (200; 4.063631ms) May 30 01:10:18.187: INFO: (2) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:460/proxy/: tls baz (200; 3.899466ms) May 30 01:10:18.187: INFO: (2) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:162/proxy/: bar (200; 4.01823ms) May 30 01:10:18.187: INFO: (2) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:160/proxy/: foo (200; 4.367161ms) May 30 01:10:18.187: INFO: (2) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:443/proxy/: test<... (200; 5.192206ms) May 30 01:10:18.188: INFO: (2) /api/v1/namespaces/proxy-6727/services/proxy-service-bqjl4:portname1/proxy/: foo (200; 5.37976ms) May 30 01:10:18.188: INFO: (2) /api/v1/namespaces/proxy-6727/services/http:proxy-service-bqjl4:portname2/proxy/: bar (200; 5.471724ms) May 30 01:10:18.188: INFO: (2) /api/v1/namespaces/proxy-6727/services/https:proxy-service-bqjl4:tlsportname1/proxy/: tls baz (200; 5.470223ms) May 30 01:10:18.188: INFO: (2) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q/proxy/: test (200; 5.471167ms) May 30 01:10:18.189: INFO: (2) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:162/proxy/: bar (200; 6.348289ms) May 30 01:10:18.193: INFO: (3) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:1080/proxy/: ... (200; 4.142771ms) May 30 01:10:18.194: INFO: (3) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:160/proxy/: foo (200; 4.573566ms) May 30 01:10:18.194: INFO: (3) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:162/proxy/: bar (200; 4.588893ms) May 30 01:10:18.194: INFO: (3) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:162/proxy/: bar (200; 4.627311ms) May 30 01:10:18.194: INFO: (3) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q/proxy/: test (200; 4.606461ms) May 30 01:10:18.194: INFO: (3) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:160/proxy/: foo (200; 4.884923ms) May 30 01:10:18.195: INFO: (3) /api/v1/namespaces/proxy-6727/services/http:proxy-service-bqjl4:portname1/proxy/: foo (200; 5.426886ms) May 30 01:10:18.195: INFO: (3) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:460/proxy/: tls baz (200; 5.524571ms) May 30 01:10:18.195: INFO: (3) /api/v1/namespaces/proxy-6727/services/http:proxy-service-bqjl4:portname2/proxy/: bar (200; 5.685632ms) May 30 01:10:18.195: INFO: (3) /api/v1/namespaces/proxy-6727/services/https:proxy-service-bqjl4:tlsportname1/proxy/: tls baz (200; 6.073517ms) May 30 01:10:18.195: INFO: (3) /api/v1/namespaces/proxy-6727/services/proxy-service-bqjl4:portname1/proxy/: foo (200; 6.0813ms) May 30 01:10:18.195: INFO: (3) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:443/proxy/: test<... 
(200; 6.196442ms) May 30 01:10:18.195: INFO: (3) /api/v1/namespaces/proxy-6727/services/proxy-service-bqjl4:portname2/proxy/: bar (200; 6.164363ms) May 30 01:10:18.195: INFO: (3) /api/v1/namespaces/proxy-6727/services/https:proxy-service-bqjl4:tlsportname2/proxy/: tls qux (200; 6.194544ms) May 30 01:10:18.196: INFO: (3) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:462/proxy/: tls qux (200; 6.215587ms) May 30 01:10:18.202: INFO: (4) /api/v1/namespaces/proxy-6727/services/http:proxy-service-bqjl4:portname2/proxy/: bar (200; 6.081794ms) May 30 01:10:18.202: INFO: (4) /api/v1/namespaces/proxy-6727/services/http:proxy-service-bqjl4:portname1/proxy/: foo (200; 6.016091ms) May 30 01:10:18.202: INFO: (4) /api/v1/namespaces/proxy-6727/services/https:proxy-service-bqjl4:tlsportname1/proxy/: tls baz (200; 6.30219ms) May 30 01:10:18.202: INFO: (4) /api/v1/namespaces/proxy-6727/services/https:proxy-service-bqjl4:tlsportname2/proxy/: tls qux (200; 6.416999ms) May 30 01:10:18.202: INFO: (4) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:443/proxy/: test (200; 6.892793ms) May 30 01:10:18.203: INFO: (4) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:160/proxy/: foo (200; 6.9727ms) May 30 01:10:18.203: INFO: (4) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:1080/proxy/: test<... (200; 7.033669ms) May 30 01:10:18.203: INFO: (4) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:1080/proxy/: ... (200; 6.987635ms) May 30 01:10:18.203: INFO: (4) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:462/proxy/: tls qux (200; 7.203408ms) May 30 01:10:18.203: INFO: (4) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:162/proxy/: bar (200; 7.184037ms) May 30 01:10:18.203: INFO: (4) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:160/proxy/: foo (200; 7.313997ms) May 30 01:10:18.203: INFO: (4) /api/v1/namespaces/proxy-6727/services/proxy-service-bqjl4:portname2/proxy/: bar (200; 7.266756ms) May 30 01:10:18.206: INFO: (4) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:460/proxy/: tls baz (200; 10.251389ms) May 30 01:10:18.212: INFO: (5) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:443/proxy/: test<... 
(200; 6.313581ms) May 30 01:10:18.212: INFO: (5) /api/v1/namespaces/proxy-6727/services/http:proxy-service-bqjl4:portname1/proxy/: foo (200; 6.225729ms) May 30 01:10:18.212: INFO: (5) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q/proxy/: test (200; 6.248048ms) May 30 01:10:18.212: INFO: (5) /api/v1/namespaces/proxy-6727/services/proxy-service-bqjl4:portname2/proxy/: bar (200; 6.320706ms) May 30 01:10:18.212: INFO: (5) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:462/proxy/: tls qux (200; 6.297557ms) May 30 01:10:18.212: INFO: (5) /api/v1/namespaces/proxy-6727/services/https:proxy-service-bqjl4:tlsportname2/proxy/: tls qux (200; 6.279892ms) May 30 01:10:18.212: INFO: (5) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:162/proxy/: bar (200; 6.416593ms) May 30 01:10:18.212: INFO: (5) /api/v1/namespaces/proxy-6727/services/https:proxy-service-bqjl4:tlsportname1/proxy/: tls baz (200; 6.364802ms) May 30 01:10:18.212: INFO: (5) /api/v1/namespaces/proxy-6727/services/proxy-service-bqjl4:portname1/proxy/: foo (200; 6.359607ms) May 30 01:10:18.212: INFO: (5) /api/v1/namespaces/proxy-6727/services/http:proxy-service-bqjl4:portname2/proxy/: bar (200; 6.337589ms) May 30 01:10:18.212: INFO: (5) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:460/proxy/: tls baz (200; 6.409455ms) May 30 01:10:18.213: INFO: (5) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:1080/proxy/: ... (200; 6.400388ms) May 30 01:10:18.217: INFO: (6) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:1080/proxy/: ... (200; 4.309219ms) May 30 01:10:18.217: INFO: (6) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:460/proxy/: tls baz (200; 4.34786ms) May 30 01:10:18.217: INFO: (6) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:160/proxy/: foo (200; 4.740229ms) May 30 01:10:18.218: INFO: (6) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:443/proxy/: test (200; 4.986649ms) May 30 01:10:18.218: INFO: (6) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:1080/proxy/: test<... 
(200; 4.911898ms) May 30 01:10:18.218: INFO: (6) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:162/proxy/: bar (200; 4.980339ms) May 30 01:10:18.218: INFO: (6) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:462/proxy/: tls qux (200; 5.151797ms) May 30 01:10:18.218: INFO: (6) /api/v1/namespaces/proxy-6727/services/https:proxy-service-bqjl4:tlsportname1/proxy/: tls baz (200; 5.249186ms) May 30 01:10:18.218: INFO: (6) /api/v1/namespaces/proxy-6727/services/proxy-service-bqjl4:portname2/proxy/: bar (200; 5.529316ms) May 30 01:10:18.218: INFO: (6) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:162/proxy/: bar (200; 5.536875ms) May 30 01:10:18.218: INFO: (6) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:160/proxy/: foo (200; 5.492894ms) May 30 01:10:18.218: INFO: (6) /api/v1/namespaces/proxy-6727/services/proxy-service-bqjl4:portname1/proxy/: foo (200; 5.667082ms) May 30 01:10:18.218: INFO: (6) /api/v1/namespaces/proxy-6727/services/https:proxy-service-bqjl4:tlsportname2/proxy/: tls qux (200; 5.685953ms) May 30 01:10:18.218: INFO: (6) /api/v1/namespaces/proxy-6727/services/http:proxy-service-bqjl4:portname2/proxy/: bar (200; 5.708215ms) May 30 01:10:18.219: INFO: (6) /api/v1/namespaces/proxy-6727/services/http:proxy-service-bqjl4:portname1/proxy/: foo (200; 6.166482ms) May 30 01:10:18.224: INFO: (7) /api/v1/namespaces/proxy-6727/services/https:proxy-service-bqjl4:tlsportname2/proxy/: tls qux (200; 4.853291ms) May 30 01:10:18.224: INFO: (7) /api/v1/namespaces/proxy-6727/services/proxy-service-bqjl4:portname1/proxy/: foo (200; 5.038646ms) May 30 01:10:18.224: INFO: (7) /api/v1/namespaces/proxy-6727/services/proxy-service-bqjl4:portname2/proxy/: bar (200; 5.287066ms) May 30 01:10:18.224: INFO: (7) /api/v1/namespaces/proxy-6727/services/http:proxy-service-bqjl4:portname2/proxy/: bar (200; 5.264295ms) May 30 01:10:18.225: INFO: (7) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:162/proxy/: bar (200; 5.745843ms) May 30 01:10:18.225: INFO: (7) /api/v1/namespaces/proxy-6727/services/http:proxy-service-bqjl4:portname1/proxy/: foo (200; 5.752107ms) May 30 01:10:18.225: INFO: (7) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q/proxy/: test (200; 5.724377ms) May 30 01:10:18.225: INFO: (7) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:1080/proxy/: test<... (200; 5.673385ms) May 30 01:10:18.225: INFO: (7) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:160/proxy/: foo (200; 5.750192ms) May 30 01:10:18.225: INFO: (7) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:1080/proxy/: ... (200; 5.711377ms) May 30 01:10:18.225: INFO: (7) /api/v1/namespaces/proxy-6727/services/https:proxy-service-bqjl4:tlsportname1/proxy/: tls baz (200; 5.697485ms) May 30 01:10:18.225: INFO: (7) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:160/proxy/: foo (200; 5.943561ms) May 30 01:10:18.225: INFO: (7) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:162/proxy/: bar (200; 5.97248ms) May 30 01:10:18.225: INFO: (7) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:443/proxy/: test (200; 2.61123ms) May 30 01:10:18.228: INFO: (8) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:1080/proxy/: test<... 
(200; 2.553947ms) May 30 01:10:18.229: INFO: (8) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:162/proxy/: bar (200; 4.197403ms) May 30 01:10:18.229: INFO: (8) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:162/proxy/: bar (200; 4.193505ms) May 30 01:10:18.231: INFO: (8) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:443/proxy/: ... (200; 5.411191ms) May 30 01:10:18.231: INFO: (8) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:462/proxy/: tls qux (200; 5.475922ms) May 30 01:10:18.231: INFO: (8) /api/v1/namespaces/proxy-6727/services/http:proxy-service-bqjl4:portname2/proxy/: bar (200; 5.922384ms) May 30 01:10:18.231: INFO: (8) /api/v1/namespaces/proxy-6727/services/https:proxy-service-bqjl4:tlsportname2/proxy/: tls qux (200; 5.941367ms) May 30 01:10:18.231: INFO: (8) /api/v1/namespaces/proxy-6727/services/proxy-service-bqjl4:portname1/proxy/: foo (200; 6.005461ms) May 30 01:10:18.231: INFO: (8) /api/v1/namespaces/proxy-6727/services/https:proxy-service-bqjl4:tlsportname1/proxy/: tls baz (200; 5.945001ms) May 30 01:10:18.231: INFO: (8) /api/v1/namespaces/proxy-6727/services/proxy-service-bqjl4:portname2/proxy/: bar (200; 5.943933ms) May 30 01:10:18.231: INFO: (8) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:160/proxy/: foo (200; 6.035368ms) May 30 01:10:18.231: INFO: (8) /api/v1/namespaces/proxy-6727/services/http:proxy-service-bqjl4:portname1/proxy/: foo (200; 5.995176ms) May 30 01:10:18.235: INFO: (9) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:160/proxy/: foo (200; 3.672259ms) May 30 01:10:18.235: INFO: (9) /api/v1/namespaces/proxy-6727/services/proxy-service-bqjl4:portname1/proxy/: foo (200; 3.901858ms) May 30 01:10:18.235: INFO: (9) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q/proxy/: test (200; 4.038255ms) May 30 01:10:18.235: INFO: (9) /api/v1/namespaces/proxy-6727/services/https:proxy-service-bqjl4:tlsportname1/proxy/: tls baz (200; 4.137059ms) May 30 01:10:18.235: INFO: (9) /api/v1/namespaces/proxy-6727/services/http:proxy-service-bqjl4:portname2/proxy/: bar (200; 4.054097ms) May 30 01:10:18.236: INFO: (9) /api/v1/namespaces/proxy-6727/services/https:proxy-service-bqjl4:tlsportname2/proxy/: tls qux (200; 4.177957ms) May 30 01:10:18.236: INFO: (9) /api/v1/namespaces/proxy-6727/services/proxy-service-bqjl4:portname2/proxy/: bar (200; 4.224807ms) May 30 01:10:18.236: INFO: (9) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:1080/proxy/: ... (200; 4.250786ms) May 30 01:10:18.236: INFO: (9) /api/v1/namespaces/proxy-6727/services/http:proxy-service-bqjl4:portname1/proxy/: foo (200; 4.308065ms) May 30 01:10:18.236: INFO: (9) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:443/proxy/: test<... (200; 5.227285ms) May 30 01:10:18.237: INFO: (9) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:460/proxy/: tls baz (200; 5.215284ms) May 30 01:10:18.240: INFO: (10) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:160/proxy/: foo (200; 3.345462ms) May 30 01:10:18.240: INFO: (10) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:1080/proxy/: test<... (200; 3.427896ms) May 30 01:10:18.240: INFO: (10) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:162/proxy/: bar (200; 3.48616ms) May 30 01:10:18.240: INFO: (10) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:443/proxy/: ... 
(200; 4.762255ms) May 30 01:10:18.242: INFO: (10) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:162/proxy/: bar (200; 4.739865ms) May 30 01:10:18.242: INFO: (10) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q/proxy/: test (200; 4.703738ms) May 30 01:10:18.242: INFO: (10) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:462/proxy/: tls qux (200; 4.945845ms) May 30 01:10:18.242: INFO: (10) /api/v1/namespaces/proxy-6727/services/https:proxy-service-bqjl4:tlsportname2/proxy/: tls qux (200; 4.805639ms) May 30 01:10:18.242: INFO: (10) /api/v1/namespaces/proxy-6727/services/https:proxy-service-bqjl4:tlsportname1/proxy/: tls baz (200; 4.78812ms) May 30 01:10:18.244: INFO: (11) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:460/proxy/: tls baz (200; 2.823931ms) May 30 01:10:18.245: INFO: (11) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:160/proxy/: foo (200; 3.280061ms) May 30 01:10:18.245: INFO: (11) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:160/proxy/: foo (200; 3.271542ms) May 30 01:10:18.246: INFO: (11) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:162/proxy/: bar (200; 3.457676ms) May 30 01:10:18.246: INFO: (11) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:1080/proxy/: ... (200; 3.209ms) May 30 01:10:18.246: INFO: (11) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:443/proxy/: test<... (200; 3.674794ms) May 30 01:10:18.247: INFO: (11) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q/proxy/: test (200; 4.163307ms) May 30 01:10:18.247: INFO: (11) /api/v1/namespaces/proxy-6727/services/https:proxy-service-bqjl4:tlsportname1/proxy/: tls baz (200; 5.133384ms) May 30 01:10:18.247: INFO: (11) /api/v1/namespaces/proxy-6727/services/http:proxy-service-bqjl4:portname2/proxy/: bar (200; 5.276192ms) May 30 01:10:18.247: INFO: (11) /api/v1/namespaces/proxy-6727/services/proxy-service-bqjl4:portname1/proxy/: foo (200; 4.893891ms) May 30 01:10:18.247: INFO: (11) /api/v1/namespaces/proxy-6727/services/https:proxy-service-bqjl4:tlsportname2/proxy/: tls qux (200; 5.09818ms) May 30 01:10:18.248: INFO: (11) /api/v1/namespaces/proxy-6727/services/http:proxy-service-bqjl4:portname1/proxy/: foo (200; 5.651952ms) May 30 01:10:18.248: INFO: (11) /api/v1/namespaces/proxy-6727/services/proxy-service-bqjl4:portname2/proxy/: bar (200; 5.7986ms) May 30 01:10:18.252: INFO: (12) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:162/proxy/: bar (200; 3.18091ms) May 30 01:10:18.252: INFO: (12) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:1080/proxy/: ... (200; 3.286084ms) May 30 01:10:18.252: INFO: (12) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:1080/proxy/: test<... (200; 3.608218ms) May 30 01:10:18.252: INFO: (12) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:160/proxy/: foo (200; 3.664584ms) May 30 01:10:18.252: INFO: (12) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:462/proxy/: tls qux (200; 3.802736ms) May 30 01:10:18.252: INFO: (12) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q/proxy/: test (200; 3.901203ms) May 30 01:10:18.252: INFO: (12) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:460/proxy/: tls baz (200; 3.904159ms) May 30 01:10:18.252: INFO: (12) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:443/proxy/: test<... 
(200; 2.514264ms) May 30 01:10:18.256: INFO: (13) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:1080/proxy/: ... (200; 2.554146ms) May 30 01:10:18.263: INFO: (13) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:162/proxy/: bar (200; 9.469534ms) May 30 01:10:18.263: INFO: (13) /api/v1/namespaces/proxy-6727/services/http:proxy-service-bqjl4:portname1/proxy/: foo (200; 9.740666ms) May 30 01:10:18.263: INFO: (13) /api/v1/namespaces/proxy-6727/services/proxy-service-bqjl4:portname1/proxy/: foo (200; 9.907428ms) May 30 01:10:18.263: INFO: (13) /api/v1/namespaces/proxy-6727/services/https:proxy-service-bqjl4:tlsportname2/proxy/: tls qux (200; 9.934753ms) May 30 01:10:18.263: INFO: (13) /api/v1/namespaces/proxy-6727/services/http:proxy-service-bqjl4:portname2/proxy/: bar (200; 10.004855ms) May 30 01:10:18.263: INFO: (13) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:462/proxy/: tls qux (200; 10.0225ms) May 30 01:10:18.263: INFO: (13) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:160/proxy/: foo (200; 10.02693ms) May 30 01:10:18.263: INFO: (13) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q/proxy/: test (200; 10.106194ms) May 30 01:10:18.263: INFO: (13) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:160/proxy/: foo (200; 10.129597ms) May 30 01:10:18.263: INFO: (13) /api/v1/namespaces/proxy-6727/services/proxy-service-bqjl4:portname2/proxy/: bar (200; 10.154128ms) May 30 01:10:18.263: INFO: (13) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:443/proxy/: test<... (200; 5.460752ms) May 30 01:10:18.269: INFO: (14) /api/v1/namespaces/proxy-6727/services/proxy-service-bqjl4:portname1/proxy/: foo (200; 5.374934ms) May 30 01:10:18.269: INFO: (14) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:1080/proxy/: ... (200; 5.62223ms) May 30 01:10:18.269: INFO: (14) /api/v1/namespaces/proxy-6727/services/http:proxy-service-bqjl4:portname2/proxy/: bar (200; 5.496879ms) May 30 01:10:18.269: INFO: (14) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:443/proxy/: test (200; 5.598952ms) May 30 01:10:18.269: INFO: (14) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:162/proxy/: bar (200; 5.726972ms) May 30 01:10:18.269: INFO: (14) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:162/proxy/: bar (200; 5.694114ms) May 30 01:10:18.271: INFO: (15) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:162/proxy/: bar (200; 1.882756ms) May 30 01:10:18.274: INFO: (15) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:1080/proxy/: test<... (200; 4.334078ms) May 30 01:10:18.274: INFO: (15) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:1080/proxy/: ... 
(200; 4.234923ms) May 30 01:10:18.274: INFO: (15) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:162/proxy/: bar (200; 4.337362ms) May 30 01:10:18.274: INFO: (15) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q/proxy/: test (200; 4.284023ms) May 30 01:10:18.274: INFO: (15) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:160/proxy/: foo (200; 4.3626ms) May 30 01:10:18.274: INFO: (15) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:460/proxy/: tls baz (200; 4.44577ms) May 30 01:10:18.274: INFO: (15) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:462/proxy/: tls qux (200; 4.519935ms) May 30 01:10:18.274: INFO: (15) /api/v1/namespaces/proxy-6727/services/https:proxy-service-bqjl4:tlsportname1/proxy/: tls baz (200; 4.711911ms) May 30 01:10:18.274: INFO: (15) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:160/proxy/: foo (200; 4.632794ms) May 30 01:10:18.274: INFO: (15) /api/v1/namespaces/proxy-6727/services/http:proxy-service-bqjl4:portname1/proxy/: foo (200; 4.723703ms) May 30 01:10:18.274: INFO: (15) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:443/proxy/: ... (200; 3.234908ms) May 30 01:10:18.278: INFO: (16) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:460/proxy/: tls baz (200; 3.589162ms) May 30 01:10:18.278: INFO: (16) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:462/proxy/: tls qux (200; 3.849774ms) May 30 01:10:18.279: INFO: (16) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:1080/proxy/: test<... (200; 4.017711ms) May 30 01:10:18.279: INFO: (16) /api/v1/namespaces/proxy-6727/services/http:proxy-service-bqjl4:portname2/proxy/: bar (200; 3.98932ms) May 30 01:10:18.279: INFO: (16) /api/v1/namespaces/proxy-6727/services/proxy-service-bqjl4:portname1/proxy/: foo (200; 4.109567ms) May 30 01:10:18.279: INFO: (16) /api/v1/namespaces/proxy-6727/services/proxy-service-bqjl4:portname2/proxy/: bar (200; 4.052489ms) May 30 01:10:18.279: INFO: (16) /api/v1/namespaces/proxy-6727/services/https:proxy-service-bqjl4:tlsportname1/proxy/: tls baz (200; 4.189794ms) May 30 01:10:18.279: INFO: (16) /api/v1/namespaces/proxy-6727/services/http:proxy-service-bqjl4:portname1/proxy/: foo (200; 4.477356ms) May 30 01:10:18.279: INFO: (16) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:160/proxy/: foo (200; 4.589525ms) May 30 01:10:18.279: INFO: (16) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:443/proxy/: test (200; 4.517923ms) May 30 01:10:18.279: INFO: (16) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:162/proxy/: bar (200; 4.693932ms) May 30 01:10:18.279: INFO: (16) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:160/proxy/: foo (200; 4.651518ms) May 30 01:10:18.279: INFO: (16) /api/v1/namespaces/proxy-6727/services/https:proxy-service-bqjl4:tlsportname2/proxy/: tls qux (200; 4.84653ms) May 30 01:10:18.283: INFO: (17) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:160/proxy/: foo (200; 3.485321ms) May 30 01:10:18.283: INFO: (17) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:462/proxy/: tls qux (200; 3.863988ms) May 30 01:10:18.283: INFO: (17) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:1080/proxy/: test<... 
(200; 3.908975ms) May 30 01:10:18.283: INFO: (17) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q/proxy/: test (200; 3.908103ms) May 30 01:10:18.283: INFO: (17) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:160/proxy/: foo (200; 3.917292ms) May 30 01:10:18.283: INFO: (17) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:1080/proxy/: ... (200; 3.901019ms) May 30 01:10:18.283: INFO: (17) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:162/proxy/: bar (200; 4.000759ms) May 30 01:10:18.283: INFO: (17) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:162/proxy/: bar (200; 3.95056ms) May 30 01:10:18.283: INFO: (17) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:443/proxy/: ... (200; 2.687939ms) May 30 01:10:18.288: INFO: (18) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q/proxy/: test (200; 2.718758ms) May 30 01:10:18.288: INFO: (18) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:160/proxy/: foo (200; 3.219345ms) May 30 01:10:18.288: INFO: (18) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:1080/proxy/: test<... (200; 3.078743ms) May 30 01:10:18.288: INFO: (18) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:460/proxy/: tls baz (200; 2.979307ms) May 30 01:10:18.288: INFO: (18) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:162/proxy/: bar (200; 3.53341ms) May 30 01:10:18.288: INFO: (18) /api/v1/namespaces/proxy-6727/services/http:proxy-service-bqjl4:portname2/proxy/: bar (200; 3.781861ms) May 30 01:10:18.288: INFO: (18) /api/v1/namespaces/proxy-6727/services/proxy-service-bqjl4:portname1/proxy/: foo (200; 3.612172ms) May 30 01:10:18.289: INFO: (18) /api/v1/namespaces/proxy-6727/services/https:proxy-service-bqjl4:tlsportname1/proxy/: tls baz (200; 3.755857ms) May 30 01:10:18.289: INFO: (18) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:162/proxy/: bar (200; 4.260349ms) May 30 01:10:18.289: INFO: (18) /api/v1/namespaces/proxy-6727/services/proxy-service-bqjl4:portname2/proxy/: bar (200; 4.390919ms) May 30 01:10:18.289: INFO: (18) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:462/proxy/: tls qux (200; 4.79452ms) May 30 01:10:18.289: INFO: (18) /api/v1/namespaces/proxy-6727/services/https:proxy-service-bqjl4:tlsportname2/proxy/: tls qux (200; 4.710834ms) May 30 01:10:18.290: INFO: (18) /api/v1/namespaces/proxy-6727/services/http:proxy-service-bqjl4:portname1/proxy/: foo (200; 4.938222ms) May 30 01:10:18.290: INFO: (18) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:160/proxy/: foo (200; 4.704991ms) May 30 01:10:18.290: INFO: (18) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:443/proxy/: ... (200; 3.624435ms) May 30 01:10:18.293: INFO: (19) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q/proxy/: test (200; 3.588256ms) May 30 01:10:18.293: INFO: (19) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:443/proxy/: test<... 
(200; 3.616854ms) May 30 01:10:18.293: INFO: (19) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:162/proxy/: bar (200; 3.710967ms) May 30 01:10:18.293: INFO: (19) /api/v1/namespaces/proxy-6727/pods/proxy-service-bqjl4-7945q:160/proxy/: foo (200; 3.629656ms) May 30 01:10:18.294: INFO: (19) /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:162/proxy/: bar (200; 4.029158ms) May 30 01:10:18.294: INFO: (19) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:462/proxy/: tls qux (200; 4.319352ms) May 30 01:10:18.294: INFO: (19) /api/v1/namespaces/proxy-6727/pods/https:proxy-service-bqjl4-7945q:460/proxy/: tls baz (200; 4.416804ms) May 30 01:10:18.295: INFO: (19) /api/v1/namespaces/proxy-6727/services/proxy-service-bqjl4:portname2/proxy/: bar (200; 5.154757ms) May 30 01:10:18.295: INFO: (19) /api/v1/namespaces/proxy-6727/services/https:proxy-service-bqjl4:tlsportname1/proxy/: tls baz (200; 5.291508ms) May 30 01:10:18.295: INFO: (19) /api/v1/namespaces/proxy-6727/services/http:proxy-service-bqjl4:portname1/proxy/: foo (200; 5.403096ms) May 30 01:10:18.295: INFO: (19) /api/v1/namespaces/proxy-6727/services/https:proxy-service-bqjl4:tlsportname2/proxy/: tls qux (200; 5.37615ms) May 30 01:10:18.295: INFO: (19) /api/v1/namespaces/proxy-6727/services/http:proxy-service-bqjl4:portname2/proxy/: bar (200; 5.368472ms) May 30 01:10:18.295: INFO: (19) /api/v1/namespaces/proxy-6727/services/proxy-service-bqjl4:portname1/proxy/: foo (200; 5.521029ms)
STEP: deleting ReplicationController proxy-service-bqjl4 in namespace proxy-6727, will wait for the garbage collector to delete the pods
May 30 01:10:18.354: INFO: Deleting ReplicationController proxy-service-bqjl4 took: 6.661366ms
May 30 01:10:18.654: INFO: Terminating ReplicationController proxy-service-bqjl4 pods took: 300.260189ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 30 01:10:24.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-6727" for this suite.
• [SLOW TEST:18.169 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59
    should proxy through a service and a pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":288,"completed":288,"skipped":4723,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
May 30 01:10:24.966: INFO: Running AfterSuite actions on all nodes
May 30 01:10:24.966: INFO: Running AfterSuite actions on node 1
May 30 01:10:24.966: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":288,"completed":288,"skipped":4807,"failed":0}

Ran 288 of 5095 Specs in 5517.233 seconds
SUCCESS! -- 288 Passed | 0 Failed | 0 Pending | 4807 Skipped
PASS
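For reference, the [sig-network] Proxy spec above drives the apiserver's proxy subresource: paths of the form /api/v1/namespaces/{ns}/pods/{scheme}:{name}:{port}/proxy/{path} (and the services equivalent, addressed by port name) are forwarded by the apiserver to the pod or service, which is why each attempt logs an HTTP status and latency. A minimal client-go sketch of one such attempt follows; it reuses the generated namespace and pod name from this particular run purely for illustration, and any real use would substitute live names.

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// GET /api/v1/namespaces/proxy-6727/pods/http:proxy-service-bqjl4-7945q:160/proxy/
	// i.e. proxy to port 160 of the echo pod over plain HTTP; per the log above
	// this endpoint answers "foo" with status 200.
	body, err := client.CoreV1().RESTClient().Get().
		Namespace("proxy-6727").
		Resource("pods").
		SubResource("proxy").
		Name("http:proxy-service-bqjl4-7945q:160").
		Do(context.TODO()).
		Raw()
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", body)
}

The service variant swaps Resource("pods") for Resource("services") and addresses a named port, e.g. Name("https:proxy-service-bqjl4:tlsportname1"), which is the "tls baz" endpoint in the log.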